abstract: |
  Recent improvements in resolution, accuracy, and cost have made depth cameras a very popular alternative for 3D reconstruction and navigation. Accurate depth camera calibration is therefore a relevant aspect of many 3D pipelines. We explore the limits of a practical depth camera calibration algorithm: how to accurately calibrate a noisy depth camera without a precise calibration object and without using brightness or depth discontinuities. We present an algorithm that uses an external color camera to overcome the difficulties presented by a depth camera, and we show that a joint calibration obtains the best results. We apply this algorithm to calibrating the Kinect.
  Depth cameras, however, are not suitable for many navigation applications (e.g., outdoors or at very large depths). We argue that a simultaneous localization and mapping (SLAM) system may use depth information when available, but the core algorithm should work with a monocular color camera. Many traditional SLAM algorithms, like PTAM, operate under the strict assumption that all detected features have a known depth. Problems thus arise with scenes or motions where features exhibit no parallax and no depth can be estimated. We present a formulation of keyframe-based SLAM that can use features with unknown depth to constrain the rotation of the camera and features with known depth to constrain both rotation and translation, thus effectively using all available features. We also release the code of our approach to the academic community.