Exploiting Depth Information from Tracked Feature Points in Dense Reconstruction for a Monocular Camera

Qirui Zhang


A conventional dense 3D reconstruction method using a monocular camera first estimates the 6DoF camera pose from the input video stream, and then uses the estimated poses to recover depth for a reference frame. In this paper we present a method that exploits not only the estimated camera poses but also the abundant depth information of the feature points tracked during the camera tracking process. Based on photo-consistency measurements accumulated in a cost volume, we use this depth information as supporting data to modify the observed cost data term, yielding a better initialization and keeping the cost compensated throughout the subsequent optimization.
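To make the data-term modification concrete, the following is a minimal sketch of one way the sparse depths of tracked feature points could be injected into a photo-consistency cost volume. The function name, the `(row, col, depth)` format of the tracked points, the Gaussian-shaped penalty, and the linear blending weight are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def inject_sparse_depth_prior(cost, depths, sparse_pts, sigma=0.1, weight=0.5):
    """Modify the observed cost data term near tracked feature points.

    cost:       (D, H, W) photo-consistency cost volume (lower is better).
    depths:     (D,) depth hypotheses, one per cost-volume slice.
    sparse_pts: iterable of (row, col, depth) tracked points (assumed format).

    At each tracked pixel, blend in a Gaussian-shaped penalty that is
    lowest at the depth hypothesis closest to the tracked depth, biasing
    the winner-take-all initialization toward the sparse measurement.
    """
    cost = cost.copy()
    for r, c, d in sparse_pts:
        # Penalty grows from 0 (at the tracked depth) toward 1 (far away).
        penalty = 1.0 - np.exp(-((depths - d) ** 2) / (2.0 * sigma**2))
        cost[:, r, c] = (1.0 - weight) * cost[:, r, c] + weight * penalty
    return cost

# Toy demonstration: a flat, ambiguous cost column becomes peaked at the
# hypothesis matching the tracked depth of 1.0 m.
depths = np.linspace(0.5, 2.0, 16)                 # hypotheses, 0.1 m apart
rng = np.random.default_rng(0)
cost = rng.uniform(0.4, 0.6, size=(16, 8, 8))      # near-uniform photo cost
tracked = [(3, 4, 1.0)]                            # one tracked feature point
cost_mod = inject_sparse_depth_prior(cost, depths, tracked,
                                     sigma=0.05, weight=0.8)
best_depth = depths[np.argmin(cost_mod[:, 3, 4])]  # winner-take-all at (3, 4)
```

Because the penalty only reshapes the existing cost column rather than replacing it, pixels without tracked points are untouched, and photo-consistency still contributes at the tracked pixels during later optimization.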

In preliminary experiments in both simulated and real environments, we compare our method with a baseline that uses a weighted smoothness prior as its regularization term. Our method achieves lower depth error while reducing a characteristic pattern of reconstruction defects.