Description
With the availability of affordable range imaging sensors, which provide real-time three-dimensional information of the captured scene, new types of computer vision applications arise. Such applications range from new human-computer interfaces (known as natural user interfaces) and highly detailed reconstructions of complex scenes to autonomous driving and augmented reality. These depth sensors are mostly based on two efficient technologies: the structured-light principle and the time-of-flight (ToF) principle.
This dissertation comprises four contributions. First, an efficient approach is proposed to compensate for motion artifacts in ToF raw images. Second, an online three-dimensional reconstruction application is investigated, in which the robustness of the camera tracker is improved by segmenting moving objects. The third contribution lies in the robust handling of noise in raw data throughout the full reconstruction pipeline: a new type of information fusion is proposed that takes into account the anisotropic nature of the noise present in depth data, leading to faster convergence toward high-quality reconstructions. Finally, a new method is presented that uses surface curvature information to robustly reconstruct the fine structures of small objects while limiting the total camera drift error.
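The noise-aware fusion idea of the third contribution can be illustrated with a minimal sketch. The noise model, its constants, and the function names below are hypothetical illustrations, not the dissertation's actual parameters: each depth sample is fused into a running signed-distance estimate with a weight inversely proportional to its depth-dependent noise variance, so low-noise (near-range) measurements dominate the average.

```python
def noise_sigma(z, sigma0=0.0012, k=0.0019):
    # Hypothetical axial-noise model: the standard deviation of a
    # depth measurement grows quadratically with depth z (in meters),
    # as is typical for consumer depth sensors.
    return sigma0 + k * (z - 0.4) ** 2

def fuse(tsdf, weight, sample_sdf, z):
    # Weight the new signed-distance sample by the inverse of its
    # noise variance, then update the running weighted average.
    w_new = 1.0 / noise_sigma(z) ** 2
    fused = (weight * tsdf + w_new * sample_sdf) / (weight + w_new)
    return fused, weight + w_new

# Usage: fuse a near (reliable) and a far (noisy) observation of the
# same voxel; the result stays close to the near measurement.
tsdf, w = fuse(0.0, 0.0, 0.01, z=0.5)
tsdf, w = fuse(tsdf, w, -0.01, z=2.0)
```

Treating the noise as anisotropic means the uncertainty is modeled along the viewing ray rather than as a single isotropic scalar; the inverse-variance weighting above is the one-dimensional core of that idea.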