3-D Rigid Body Tracking Using Vision and Depth Sensors

2013-10-01
Gedik, O. Serdar
Alatan, Abdullah Aydın
In robotics and augmented reality (AR) applications, model-based 3-D tracking of rigid objects is generally required. Accurate pose estimates increase reliability in robotic applications and decrease jitter in AR scenarios. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages, whereas trackers relying on pure depth sensors are not suitable for AR applications. In this paper, an automated 3-D tracking algorithm based on the fusion of vision and depth sensors via an extended Kalman filter (EKF) is proposed. A novel measurement-tracking scheme, based on estimating optical flow from the intensity and shape index map data of the 3-D point cloud, significantly increases both 2-D and 3-D tracking performance. The proposed method requires neither manual pose initialization nor offline training, while enabling highly accurate 3-D tracking. Its accuracy is tested against a number of conventional techniques, and superior performance is clearly observed both objectively, via error metrics, and subjectively, on the rendered scenes.
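The fusion described in the abstract combines 2-D image measurements and 3-D depth measurements through an extended Kalman filter. A minimal predict/update sketch of such a filter is given below; the state layout, motion model, and noise values are illustrative assumptions for a 1-D demo, not the paper's actual formulation (which tracks full 6-DOF rigid-body pose).

```python
import numpy as np

def ekf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a (linearized) Kalman filter.

    x : state estimate (n,)          P : state covariance (n, n)
    z : measurement (m,)             F : state transition Jacobian (n, n)
    H : measurement Jacobian (m, n)  Q : process noise covariance (n, n)
    R : measurement noise covariance (m, m)
    """
    # Predict: propagate state and covariance through the motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the measurement.
    y = z - H @ x_pred                 # innovation
    S = H @ P_pred @ H.T + R           # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Demo (hypothetical numbers): track a static 1-D position (truth = 5.0)
# with a constant-velocity model, state = [position, velocity].
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[0.1]])
x, P = np.zeros(2), np.eye(2)
for _ in range(20):
    x, P = ekf_step(x, P, np.array([5.0]), F, H, Q, R)
```

In the paper's setting, `H` would instead linearize the camera projection (for vision features) and the depth-sensor geometry (for 3-D points) around the current pose estimate, so both modalities correct the same state vector.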
IEEE TRANSACTIONS ON CYBERNETICS

Suggestions

Dynamic modeling and parameter estimation for traction, rolling, and lateral wheel forces to enhance mobile robot trajectory tracking
BAYAR, Gokhan; Koku, Ahmet Buğra; Konukseven, Erhan İlhan (Cambridge University Press (CUP), 2015-12-01)
Studying wheel-ground interaction during motion has the potential to increase the performance of localization, navigation, and trajectory tracking control of a mobile robot. In this paper, a differential mobile robot is modeled such that traction, rolling, and lateral wheel forces are included in the overall system dynamics. Lateral wheel forces are included in the mathematical model together with traction and rolling forces. A least-squares parameter estimation process is proposed to estimate the ...
Fusing 2D and 3D Clues for 3D Tracking Using Visual and Range Data
Gedik, O. Serdar; Alatan, Abdullah Aydın (2013-07-12)
3D tracking of rigid objects is required in many applications, such as robotics or augmented reality (AR). The availability of accurate pose estimates increases reliability in robotic applications and decreases jitter in AR scenarios. Pure vision-based 3D trackers require either manual initializations or offline training stages, whereas trackers relying on pure depth sensors are not suitable for AR applications. In this paper, an automated 3D tracking algorithm, which is based on fusion of vision and depth ...
Active stereo vision: depth perception for navigation, environmental map formation and object recognition
Ulusoy, İlkay; Halıcı, Uğur; Department of Electrical and Electronics Engineering (2003)
Stereo-vision-based navigation and mapping is used in very few mobile robotic applications, because dealing with stereo images is hard and very time consuming. Despite these problems, stereo vision remains one of the most important resources through which a mobile robot can know the world, because imaging provides much more information than most other sensors. Real robotic applications are very complicated because, besides the problem of finding how the robot should behave to complete the task at hand, t...
2D-3D feature association via projective transform invariants for model-based 3D pose estimation
Gedik, O. Serdar; Alatan, Abdullah Aydın (2012-01-26)
The three dimensional (3D) tracking of rigid objects is required in many applications, such as 3D television (3DTV) and augmented reality. Accurate and robust pose estimates enable improved structure reconstructions for 3DTV and reduce jitter in augmented reality scenarios. On the other hand, reliable 2D-3D feature association is one of the most crucial requirements for obtaining high quality 3D pose estimates. In this paper, a 2D-3D registration method, which is based on projective transform invariants, is...
COSMO: Contextualized scene modeling with Boltzmann Machines
Bozcan, Ilker; Kalkan, Sinan (Elsevier BV, 2019-03-01)
Scene modeling is crucial for robots that need to perceive, reason about, and manipulate the objects in their environments. In this paper, we adapt and extend Boltzmann Machines (BMs) for contextualized scene modeling. Although there are many models on the subject, ours is the first to bring together objects, relations, and affordances in a highly capable generative model. To this end, we introduce a hybrid version of BMs in which relations and affordances are incorporated with shared, tri-way connections...
Citation Formats
O. S. Gedik and A. A. Alatan, “3-D Rigid Body Tracking Using Vision and Depth Sensors,” IEEE TRANSACTIONS ON CYBERNETICS, pp. 1395–1405, 2013, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/39482.