3-D Rigid Body Tracking Using Vision and Depth Sensors
Date
2013-10-01
Author
Gedik, O. Serdar
Alatan, Abdullah Aydın
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats
199 views, 0 downloads
Abstract
Robotics and augmented reality (AR) applications generally require model-based 3-D tracking of rigid objects; accurate pose estimates increase reliability and decrease jitter. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages, while trackers relying on pure depth sensors are not suitable for AR applications. This paper proposes an automated 3-D tracking algorithm based on the fusion of vision and depth sensors via an extended Kalman filter (EKF). A novel measurement-tracking scheme, based on estimating optical flow from the intensity and shape index map data of the 3-D point cloud, significantly increases both 2-D and 3-D tracking performance. The proposed method requires neither manual pose initialization nor offline training, while enabling highly accurate 3-D tracking. Its accuracy is tested against a number of conventional techniques, and superior performance is clearly observed, both objectively via error metrics and subjectively in the rendered scenes.
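To make the fusion idea concrete, the sketch below is a minimal, illustrative take on filtering depth-derived 3-D measurements of image-tracked features through a Kalman filter under a constant-velocity motion model. It is not the authors' implementation: the paper estimates full 6-DoF rigid pose with an extended Kalman filter, whereas this sketch tracks only a 3-D position, which makes the filter linear; all names, parameters, and the synthetic data are assumptions.

```python
# Minimal sketch (assumed, not the paper's method): a Kalman filter that
# fuses 3-D point measurements, e.g. obtained by back-projecting
# optical-flow feature tracks into a registered depth map, under a
# constant-velocity motion model.
import numpy as np

class ConstantVelocityKF:
    def __init__(self, dt=1 / 30, meas_noise=1e-2, proc_noise=1e-3):
        self.x = np.zeros(6)             # state: [px, py, pz, vx, vy, vz]
        self.P = np.eye(6)               # state covariance
        self.F = np.eye(6)               # constant-velocity transition
        self.F[:3, 3:] = dt * np.eye(3)
        self.Q = proc_noise * np.eye(6)  # process noise
        self.R = meas_noise * np.eye(3)  # measurement noise (3-D point)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        # z: 3-D point measurement from the depth sensor at the location
        # of a feature tracked in the intensity image.
        y = z - self.H @ self.x                    # innovation
        S = self.H @ self.P @ self.H.T + self.R    # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P

# Synthetic demo: a point moving at constant velocity, observed with noise.
rng = np.random.default_rng(0)
kf = ConstantVelocityKF()
truth = np.array([0.0, 0.0, 1.0])
vel = np.array([0.01, 0.0, 0.005])
for _ in range(100):
    truth = truth + vel
    z = truth + 0.01 * rng.standard_normal(3)  # noisy depth-derived point
    kf.predict()
    kf.update(z)
print("estimated position:", kf.x[:3], "true:", truth)
```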
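The "shape index map" mentioned in the abstract refers to Koenderink's shape index, a per-point descriptor of local surface type computed from the principal curvatures of the point cloud. A minimal version follows, assuming the principal curvatures have already been estimated; the sign convention shown is one common choice, not necessarily the one used in the paper.

```python
import numpy as np

def shape_index(k1, k2):
    """Koenderink shape index from principal curvatures (k1 >= k2).

    Maps local surface type into [-1, 1] (spherical cup ... spherical cap),
    independent of curvature magnitude; arctan2 handles the umbilic case
    k1 == k2 without division by zero.
    """
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

# Example: a saddle (k1 = -k2) maps to 0, a spherical cap (k1 = k2 > 0) to 1.
print(shape_index(0.5, -0.5), shape_index(1.0, 1.0))  # -> 0.0 1.0
```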
Subject Keywords
Control and Systems Engineering, Human-Computer Interaction, Electrical and Electronic Engineering, Software, Information Systems, Computer Science Applications
URI
https://hdl.handle.net/11511/39482
Journal
IEEE TRANSACTIONS ON CYBERNETICS
DOI
https://doi.org/10.1109/tcyb.2013.2272735
Collections
Department of Electrical and Electronics Engineering, Article
Suggestions
Dynamic modeling and parameter estimation for traction, rolling, and lateral wheel forces to enhance mobile robot trajectory tracking
BAYAR, Gokhan; Koku, Ahmet Buğra; Konukseven, Erhan İlhan (Cambridge University Press (CUP), 2015-12-01)
Studying wheel-ground interaction during motion has the potential to increase the performance of localization, navigation, and trajectory-tracking control of a mobile robot. In this paper, a differential mobile robot is modeled in a way that traction, rolling, and lateral wheel forces are included in the overall system dynamics. Lateral wheel forces are included in the mathematical model together with traction and rolling forces. A least-squares parameter estimation process is proposed to estimate the ... (see the generic least-squares sketch after this list.)
Fusing 2D and 3D Clues for 3D Tracking Using Visual and Range Data
Gedik, O. Serdar; Alatan, Abdullah Aydın (2013-07-12)
3D tracking of rigid objects is required in many applications, such as robotics or augmented reality (AR). The availability of accurate pose estimates increases reliability in robotic applications and decreases jitter in AR scenarios. Pure vision-based 3D trackers require either manual initializations or offline training stages, whereas trackers relying on pure depth sensors are not suitable for AR applications. In this paper, an automated 3D tracking algorithm, which is based on fusion of vision and depth ...
Active stereo vision: depth perception for navigation, environmental map formation and object recognition
Ulusoy, İlkay; Halıcı, Uğur; Department of Electrical and Electronics Engineering (2003)
Stereo-vision-based navigation and mapping is used in very few mobile robotic applications, because dealing with stereo images is very hard and very time consuming. Despite all the problems, stereo vision remains one of the most important sources of knowledge about the world for a mobile robot, because imaging provides much more information than most other sensors. Real robotic applications are very complicated because besides the problems of finding how the robot should behave to complete the task at hand, t...
2D-3D feature association via projective transform invariants for model-based 3D pose estimation
Gedik, O. Serdar; Alatan, Abdullah Aydın (2012-01-26)
The three-dimensional (3D) tracking of rigid objects is required in many applications, such as 3D television (3DTV) and augmented reality. Accurate and robust pose estimates enable improved structure reconstructions for 3DTV and reduce jitter in augmented reality scenarios. On the other hand, reliable 2D-3D feature association is one of the most crucial requirements for obtaining high-quality 3D pose estimates. In this paper, a 2D-3D registration method, which is based on projective transform invariants, is...
COSMO: Contextualized scene modeling with Boltzmann Machines
Bozcan, Ilker; Kalkan, Sinan (Elsevier BV, 2019-03-01)
Scene modeling is crucial for robots that need to perceive, reason about, and manipulate the objects in their environments. In this paper, we adapt and extend Boltzmann Machines (BMs) for contextualized scene modeling. Although there are many models on the subject, ours is the first to bring together objects, relations, and affordances in a highly capable generative model. To this end, we introduce a hybrid version of BMs where relations and affordances are incorporated with shared, tri-way connections...
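The least-squares parameter estimation mentioned in the trajectory-tracking suggestion above is, in its generic form, a linear regression over logged motion data. The sketch below is a hypothetical illustration of that generic step, not the cited paper's procedure, whose regressor model and force data are specific to that work.

```python
import numpy as np

# Generic linear least-squares parameter fit: given a regressor matrix A
# built from logged robot states and a measurement vector b (e.g., wheel
# forces), estimate parameters theta minimizing ||A @ theta - b||^2.
rng = np.random.default_rng(1)
theta_true = np.array([2.0, -0.5, 0.1])               # unknown parameters
A = rng.standard_normal((200, 3))                     # 200 logged samples
b = A @ theta_true + 0.01 * rng.standard_normal(200)  # noisy measurements

theta_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated parameters:", theta_hat)
```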
Citation Formats
IEEE
O. S. Gedik and A. A. Alatan, “3-D Rigid Body Tracking Using Vision and Depth Sensors,” IEEE TRANSACTIONS ON CYBERNETICS, pp. 1395–1405, 2013, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/39482.