Accurate 3D tracking using visual and depth data
Date
2014
Author
Gedik, Osman Serdar
Item Usage Stats: 190 views, 155 downloads
Abstract
3D tracking of objects is essential in many applications, such as robotics and augmented reality (AR); accurate pose estimates increase reliability in robotic applications and decrease jitter in AR scenarios. As a result of recent advances in sensor technology, it is possible to capture synchronous, high frame rate RGB and depth data. With this motivation, an automated and highly accurate 3D tracking algorithm based on the simultaneous utilization of visual and depth sensors is presented. The depth sensor data is used both in raw form and as a Shape Index Map (SIM), following the observation that the latter transformation emphasizes structural details and provides a proper basis for jointly exploiting both sensors. As the object model, the initial colored point cloud of the object is used, which eliminates dependency on any offline-generated Computer Aided Design (CAD) models that might limit the application areas.

A typical 3D tracking algorithm consists of the following stages: feature selection, feature association between consecutive frames, and 3D pose estimation from the feature correspondences. Since the main aim is highly accurate 3D tracking of any user-selected object, data from both sensors is exploited in every stage of the process to improve accuracy as well as robustness. First, a novel feature selection method, which localizes features with strong textural and spatial cornerness properties, is proposed. In this method, in order to increase the spatial spread of features around the object, the region of interest is divided into a regular grid, and within each grid cell a single feature with maximum cornerness in terms of both intensity and SIM data is selected. Imposing spatial-textural constraints jointly selects more discriminative features, whereas the regular grid-based approach decreases bias in the pose estimates. Then, the selected features are associated between consecutive frames by a new feature tracking approach, which tracks each feature independently and simultaneously on intensity and SIM data to improve 3D tracking performance. The method decides on the final feature association based on the reliabilities of the individual trackers, estimated online. Such a parallel approach is observed to increase robustness against sensor noise and individual tracker failures. Finally, the RGB and depth measurements of the localized features are fused in a well-known Extended Kalman Filter (EKF) framework. Within this framework, we propose a novel measurement weighting scheme, based on the manipulation of the Kalman gain term, which favors high-quality features and provides robustness against measurement errors. This scheme, establishing a connection between the computer vision and Bayes filtering disciplines, eliminates sole dependency on predefined sensor noise parameters and the identical measurement noise assumption.

The increase in 3D tracking accuracy due to each proposed sub-system is shown via experimental results. Furthermore, the accuracy of the proposed 3D tracking method is tested against a number of well-known techniques from the literature, and superior performance is observed. Finally, the resulting pose estimates of the proposed algorithm are utilized to obtain 3D maps by combining colored point clouds at consecutive time instants. We observe that, although loop closure or post-processing algorithms are not exploited, a significant number of 3D point clouds are combined with quite high accuracy.
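The grid-based joint feature selection described in the abstract can be illustrated with a minimal sketch. The Python code below is a hypothetical implementation under assumed details: the Harris response is used as the cornerness measure, the two normalized responses are blended with an assumed weight alpha, and all function and parameter names are illustrative rather than taken from the thesis.

```python
# Hypothetical sketch of grid-based joint feature selection: the region of
# interest is split into a regular grid and, in each cell, the single pixel with
# the highest combined cornerness on the intensity image and the Shape Index Map
# (SIM) is kept. Names and the blending rule are assumptions, not the thesis code.
import cv2
import numpy as np

def select_grid_features(intensity, sim, roi, grid=(8, 8), alpha=0.5):
    """Pick one feature per grid cell with maximal joint intensity/SIM cornerness.

    intensity, sim : single-channel images of the same size
    roi            : (x, y, w, h) region containing the tracked object
    alpha          : assumed blending weight between the two cornerness maps
    """
    x, y, w, h = roi
    # Harris cornerness computed separately on both modalities.
    c_int = cv2.cornerHarris(intensity.astype(np.float32), 2, 3, 0.04)
    c_sim = cv2.cornerHarris(sim.astype(np.float32), 2, 3, 0.04)

    # Normalize each response so the blend is not dominated by one modality.
    def norm(c):
        c = c - c.min()
        return c / (c.max() + 1e-12)

    joint = alpha * norm(c_int) + (1.0 - alpha) * norm(c_sim)

    features = []
    rows, cols = grid
    for i in range(rows):
        for j in range(cols):
            y0, y1 = y + i * h // rows, y + (i + 1) * h // rows
            x0, x1 = x + j * w // cols, x + (j + 1) * w // cols
            cell = joint[y0:y1, x0:x1]
            if cell.size == 0:
                continue
            # Best joint-cornerness location inside this grid cell.
            dy, dx = np.unravel_index(np.argmax(cell), cell.shape)
            features.append((x0 + dx, y0 + dy))
    return features
```

Similarly, the measurement weighting idea in the EKF stage can be sketched as scaling each feature's Kalman gain by a quality weight, so that low-quality measurements contribute less to the state correction. The exact weighting rule of the thesis is not reproduced here; this is only an assumed illustration of the general mechanism.

```python
# Assumed illustration of a quality-weighted EKF measurement update.
import numpy as np

def weighted_kalman_update(x, P, z, h, H, R, weight):
    """One EKF update with the gain scaled by a per-feature quality weight in [0, 1]."""
    y = z - h(x)                               # innovation
    S = H @ P @ H.T + R                        # innovation covariance
    K = weight * (P @ H.T @ np.linalg.inv(S))  # gain down-weighted for poor features
    x_new = x + K @ y
    I_KH = np.eye(len(x)) - K @ H
    # Joseph form keeps the covariance valid even for a non-optimal (scaled) gain.
    P_new = I_KH @ P @ I_KH.T + K @ R @ K.T
    return x_new, P_new
```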
Subject Keywords
Three-dimensional imaging, Multisensor data fusion, Tracking (Engineering), Robotics
URI
http://etd.lib.metu.edu.tr/upload/12618071/index.pdf
https://hdl.handle.net/11511/23955
Collections
Graduate School of Natural and Applied Sciences, Thesis
Suggestions
Fusing 2D and 3D Clues for 3D Tracking Using Visual and Range Data
Gedik, O. Serdar; Alatan, Abdullah Aydın (2013-07-12)
3D tracking of rigid objects is required in many applications, such as robotics or augmented reality (AR). The availability of accurate pose estimates increases reliability in robotic applications and decreases jitter in AR scenarios. Pure vision-based 3D trackers require either manual initializations or offline training stages, whereas trackers relying on pure depth sensors are not suitable for AR applications. In this paper, an automated 3D tracking algorithm, which is based on fusion of vision and depth ...
Uncertainty Modeling for Efficient Visual Odometry via Inertial Sensors on Mobile Devices
AKSOY, Yagiz; Alatan, Abdullah Aydın (2014-10-30)
Most of the mobile applications require efficient and precise computation of the device pose, and almost every mobile device has inertial sensors already equipped together with a camera. This fact makes sensor fusion quite attractive for increasing efficiency during pose tracking. However, the state-of-the-art fusion algorithms have a major shortcoming: lack of well-defined uncertainty introduced to the system during the prediction stage of the fusion filters. Such a drawback results in determining covarian...
Case studies on the use of neural networks in eutrophication modeling
Karul, C; Soyupak, S; Cilesiz, AF; Akbay, N; Germen, E (2000-10-30)
Artificial neural networks are becoming more and more common to be used in development of prediction models for complex systems as the theory behind them develops and the processing power of computers increase. A three layer Levenberg-Marquardt feedforward learning algorithm was used to model the eutrophication process in three water bodies of Turkey (Keban Dam Reservoir, Mogan and Eymir Lakes). Despite the very complex and peculiar nature of Keban Dam, a relatively good correlation (correlation coefficient...
Occlusion-aware 3-D multiple object tracking for visual surveillance
Topçu, Osman; Alatan, Abdullah Aydın; Ercan, Ali Özer; Department of Electrical and Electronics Engineering (2013)
This thesis work presents an occlusion-aware particle filter framework for online tracking of multiple people with observations from multiple cameras with overlapping fields of view for surveillance applications. Surveillance problem involves inferring motives of people from their actions, deduced from their trajectories. Visual tracking is required to obtain these trajectories and it is a challenging problem due to motion model variations, size and illumination changes and especially occlusions between mov...
3-D Rigid Body Tracking Using Vision and Depth Sensors
Gedik, O. Serdar; Alatan, Abdullah Aydın (Institute of Electrical and Electronics Engineers (IEEE), 2013-10-01)
In robotics and augmented reality applications, model-based 3-D tracking of rigid objects is generally required. With the help of accurate pose estimates, it is required to increase reliability and decrease jitter in total. Among many solutions of pose estimation in the literature, pure vision-based 3-D trackers require either manual initializations or offline training stages. On the other hand, trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm,...
Citation Formats
IEEE
O. S. Gedik, “Accurate 3D tracking using visual and depth data,” Ph.D. - Doctoral Program, Middle East Technical University, 2014.