RGBD Data Based Pose Estimation: Why Sensor Fusion?
Date
2015-07-09
Author
Gedik, Osman Serdar
Alatan, Abdullah Aydın
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats: 127 views, 0 downloads
Performing highly accurate pose estimation has been an attractive research area in the field of computer vision; hence, plenty of algorithms have been proposed for this purpose. Starting with RGB or gray-scale image data, methods utilizing data from 3D sensors, such as Time of Flight (TOF) cameras or laser range finders, and later those based on RGBD data have emerged chronologically. Algorithms that exploit image data mainly rely on minimization of the image-plane error, i.e. the reprojection error. On the other hand, methods utilizing 3D measurements from depth sensors estimate the object pose so as to minimize the Euclidean distance between these measurements. However, although errors in the associated domains can be minimized effectively by such methods, the resultant pose estimates may not be sufficiently accurate when the dynamics of the object motion are ignored. At this point, the proposed 3D rigid pose estimation algorithm fuses measurements from vision (RGB) and depth sensors in a probabilistic manner using an Extended Kalman Filter (EKF). It is shown that such a procedure increases pose estimation performance significantly compared to single-sensor approaches.
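As a rough illustration of the kind of probabilistic fusion the abstract describes (not the authors' implementation), the sketch below runs one EKF step over a constant-velocity state, fusing a direct 3D measurement from a depth sensor with a 2D pixel measurement from an RGB camera linearized through an assumed pinhole model. The intrinsics, noise levels, and measurement values are all illustrative.

```python
import numpy as np

# Minimal EKF sketch: state x = [position (3), velocity (3)].
# All constants below are assumptions for illustration only.
dt = 1.0 / 30.0                             # frame period
F = np.eye(6); F[:3, 3:] = dt * np.eye(3)   # constant-velocity transition
Q = 1e-4 * np.eye(6)                        # process noise (assumed)
fx = fy = 500.0; cx = cy = 320.0            # assumed pinhole intrinsics

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, h, H, R):
    """Generic EKF update: h is the predicted measurement, H its Jacobian."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - h)
    P = (np.eye(6) - K @ H) @ P
    return x, P

def update_depth(x, P, z3d, sigma=0.01):
    # Depth sensor observes the 3D position directly (Euclidean residual).
    H = np.hstack([np.eye(3), np.zeros((3, 3))])
    return update(x, P, z3d, H @ x, H, sigma**2 * np.eye(3))

def update_rgb(x, P, z2d, sigma=1.0):
    # RGB camera observes the pixel projection (reprojection residual);
    # linearize the pinhole model at the current estimate.
    X, Y, Z = x[:3]
    h = np.array([fx * X / Z + cx, fy * Y / Z + cy])
    H = np.zeros((2, 6))
    H[0, 0] = fx / Z; H[0, 2] = -fx * X / Z**2
    H[1, 1] = fy / Z; H[1, 2] = -fy * Y / Z**2
    return update(x, P, z2d, h, H, sigma**2 * np.eye(2))

# One tracking step: predict, then fuse both sensors sequentially.
x = np.array([0.0, 0.0, 2.0, 0.0, 0.0, 0.0])
P = 0.1 * np.eye(6)
x, P = predict(x, P)
x, P = update_depth(x, P, np.array([0.02, -0.01, 2.05]))
x, P = update_rgb(x, P, np.array([325.0, 318.0]))
```

Fusing both sensors through the same state lets the motion model (here, constant velocity) constrain the estimate between measurements, which is the dynamics information that single-sensor, per-frame minimization ignores.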
Subject Keywords
Tracking, Vision
URI
https://hdl.handle.net/11511/53912
Collections
Department of Electrical and Electronics Engineering, Conference / Seminar
Suggestions
Motion estimation using complex discrete wavelet transform
Sarı, Hüseyin; Severcan, Mete; Department of Electrical and Electronics Engineering (2003)
The estimation of optical flow has become a vital research field in image sequence analysis, especially in the past two decades, with applications in many fields such as stereo optics, video compression, robotics and computer vision. In this thesis, the complex wavelet based algorithm for the estimation of optical flow developed by Magarey and Kingsbury is implemented and investigated. The algorithm is based on a complex version of the discrete wavelet transform (CDWT), which analyzes an image through blo...
Visual Saliency Estimation via Attribute Based Classifiers and Conditional Random Field
Demirel, Berkan; Cinbiş, Ramazan Gökberk; İKİZLER CİNBİŞ, NAZLI (2016-05-19)
Visual Saliency Estimation is a computer vision problem that aims to find the regions of interest that are frequently in eye focus in a scene or an image. Since most computer vision problems require discarding irrelevant regions in a scene, visual saliency estimation can be used as a preprocessing step in such problems. In this work, we propose a method to solve top-down saliency estimation problem using Attribute Based Classifiers and Conditional Random Fields (CRF). Experimental results show that attribut...
3D face modeling using multiple images
BUYUKATALAY, SONER; Halıcı, Uğur; AKAGUNDUZ, ERDEM; ULUSOY PARNAS, İLKAY (2006-04-19)
3D face modeling based on real images is one of the important subjects of Computer Vision that has been studied recently. In this paper, the study that we conducted in our Computer Vision and Intelligent Systems Research Laboratory on 3D face model generation using uncalibrated multiple still images is explained.
Performance evaluation of saliency map methods on remotely sensed RGB images
Sönmez, Selen; Halıcı, Uğur; Department of Geodetic and Geographical Information Technologies (2016)
Predictive applications of human eye visualization, the so-called saliency map computational models, have become more attractive in image processing studies. A saliency map highlights regions that are distinctive from their surroundings in the images of interest. In this study, various computational models for salient region detection are investigated on remotely sensed images. The computational methods considered are Itti-Koch, Graph-Based Visual Saliency, Saliency Detection by Combining Simple Priors, Frequency-tuned S...
Design and development of a game based eye training program for children with low vision
Dönmez, Mehmet; Çağıltay, Kürşat; Department of Computer Education and Instructional Technology (2020)
This study explores the design principles of eye movement-based computer game applications as training material for children with low vision to enhance their vision skills. It aims to provide children with interactive materials to improve their vision. For the study, design-based research was employed in four phases, namely analysis, development, evaluation and testing, and documentation and reflection. In the analysis phase, a focus group meeting and interviews were conducted with experts from the field of...
Citation Formats
IEEE
O. S. GEDİK and A. A. Alatan, “RGBD Data Based Pose Estimation Why Sensor Fusion,” 2015, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/53912.