UNCERTAINTY MODELING FOR EFFICIENT VISUAL ODOMETRY VIA INERTIAL SENSORS ON MOBILE DEVICES
Date
2014-10-30
Authors
AKSOY, Yagiz
Alatan, Abdullah Aydın
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Most mobile applications require efficient and precise computation of the device pose, and almost every mobile device is already equipped with inertial sensors alongside a camera. This makes sensor fusion an attractive way to increase efficiency during pose tracking. However, state-of-the-art fusion algorithms share a major shortcoming: the uncertainty introduced to the system during the prediction stage of the fusion filters is not well defined. As a consequence, covariances are determined heuristically, and data-dependent tuning is required to achieve high performance, or even convergence, of these filters. In this paper, we propose an inertially aided visual odometry system that requires neither heuristics nor parameter tuning; the required uncertainties on all estimated variables are computed under a minimal number of assumptions. Moreover, the proposed system simultaneously estimates the metric scale of the pose computed from a monocular image stream. The experimental results indicate that the proposed scale estimation outperforms state-of-the-art methods, while the pose estimation step yields quite acceptable results in real time on resource-constrained systems.
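To make the abstract's central idea concrete, here is a minimal sketch of a Kalman-filter prediction step whose process covariance is derived from the accelerometer's own noise level rather than a hand-tuned constant, combined with an update that fuses a monocular visual odometry position while estimating its metric scale as an extra state. This is not the authors' implementation: the one-axis simplification, the state layout, and all variable names and noise values are illustrative assumptions.

```python
import numpy as np

def predict(x, P, accel, accel_var, dt):
    """EKF prediction along one axis: x = [position, velocity, scale].

    The process noise Q injected here comes directly from the
    accelerometer's measured noise variance (accel_var) rather than a
    hand-tuned constant, which is the kind of principled uncertainty
    modeling the abstract argues for.
    """
    F = np.array([[1.0, dt, 0.0],    # position integrates velocity
                  [0.0, 1.0, 0.0],   # velocity integrates acceleration
                  [0.0, 0.0, 1.0]])  # metric scale is (locally) constant
    G = np.array([0.5 * dt**2, dt, 0.0])  # how accel noise enters the state
    x = F @ x + G * accel
    Q = np.outer(G, G) * accel_var        # process noise from the sensor spec
    P = F @ P @ F.T + Q
    return x, P

def update_with_vo(x, P, vo_position, vo_var):
    """Fuse a monocular VO position, modeled as metric position / scale."""
    s = x[2]
    # Measurement model h(x) = position / scale; Jacobian w.r.t. the state.
    H = np.array([1.0 / s, 0.0, -x[0] / s**2])
    y = vo_position - x[0] / s            # innovation
    S = H @ P @ H + vo_var                # innovation variance (scalar)
    K = P @ H / S                         # Kalman gain
    x = x + K * y
    P = (np.eye(3) - np.outer(K, H)) @ P
    return x, P

# Example: one IMU prediction at 100 Hz, then one VO update.
x = np.array([0.0, 0.0, 1.0])   # start at rest with a unit scale guess
P = np.diag([1e-4, 1e-4, 1.0])  # scale is initially very uncertain
x, P = predict(x, P, accel=0.2, accel_var=1e-3, dt=0.01)
x, P = update_with_vo(x, P, vo_position=1e-5, vo_var=1e-4)
```

Because Q is computed from accel_var, a device with a different sensor noise floor changes the filter's uncertainty automatically, with no per-dataset tuning; the scale state is corrected over time by the coupling between metric motion (from the IMU) and up-to-scale motion (from the monocular VO).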
Subject Keywords
Sensor fusion, Inertial sensors, Visual odometry, Pose tracking, Mobile vision
URI
https://hdl.handle.net/11511/48249
DOI
https://doi.org/10.1109/icip.2014.7025687
Collections
Department of Electrical and Electronics Engineering, Conference / Seminar
Suggestions
Fusing 2D and 3D Clues for 3D Tracking Using Visual and Range Data
Gedik, O. Serdar; Alatan, Abdullah Aydın (2013-07-12)
3D tracking of rigid objects is required in many applications, such as robotics or augmented reality (AR). The availability of accurate pose estimates increases reliability in robotic applications and decreases jitter in AR scenarios. Pure vision-based 3D trackers require either manual initializations or offline training stages, whereas trackers relying on pure depth sensors are not suitable for AR applications. In this paper, an automated 3D tracking algorithm, which is based on fusion of vision and depth ...
Fuzzy Decision Fusion for Single Target Classification in Wireless Sensor Networks
Gok, Sercan; Yazıcı, Adnan; Coşar, Ahmet; George, Roy (2010-07-23)
With the advances in technology, low cost and low footprint sensors are being used more and more commonly. Especially for military applications wireless sensor networks (WSN) have become an attractive solution as they have great use for avoiding deadly danger in combat. For military applications, classification of a target in a battlefield plays an important role. A wireless sensor node has the ability to sense the raw signal data in battlefield, extract the feature vectors from sensed signal and produce a ...
Accurate 3D tracking using visual and depth data
Gedik, Osman Serdar; Alatan, Abdullah Aydın; Department of Electrical and Electronics Engineering (2014)
3D tracking of objects is essential in many applications, such as robotics and augmented reality (AR); the availability of accurate pose estimates increases reliability in robotic applications and decreases jitter in AR scenarios. As a result of the recent advances in sensor technology, it is possible to capture synchronous high frame rate RGB and depth data. With this motivation, an automated and highly accurate 3D tracking algorithm based on simultaneous utilization of visual and depth sensors is ...
Differential-linear cryptanalysis of Ascon and DryGASCON
Civek, Aslı Başak; Tezcan, Cihangir; Department of Cybersecurity (2021-6)
Due to rapidly developing technology, devices have become smaller along with their performance capacity and memory. If possible, existing NIST-approved encryption standards should be used on these resource-constrained devices. When an acceptable performance cannot be achieved in this way, there is a need for more lightweight algorithms. Since taking individual measures leads to simplistic designs when designing lightweight algorithms, ciphers can become more vulnerable to cryptographic attacks. Hence some r...
Efficient inertially aided visual odometry towards mobile augmented reality
Aksoy, Yağız; Alatan, Abdullah Aydın; Department of Electrical and Electronics Engineering (2013)
With the increase in the number and computational power of commercial mobile devices like smartphones and tablet computers, augmented reality applications are gaining more and more traction. In order to augment virtual objects effectively in real scenes, the pose of the camera should be estimated with high precision and speed. Today, most mobile devices feature cameras and inertial measurement units, which carry information on changes in the position and attitude of the camera. In this thesis, utilization of in...
Citation Formats
IEEE
Y. AKSOY and A. A. Alatan, “UNCERTAINTY MODELING FOR EFFICIENT VISUAL ODOMETRY VIA INERTIAL SENSORS ON MOBILE DEVICES,” 2014, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/48249.