Visual-inertial sensor fusion for 3D urban modeling
Date: 2013
Author: Sırtkaya, Salim
Item Usage Stats: 223 views, 222 downloads
In this dissertation, a real-time, autonomous and geo-registered approach is presented to tackle the large-scale 3D urban modeling problem using a camera and inertial sensors. The proposed approach exploits the special structure of urban areas and visual-inertial sensor fusion. Buildings in urban areas are assumed to have planar facades that are perpendicular to the local level. A sparse 3D point cloud of the imaged scene is obtained from visual feature matches using camera pose estimates, and planar patches are extracted by an iterative Hough Transform on the 2D projection of the sparse 3D point cloud in the direction of gravity. The result is a compact and dense depth map of the building facades in terms of planar patches. Plane extraction is performed on sequential frames, and a complete model is obtained by plane fusion. Inertial sensor integration improves the camera pose estimation, 3D reconstruction and planar modeling stages. For camera pose estimation, the visual measurements are integrated with the inertial sensors by means of an indirect feedback Kalman filter. This integration yields reliable, geo-referenced camera pose estimates in the absence of GPS. The inertial sensors are also used to filter out spurious visual feature matches in the 3D reconstruction stage, to find the direction of gravity in the plane-search stage, and to eliminate out-of-scope objects from the model using elevation data. The visual-inertial sensor fusion and the use of urban heuristics are shown to outperform classical approaches to large-scale urban modeling in terms of consistency and real-time applicability.
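The plane-extraction idea in the abstract can be sketched as follows: because facades are assumed vertical, projecting the sparse 3D points along gravity turns each facade into a line in the 2D ground plane, which an iterative Hough Transform can detect one at a time (find the strongest line, remove its inliers, repeat). This is a minimal illustrative sketch, not the thesis implementation; the function name and all parameter values are assumptions.

```python
import numpy as np

def iterative_hough_planes(points_xy, n_planes=3, rho_res=0.1,
                           theta_bins=180, inlier_tol=0.15):
    """Detect dominant lines in the gravity-direction 2D projection of a
    sparse 3D point cloud. Each line (rho, theta) corresponds to a vertical
    planar facade. Hypothetical sketch of the iterative Hough idea."""
    pts = points_xy.copy()
    thetas = np.linspace(0.0, np.pi, theta_bins, endpoint=False)
    planes = []
    for _ in range(n_planes):
        if len(pts) < 2:
            break
        # rho = x cos(theta) + y sin(theta) for every (point, theta) pair
        rho = pts[:, 0, None] * np.cos(thetas) + pts[:, 1, None] * np.sin(thetas)
        rho_min = rho.min()
        rho_idx = np.round((rho - rho_min) / rho_res).astype(int)
        acc = np.zeros((rho_idx.max() + 1, theta_bins), dtype=int)
        # unbuffered accumulation: one vote per (rho bin, theta bin) per point
        np.add.at(acc, (rho_idx.ravel(),
                        np.tile(np.arange(theta_bins), len(pts))), 1)
        r_i, t_i = np.unravel_index(acc.argmax(), acc.shape)
        best_rho = rho_min + r_i * rho_res
        best_theta = thetas[t_i]
        # remove the detected line's inliers and search again
        d = np.abs(pts[:, 0] * np.cos(best_theta)
                   + pts[:, 1] * np.sin(best_theta) - best_rho)
        planes.append((best_rho, best_theta, int((d < inlier_tol).sum())))
        pts = pts[d >= inlier_tol]
    return planes
```

On synthetic data with two perpendicular "facades", the two detected (rho, theta) pairs recover the generating lines up to the accumulator's quantization.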
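The indirect (feedback) Kalman filter mentioned above can be illustrated with a 1-D error-state example: the nominal state is integrated from inertial readings, the filter estimates only the accumulated error from sparse visual position fixes, and each estimate is fed back into the nominal state and then reset. This is a hypothetical toy, not the thesis filter (which estimates full 6-DOF pose); every parameter value here is an assumption.

```python
import numpy as np

def vi_error_state_kf(accels, vis_positions, dt=0.01, vis_every=10,
                      q_a=1e-3, q_b=1e-6, r_vis=1e-3):
    """1-D indirect (error-state) Kalman filter sketch for visual-inertial
    fusion. Error state: [position error, velocity error, accel-bias error].
    Returns the corrected position trajectory."""
    p, v, b = 0.0, 0.0, 0.0                    # nominal position, velocity, bias
    P = np.diag([1e-2, 1e-2, 1e-1])            # error-state covariance
    H = np.array([[1.0, 0.0, 0.0]])            # a visual fix observes position error
    traj = []
    for k, a_m in enumerate(accels):
        a = a_m - b                            # bias-corrected acceleration
        p += v * dt + 0.5 * a * dt ** 2        # nominal-state integration (IMU)
        v += a * dt
        F = np.array([[1.0, dt, -0.5 * dt ** 2],
                      [0.0, 1.0, -dt],
                      [0.0, 0.0, 1.0]])
        Q = np.diag([0.0, q_a * dt, q_b * dt])
        P = F @ P @ F.T + Q                    # covariance propagation only:
        if k % vis_every == 0:                 # the error state itself is reset to 0
            innov = vis_positions[k] - p       # visual position fix arrives
            S = H @ P @ H.T + r_vis
            K = (P @ H.T) / S
            dx = (K * innov).ravel()
            p += dx[0]; v += dx[1]; b += dx[2] # feedback into the nominal state
            P = (np.eye(3) - K @ H) @ P
        traj.append(p)
    return np.array(traj)
```

The design point the abstract alludes to is visible here: the filter never carries the full state, only its error, so the feedback-and-reset step keeps the linearization valid even when the nominal trajectory drifts between visual fixes.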
Subject Keywords
Three-dimensional imaging, Three-dimensional display systems, Image processing, Imaging systems, Image analysis, Remote-sensing images
URI
http://etd.lib.metu.edu.tr/upload/12616455/index.pdf
https://hdl.handle.net/11511/22887
Collections
Graduate School of Natural and Applied Sciences, Thesis
Suggestions
Occlusion-aware 3-D multiple object tracking for visual surveillance
Topçu, Osman; Alatan, Abdullah Aydın; Ercan, Ali Özer; Department of Electrical and Electronics Engineering (2013)
This thesis presents an occlusion-aware particle filter framework for online tracking of multiple people, using observations from multiple cameras with overlapping fields of view, for surveillance applications. The surveillance problem involves inferring the motives of people from their actions, deduced from their trajectories. Visual tracking is required to obtain these trajectories, and it is a challenging problem due to motion model variations, size and illumination changes and especially occlusions between mov...
A Shadow based trainable method for building detection in satellite images
Dikmen, Mehmet; Halıcı, Uğur; Department of Geodetic and Geographical Information Technologies (2014)
The purpose of this thesis is to develop a supervised building detection and extraction algorithm with a shadow based learning method for high-resolution satellite images. First, shadow segments are identified on an over-segmented image, and then neighboring shadow segments are merged by assuming that they are cast by a single building. Next, these shadow regions are used to detect the candidate regions where buildings most likely occur. Together with this information, distance to shadows towards illuminati...
Automated building detection from satellite images by using shadow information as an object invariant
Yüksel, Barış; Yarman Vural, Fatoş Tunay; Department of Computer Engineering (2012)
Apart from classical pattern recognition techniques applied for automated building detection in satellite images, a robust building detection methodology is proposed, where self-supervision data can be automatically extracted from the image by using shadow and its direction as an invariant for building object. In this methodology; first the vegetation, water and shadow regions are detected from a given satellite image and local directional fuzzy landscapes representing the existence of building are generate...
Photometric stereo considering highlights and shadows
Büyükatalay, Soner; Halıcı, Uğur; Birgül, Özlem; Department of Electrical and Electronics Engineering (2011)
Three dimensional (3D) shape reconstruction that aims to reconstruct 3D surface of objects using acquired images, is one of the main problems in computer vision. There are many applications of 3D shape reconstruction, from satellite imaging to material sciences, considering a continent on earth or microscopic surface properties of a material. One of these applications is the automated firearm identification that is an old, yet an unsolved problem in forensic science. Firearm evidence matching algorithms rel...
Moving hot object detection in airborne thermal videos
Kaba, Utku; Akar, Gözde; Department of Electrical and Electronics Engineering (2012)
In this thesis, we present an algorithm for vision based detection of moving objects observed by IR sensors on a moving platform. In addition we analyze the performance of different approaches in each step of the algorithm. The proposed algorithm is composed of preprocessing, feature detection, feature matching, homography estimation and difference image analysis steps. First, a global motion estimation based on planar homography model is performed in order to compensate the motion of the sensor and moving ...
Citation Formats
IEEE
S. Sırtkaya, “Visual-inertial sensor fusion for 3D urban modeling,” Ph.D. - Doctoral Program, Middle East Technical University, 2013.