Automatic Point Matching and Robust Fundamental Matrix Estimation for Hybrid Camera Scenarios
Date: 2009-04-11
Authors: Bastanlar, Yalin; Temizel, Alptekin; Yardimci, Yasemin
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats: 165 views, 0 downloads
In this paper, we propose a method to robustly estimate the fundamental matrix for hybrid cameras. In our study, a catadioptric omnidirectional camera and a perspective camera were used to obtain hybrid image pairs. For automatic feature point matching, we employed the Scale Invariant Feature Transform (SIFT) and improved the matching results with the proposed image preprocessing. We also performed matching using virtual camera plane (VCP) images, which are unwarped from the omnidirectional image and carry perspective image properties. Although both approaches are able to produce successful results, we observed that VCP-perspective matching is more robust to an increasing baseline than direct omnidirectional-perspective matching. We implemented RANSAC based on the hybrid epipolar geometry, which enables robust estimation of the fundamental matrix as well as elimination of false matches.
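The abstract describes a pipeline of SIFT feature matching followed by RANSAC-based fundamental matrix estimation. As a rough illustration only, the sketch below shows that generic pipeline with OpenCV; it uses the standard perspective fundamental-matrix model rather than the paper's hybrid omnidirectional-perspective epipolar geometry, VCP unwarping, or preprocessing, and the image paths, ratio-test threshold, and RANSAC parameters are placeholder assumptions.

```python
# Minimal sketch (not the authors' implementation): SIFT matching plus
# RANSAC-based fundamental matrix estimation with OpenCV's standard
# perspective model. The paper's hybrid epipolar geometry and VCP
# unwarping are NOT reproduced here; file names and thresholds are
# hypothetical placeholders.
import cv2
import numpy as np

img1 = cv2.imread("vcp_image.png", cv2.IMREAD_GRAYSCALE)          # placeholder: unwarped (VCP) image
img2 = cv2.imread("perspective_image.png", cv2.IMREAD_GRAYSCALE)  # placeholder: perspective image

# Detect SIFT keypoints and descriptors in both images
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to discard ambiguous matches
matcher = cv2.BFMatcher(cv2.NORM_L2)
raw_matches = matcher.knnMatch(des1, des2, k=2)
good = []
for pair in raw_matches:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# RANSAC fits F while rejecting outlier correspondences
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
if F is not None:
    print("Fundamental matrix:\n", F)
    print("Inliers:", int(inlier_mask.sum()), "of", len(good))
```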
URI: https://hdl.handle.net/11511/54402
Collections: Graduate School of Informatics, Conference / Seminar
Suggestions
An iterative adaptive multi-modal stereo-vision method using mutual information
Yaman, Mustafa; Kalkan, Sinan (2015-01-01)
We propose a method for computing disparity maps from a multi-modal stereo-vision system composed of an infrared-visible camera pair. The method uses mutual information (MI) as the basic similarity measure where a segment-based adaptive windowing mechanism is proposed along with a novel MI computation surface with joint prior probabilities incorporated. The computed cost confidences are aggregated using a novel adaptive cost aggregation method, and the resultant minimum cost disparities in segments are plan...
Robust Automatic Target Recognition in FLIR imagery
Soyman, Yusuf (2012-04-24)
In this paper, a robust automatic target recognition algorithm in FLIR imagery is proposed. Target is first segmented out from the background using wavelet transform. Segmentation process is accomplished by parametric Gabor wavelet transformation. Invariant features that belong to the target, which is segmented out from the background, are then extracted via moments. Higher-order moments, while providing better quality for identifying the image, are more sensitive to noise. A trade-off study is then perform...
Sensor Fusion of a Camera and 2D LIDAR for Lane Detection
Schmidt, Klaus Verner (2019-04-26)
This paper presents a novel lane detection algorithm based on fusion of camera and 2D LIDAR data. On the one hand, objects on the road are detected via 2D LIDAR. On the other hand, binary bird’s eye view (BEV) images are acquired from the camera data and the locations of objects detected by LIDAR are estimated on the BEV image. In order to remove the noise generated by objects on the BEV, a modified BEV image is obtained, where pixels occluded by the detected objects are turned into background pixels. Then,...
Alignment of uncalibrated images for multi-view classification
Arık, Sercan Ömer; Vural, Elif; Frossard, Pascal (2011-12-29)
Efficient solutions for the classification of multi-view images can be built on graph-based algorithms when little information is known about the scene or cameras. Such methods typically require a pairwise similarity measure between images, where a common choice is the Euclidean distance. However, the accuracy of the Euclidean distance as a similarity measure is restricted to cases where images are captured from nearby viewpoints. In settings with large transformations and viewpoint changes, alignment of im...
Efficient Computation of Green's Functions for Multilayer Media in the Context of 5G Applications
Mittra, Raj; Özgün, Özlem; Li, Chao; Kuzuoğlu, Mustafa (2021-03-22)
This paper presents a novel method for effective computation of Sommerfeld integrals which arise in problems involving antennas or scatterers embedded in planar multilayered media. Sommerfeld integrals that need to be computed in the evaluation of spatial-domain Green's functions are often highly oscillatory and slowly decaying. For this reason, standard numerical integration methods are not efficient for such integrals, especially at millimeter waves. The main motivation of the proposed method is to comput...
Citation Formats
IEEE
Y. Bastanlar, A. Temizel, and Y. Yardimci, “Automatic Point Matching and Robust Fundamental Matrix Estimation for Hybrid Camera Scenarios,” 2009, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/54402.