HoughNet: Integrating Near and Long-Range Evidence for Bottom-Up Object Detection
Date
2020-01-01
Author
Samet, Nermin
Hicsonmez, Samet
Akbaş, Emre
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats
246 views, 87 downloads
Abstract
This paper presents HoughNet, a one-stage, anchor-free, voting-based, bottom-up object detection method. Inspired by the Generalized Hough Transform, HoughNet determines the presence of an object at a certain location by the sum of the votes cast on that location. Votes are collected from both near and long-distance locations based on a log-polar vote field. Thanks to this voting mechanism, HoughNet is able to integrate both near and long-range, class-conditional evidence for visual recognition, thereby generalizing and enhancing current object detection methodology, which typically relies on only local evidence. On the COCO dataset, HoughNet’s best model achieves 46.4 AP (and 65.1 AP50), performing on par with the state-of-the-art in bottom-up object detection and outperforming most major one-stage and two-stage methods. We further validate the effectiveness of our proposal in another task, namely, “labels to photo” image generation by integrating the voting module of HoughNet to two different GAN models and showing that the accuracy is significantly improved in both cases. Code is available at https://github.com/nerminsamet/houghnet.
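The official implementation at the GitHub link above is the authoritative reference. Purely to illustrate the voting idea the abstract describes, here is a minimal NumPy sketch: offsets around each target location are binned into a log-polar vote field (fine near the center, coarse far away), and every location's detection score is the weighted sum of the votes cast on it. The bin counts, ring radii, uniform cell weights, and brute-force accumulation below are all illustrative assumptions; HoughNet learns the per-cell weights and computes voting efficiently on CNN output maps.

```python
import numpy as np

def log_polar_cell(dy, dx, num_angle_bins=6, radii=(2, 8, 32)):
    """Map a relative offset (dy, dx) to a log-polar vote-field cell.

    Angle bins are uniform; radial rings widen with distance, so far-away
    evidence is pooled more coarsely than nearby evidence. Cell 0 is the
    center; offsets beyond the outermost ring cast no vote (None).
    This bin layout is an illustrative assumption, not HoughNet's exact one.
    """
    r = np.hypot(dy, dx)
    if r == 0:
        return 0
    ring = int(np.searchsorted(radii, r))
    if ring == len(radii):
        return None  # outside the vote field
    angle = (np.arctan2(dy, dx) + np.pi) / (2 * np.pi)   # normalize to [0, 1]
    sector = min(int(angle * num_angle_bins), num_angle_bins - 1)
    return 1 + ring * num_angle_bins + sector

def vote(evidence, num_angle_bins=6, radii=(2, 8, 32)):
    """Brute-force voting: each location's output is the weighted sum of
    votes cast on it by every location (including itself, the center cell),
    with the weight chosen by the log-polar cell the voter falls into
    relative to the target.

    `evidence` is an (H, W) map of per-location, class-conditional scores
    (in HoughNet these come from a CNN; here any array will do). The
    uniform `cell_weight` stands in for weights HoughNet learns, and the
    O((HW)^2) loops stand in for its efficient implementation.
    """
    H, W = evidence.shape
    num_cells = 1 + len(radii) * num_angle_bins
    cell_weight = np.full(num_cells, 1.0 / num_cells)  # illustrative only
    votes = np.zeros((H, W))
    for ty in range(H):          # location receiving votes
        for tx in range(W):
            for sy in range(H):  # location casting a vote
                for sx in range(W):
                    cell = log_polar_cell(sy - ty, sx - tx,
                                          num_angle_bins, radii)
                    if cell is not None:
                        votes[ty, tx] += cell_weight[cell] * evidence[sy, sx]
    return votes

# Toy usage: accumulate votes over a random 16x16 evidence map.
heatmap = vote(np.random.rand(16, 16))
print(heatmap.shape)  # (16, 16): per-location accumulated votes
```

The log-polar layout is what keeps long-range evidence affordable: distant voters share coarse cells, so the number of weights grows with the number of cells rather than with distance.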
Subject Keywords
Object detection, Voting, Bottom-up recognition, Hough transform, Image-to-image translation
URI
https://hdl.handle.net/11511/94066
DOI
https://doi.org/10.1007/978-3-030-58595-2_25
Conference Name
16th European Conference on Computer Vision, ECCV 2020
Collections
Department of Computer Engineering, Conference / Seminar
Suggestions
HoughNet: Integrating Near and Long-Range Evidence for Visual Detection
Samet, Nermin; Hicsonmez, Samet; Akbaş, Emre (2022-01-01)
This paper presents HoughNet, a one-stage, anchor-free, voting-based, bottom-up object detection method. Inspired by the Generalized Hough Transform, HoughNet determines the presence of an object at a certain location by the sum of the votes cast on that location. Votes are collected from both near and long-distance locations based on a log-polar vote field. Thanks to this voting mechanism, HoughNet is able to integrate both near and long-range, class-conditional evidence for visual recognition, thereby...
Integrating near and long-range evidence for visual detection
Samet, Nermin; Akbaş, Emre; Department of Computer Engineering (2021-09)
This thesis presents HoughNet, a one-stage, anchor-free, voting-based, bottom-up object detection method. Inspired by the Generalized Hough Transform, HoughNet determines the presence of an object at a certain location by the sum of the votes cast on that location. Votes are collected from both near and long-distance locations based on a log-polar vote field. Thanks to this voting mechanism, HoughNet is able to integrate both near and long-range, class-conditional evidence for visual recognition, thereby ge...
HYPERSPECTRAL UNMIXING BASED VEGETATION DETECTION WITH SEGMENTATION
Özdemir, Okan Bilge; Soydan, Hilal; Çetin, Yasemin; Duzgun, Sebnem (2016-07-15)
This paper presents a vegetation detection application with semi-supervised target detection using hyperspectral unmixing and segmentation algorithms. The method first compares the known target spectral signature from a generic source, such as a spectral library, with each pixel of the hyperspectral data cube, employing the Spectral Angle Mapper (SAM) algorithm. The pixel(s) with the best match are assumed to be the most likely target vegetation locations. The regions around these potential target locations are furt... (a minimal SAM sketch follows this list)
Moving object detection with supervised learning methods
Köksal, Aybora; Alatan, Abdullah Aydın; İnce, Kutalmış Gökalp; Department of Electrical and Electronics Engineering (2021-09-07)
In this thesis, the single-target object detection problem is examined. Object detection aims to identify all objects of interest, with their pre-defined classes, in an image or a series of images. The main objective of this thesis is to exploit spatio-temporal information for performance enhancement during moving object detection. To this end, modern object detection algorithms based on CNN architectures are analyzed. Based on this analysis, state-of-the-art techniques whic...
Segmentation Driven Object Detection with Fisher Vectors
Cinbiş, Ramazan Gökberk; Schmid, Cordelia (2013-01-01)
We present an object detection system based on the Fisher vector (FV) image representation computed over SIFT and color descriptors. For computational and storage efficiency, we use a recent segmentation-based method to generate class-independent object detection hypotheses, in combination with data compression techniques. Our main contribution is a method to produce tentative object segmentation masks to suppress background clutter in the features. Re-weighting the local image features based on these masks...
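As a side note on the hyperspectral suggestion above: the Spectral Angle Mapper it employs is a standard similarity measure, the angle between two spectra treated as vectors, which makes the comparison insensitive to overall illumination scale. A minimal sketch under assumed data (the data cube, band count, and library signature below are made up, not from the paper):

```python
import numpy as np

def spectral_angle(pixel, target):
    """Spectral Angle Mapper: angle (radians) between two spectra.

    A smaller angle means a closer spectral match; the measure ignores
    vector magnitude, i.e., overall brightness.
    """
    cos = np.dot(pixel, target) / (np.linalg.norm(pixel) * np.linalg.norm(target))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Toy example: a 4x4 scene with 50 spectral bands and a made-up
# library signature for the target vegetation.
rng = np.random.default_rng(0)
cube = rng.random((4, 4, 50))        # hypothetical hyperspectral data cube
library_signature = rng.random(50)   # stand-in for a spectral-library entry

angles = np.array([[spectral_angle(cube[i, j], library_signature)
                    for j in range(4)] for i in range(4)])
best = np.unravel_index(np.argmin(angles), angles.shape)
print("most likely target pixel:", best)  # smallest spectral angle
```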
Citation Formats
IEEE
N. Samet, S. Hicsonmez, and E. Akbaş, “HoughNet: Integrating Near and Long-Range Evidence for Bottom-Up Object Detection,” Glasgow, UK, 2020, vol. 12370 LNCS, Accessed: 00, 2021. [Online]. Available: https://hdl.handle.net/11511/94066.