Semi-Automatic Annotation For Visual Object Tracking
Date
2021-11-24
Author
Köksal, Aybora
Alatan, Abdullah Aydın
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Abstract
We propose a semi-automatic bounding box annotation method for visual object tracking that exploits temporal information through a tracking-by-detection approach. For detection, we use an off-the-shelf object detector, trained iteratively on the annotations generated by the proposed method, and we run it on each frame independently. We employ Multiple Hypothesis Tracking (MHT) to exploit temporal information and to reduce the number of false positives, which makes it possible to use lower objectness thresholds for detection and thereby increase recall. The tracklets formed by MHT are evaluated by human operators to enlarge the training set. This novel incremental learning approach makes it possible to perform annotation iteratively. Experiments on the AUTH Multidrone Dataset show that the proposed approach reduces the annotation workload by up to 96%. The resulting annotations and our code are publicly available at github.com/aybora/Semi-AutomaticVideo-Annotation-OGAM.
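The loop the abstract describes (per-frame detection, temporal linking into tracklets, human verification) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the detector and MHT are replaced by stand-ins, and `form_tracklets` uses a simple greedy IoU association in place of Multiple Hypothesis Tracking.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def form_tracklets(per_frame_dets, iou_thresh=0.3):
    """Stand-in for MHT: greedily link each detection to the tracklet
    whose box in the previous frame overlaps it most.

    per_frame_dets: list over frames, each a list of (x1, y1, x2, y2) boxes.
    Returns a list of tracklets, each a list of (frame_index, box) pairs.
    Short, unlinked tracklets are the false positives a human operator
    would reject before the next detector-training iteration.
    """
    tracklets = []
    for frame_idx, dets in enumerate(per_frame_dets):
        for box in dets:
            best, best_iou = None, iou_thresh
            for t in tracklets:
                last_frame, last_box = t[-1]
                if last_frame == frame_idx - 1:  # only extend live tracklets
                    o = iou(last_box, box)
                    if o > best_iou:
                        best, best_iou = t, o
            if best is not None:
                best.append((frame_idx, box))
            else:
                tracklets.append([(frame_idx, box)])
    return tracklets

# A slowly moving box is linked across frames; an isolated spurious
# detection becomes its own one-frame tracklet for the operator to reject.
dets = [[(0, 0, 10, 10)], [(1, 1, 11, 11), (50, 50, 60, 60)]]
tracklets = form_tracklets(dets)
print(len(tracklets))  # → 2
```

In the method proper, tracklets accepted by the operator are added to the training set and the detector is retrained, which is what allows the low objectness threshold without flooding the operator with false positives.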
URI
http://dx.doi.org/10.1109/iccvw54120.2021.00143
https://hdl.handle.net/11511/94844
DOI
https://doi.org/10.1109/iccvw54120.2021.00143
Conference Name
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)
Collections
Department of Electrical and Electronics Engineering, Conference / Seminar
Suggestions
Semi-supervised generative guidance for zero-shot semantic segmentation
Önem, Abdullah Cem; Cinbiş, Ramazan Gökberk; Department of Computer Engineering (2022-1)
Collecting fully-annotated data to train deep networks for semantic image segmentation can be prohibitively costly due to the difficulty of making pixel-by-pixel annotations. In this context, zero-shot learning-based formulations relax the labelled-data requirements by enabling the recognition of classes without training examples. Recent studies on zero-shot learning of semantic segmentation models, however, highlight the difficulty of the problem. This thesis proposes techniques towards improving zero-shot gen...
Visual object tracking using semi supervised convolutional filters
Sevindik, Emir Can; Alatan, Abdullah Aydın; Department of Electrical and Electronics Engineering (2020-10-15)
Visual object tracking aims to find a single object's position in a video frame, when an annotated bounding box is provided in the first frame. Correlation filters have consistently produced excellent results in terms of accuracy, while enjoying quite low computational complexity. The main property of correlation filter based trackers is to find a filter that generates high values around the true target object location, and relatively low values for locations away from the object. Recently, deep learn...
Real-Time Moving Target Search
Undeger, Cagatay; Polat, Faruk (2007-11-23)
In this paper, we propose a real-time moving target search algorithm for dynamic and partially observable environments, modeled as a grid world. The proposed algorithm, Real-time Moving Target Evaluation Search (MTES), is able to detect the closed directions around the agent, and determine the best direction that avoids the nearby obstacles, leading to a moving target which is assumed to be escaping almost optimally. We compared our proposal with Moving Target Search (MTS) and observed a significant improvem...
Extended Target Tracking Using Polynomials With Applications to Road-Map Estimation
Lundquist, Christian; Orguner, Umut; Gustafsson, Fredrik (Institute of Electrical and Electronics Engineers (IEEE), 2011-01-01)
This paper presents an extended target tracking framework which uses polynomials in order to model extended objects in the scene of interest from imagery sensor data. State-space models are proposed for the extended objects, which enables the use of Kalman filters in tracking. Different methodologies for designing measurement equations are investigated. A general target tracking algorithm that utilizes a specific data association method for the extended targets is presented. The overall algorithm must always ...
FISHER SELECTIVE SEARCH FOR OBJECT DETECTION
BUZCU, ILKER; Alatan, Abdullah Aydın (2016-09-28)
An enhancement to one of the existing visual object detection approaches is proposed for generating candidate windows, improving detection accuracy at no additional computational cost. Hypothesis windows for object detection are obtained from Fisher Vector representations over initially obtained superpixels. In order to obtain new window hypotheses, hierarchical merging of superpixel regions is applied, depending upon improvements in some objectness measures with no additional cost due to additiv...
Citation Formats
IEEE
A. Köksal and A. A. Alatan, "Semi-Automatic Annotation For Visual Object Tracking," presented at the 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Montreal, Canada, 2021, Accessed: 00, 2021. [Online]. Available: http://dx.doi.org/10.1109/iccvw54120.2021.00143.