Visual Object Tracking with Autoencoder Representations
Date
2016-05-19
Author
Besbinar, Beril
Alatan, Abdullah Aydın
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Deep learning is the discipline of training computational models composed of multiple layers. By virtue of large labeled datasets, the increased computational power of current hardware, and unsupervised training methods, these models have recently improved the state of the art in many areas. Although such large labeled datasets are unavailable for many application areas, the representations obtained by well-designed networks with large representational capacity, trained on enough data, are claimed to generalize in transfer learning. As an example application, this work investigates the use of stacked autoencoders for visual object tracking, a challenging yet very important task in computer vision. The autoencoders are trained on an auxiliary dataset, and the resulting representations are utilized within a tracking-by-detection framework. Experiments carried out with a challenge toolkit indicate that exploiting the intricate structure of the auxiliary dataset via hierarchical representations contributes to the solution of the visual object tracking problem.
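The abstract describes a pipeline in which an autoencoder is pre-trained on auxiliary data and its learned representation is reused as the feature extractor inside a particle-filter-based tracking-by-detection loop. The following is a minimal sketch of that idea, not the authors' implementation: the patch size, layer widths, `frame_patch_fn`, and the scoring classifier are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' code): a stacked autoencoder is
# pre-trained on auxiliary image patches; its encoder then supplies features
# for a simple particle-filter tracking-by-detection step.
import torch
import torch.nn as nn


class StackedAutoencoder(nn.Module):
    def __init__(self, in_dim=32 * 32, hidden_dims=(1024, 256, 64)):
        super().__init__()
        enc, dec, d = [], [], in_dim
        for h in hidden_dims:
            enc += [nn.Linear(d, h), nn.Sigmoid()]
            dec = [nn.Linear(h, d), nn.Sigmoid()] + dec  # mirror layer
            d = h
        self.encoder = nn.Sequential(*enc)
        self.decoder = nn.Sequential(*dec)

    def forward(self, x):
        return self.decoder(self.encoder(x))


def pretrain(model, patches, epochs=10, lr=1e-3):
    """Unsupervised pre-training on auxiliary patches via reconstruction loss."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        loss = loss_fn(model(patches), patches)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model


def track_frame(encoder, classifier, frame_patch_fn, particles, noise=4.0):
    """One tracking-by-detection step with a basic particle filter.

    particles: (N, 2) tensor of candidate (x, y) centres; frame_patch_fn
    (hypothetical helper) crops and flattens a patch around each centre.
    The candidate whose encoded features score highest is the new target.
    """
    particles = particles + noise * torch.randn_like(particles)  # diffusion
    patches = torch.stack([frame_patch_fn(p) for p in particles])
    with torch.no_grad():
        feats = encoder(patches)
        scores = classifier(feats).squeeze(-1)  # confidence per candidate
    weights = torch.softmax(scores, dim=0)
    best = particles[scores.argmax()]
    # Resample particles proportionally to their weights for the next frame.
    idx = torch.multinomial(weights, num_samples=len(particles), replacement=True)
    return best, particles[idx]
```

In this sketch, only the encoder is kept after pre-training; the decoder exists solely to provide the reconstruction objective on the auxiliary data.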
Subject Keywords
Visual Object Tracking, Autoencoders, Deep Learning, Particle Filter
URI
https://hdl.handle.net/11511/55799
Collections
Department of Electrical and Electronics Engineering, Conference / Seminar