Caption Generation on Scenes with Seen and Unseen Object Categories
Date: 2022-06-01
Authors: Demirel, Berkan; Cinbiş, Ramazan Gökberk
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
URI: https://hdl.handle.net/11511/98040
Journal: IMAGE AND VISION COMPUTING
Collections: Department of Computer Engineering, Article
Suggestions
Caption generation on scenes with seen and unseen object categories
Demirel, Berkan; Cinbiş, Ramazan Gökberk (2022-08-01)
Image caption generation is one of the most challenging problems at the intersection of vision and language domains. In this work, we propose a realistic captioning task where the input scenes may incorporate visual objects with no corresponding visual or textual training examples. For this problem, we propose a detection-driven approach that consists of a single-stage generalized zero-shot detection model to recognize and localize instances of both seen and unseen classes, and a template-based captioning m...
Image Captioning with Unseen Objects
Demirel, Berkan; Cinbiş, Ramazan Gökberk; İkizler Cinbiş, Nazlı (2019-09-12)
Image caption generation is a long-standing and challenging problem at the intersection of computer vision and natural language processing. A number of recently proposed approaches utilize a fully supervised object recognition model within the captioning approach. Such models, however, tend to generate sentences which only consist of objects predicted by the recognition models, excluding instances of the classes without labelled training examples. In this paper, we propose a new challenging scenario that ta...
Text Generation and Comprehension for Objects in Images and Videos
Anayurt Özyeğin, Hazan; Kalkan, Sinan; Department of Computer Engineering (2021-9-09)
Text generation from visual data is a problem often studied using deep learning, having a wide range of applications. This thesis focuses on two different aspects of this problem by proposing both supervised and unsupervised methods to solve it. In the first part of the thesis, we work on referring expression comprehension and generation from videos. We specifically work with relational referring expressions which we define to be expressions that describe an object with respect to another object. For this, ...
Text classification in Turkish marketing domain and context-sensitive ad distribution
Engin, Melih; Can, Tolga; Department of Computer Engineering (2009)
Online advertising has a continuously increasing popularity. Target audience of this new advertising method is huge. Additionally, there is another rapidly growing and crowded group related to internet advertising that consists of web publishers. Contextual advertising systems make it easier for publishers to present online ads on their web sites, since these online marketing systems automatically divert ads to web sites with related contents. Web publishers join ad networks and gain revenue by enabling ads...
Text recognition and correction for automated data collection in participatory sensing applications
ÖZARSLAN, SÜLEYMAN; Eren, Pekin Erhan (2013-04-26)
Citation (IEEE)
B. Demirel and R. G. Cinbiş, “Caption Generation on Scenes with Seen and Unseen Object Categories,” IMAGE AND VISION COMPUTING, vol. 0, pp. 0–0, 2022, Accessed: 00, 2022. [Online]. Available: https://hdl.handle.net/11511/98040.