Could we create a training set for image captioning using automatic translation? Görüntü Altyazılama için Otomatik Tercümeyle Eğitim Kümesi Oluşturulabilir mi?
Date
2017-05-18
Author
Samet, Nermin
Duygulu, Pınar
Akbaş, Emre
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Automatic image captioning has received increasing attention in recent years. Although many English datasets have been developed for this problem, there is only one Turkish dataset, and it is very small compared to its English counterparts. Creating a new dataset for image captioning is a very costly and time-consuming task. This work is a first step towards transferring the available, large English datasets into Turkish. We translated English captioning datasets into Turkish using an automated translation tool, and we trained an image captioning model on the automatically obtained Turkish captions. Our experiments show that this model yields the best performance so far on Turkish captioning.
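The dataset-transfer step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the sample data and the `translate_en_to_tr` lookup table are placeholders standing in for a real English captioning dataset and an automated machine-translation tool.

```python
# Toy English dataset in an image -> captions layout (hypothetical sample data).
english_dataset = {
    "img_001.jpg": ["a dog runs on the beach"],
    "img_002.jpg": ["two people ride bicycles"],
}

# Placeholder for an automatic translation tool (an assumption for this
# sketch; in practice this would call an actual MT system).
_TOY_LEXICON = {
    "a dog runs on the beach": "bir köpek sahilde koşuyor",
    "two people ride bicycles": "iki kişi bisiklet sürüyor",
}

def translate_en_to_tr(caption: str) -> str:
    return _TOY_LEXICON[caption]

def build_turkish_dataset(dataset: dict) -> dict:
    # Translate every caption while preserving the image -> captions mapping,
    # so existing captioning training code can consume the result unchanged.
    return {
        image: [translate_en_to_tr(c) for c in captions]
        for image, captions in dataset.items()
    }

turkish_dataset = build_turkish_dataset(english_dataset)
print(turkish_dataset["img_001.jpg"][0])  # bir köpek sahilde koşuyor
```

Because the mapping structure is preserved, the translated captions can be fed to the same captioning model that would otherwise train on the English data.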
Subject Keywords
Image captioning, Computer vision, Machine translation
URI
https://hdl.handle.net/11511/43292
DOI
https://doi.org/10.1109/siu.2017.7960638
Collections
Department of Computer Engineering, Conference / Seminar
Suggestions
Analysis of dataset, object tag, and object attribute components in novel object captioning
Şahin, Enes Muvahhid; Akar, Gözde; Department of Electrical and Electronics Engineering (2022-7)
Image captioning is a popular yet challenging task which lies at the intersection of Computer Vision and Natural Language Processing. A specific branch of image captioning, called Novel Object Captioning, has drawn attention in recent years. Different from general image captioning, Novel Object Captioning focuses on describing images with novel objects which are not seen during training. Recently, numerous image captioning approaches have been proposed in order to increase the quality of the generated captions for both gene...
Comparison of whole scene image caption models
Görgülü, Tuğrul; Ulusoy, İlkay; Department of Electrical and Electronics Engineering (2021-2-10)
Image captioning is one of the most challenging tasks in the deep learning area: automatically describing the content of an image using words and grammar. In recent years, studies have been published constantly to improve the quality of this task. However, a detailed comparison of all possible approaches has not been done yet, and the comparative performance of the solutions proposed in the literature remains unknown. Thus, this thesis aims to redress this problem by making a comparative analysis among six diff...
Motion estimation using complex discrete wavelet transform
Sarı, Hüseyin; Severcan, Mete; Department of Electrical and Electronics Engineering (2003)
The estimation of optical flow has become a vital research field in image sequence analysis, especially in the past two decades, with applications in many fields such as stereo optics, video compression, robotics and computer vision. In this thesis, the complex-wavelet-based algorithm for the estimation of optical flow developed by Magarey and Kingsbury is implemented and investigated. The algorithm is based on a complex version of the discrete wavelet transform (CDWT), which analyzes an image through blo...
Robust watermarking of images
Balcı, Salih Eren; Akar, Gözde; Department of Electrical and Electronics Engineering (2003)
Digital image watermarking has attracted great interest among researchers in the last decade. With such a large community providing a continuously growing list of proposed algorithms, the field is rapidly finding solutions to its problems. However, we are still far from complete success; therefore, more and more people are entering the field to make the watermarking idea useful and reliable for the digital world. Of these various watermarking algorithms, some outperform others in terms of basic watermarking ...
Automatic cartoon generation by learning the style of an artist
Kuruoğlu, Betül; Yarman Vural, Fatoş Tunay; Department of Computer Engineering (2012)
In this study, we suggest an algorithm for generating cartoons from face images automatically. The suggested method learns the drawing style of an artist and applies this style to the face images in a database to create cartoons. The training data consist of a set of face images and corresponding cartoons drawn by the same artist. Initially, a set of control points is labeled and indexed to characterize the face in the training data set for both images and corresponding caricatures. Then, their features are ...
Citation Formats
IEEE
N. Samet, P. Duygulu, and E. Akbaş, “Could we create a training set for image captioning using automatic translation? Görüntü Altyazılama için Otomatik Tercümeyle Eğitim Kümesi Oluşturulabilir mi?,” 2017, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/43292.