Kilickaya, Mert
Erdem, Erkut
Erdem, Aykut
Çakıcı, Ruket
Automatic image captioning, the process of producing a description for an image, is a very challenging problem which has only recently received interest from the computer vision and natural language processing communities. In this study, we present a novel data-driven image captioning strategy which, for a given image, finds the most visually similar image in a large dataset of image-caption pairs and transfers its caption as the description of the input image. Our novelty lies in employing a recently proposed high-level global image representation, named the meta-class descriptor, to better capture the semantic content of the input image for use in the retrieval process. Our experiments show that, compared to the baseline Im2Text model, our meta-class guided approach produces more accurate descriptions.
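The retrieval-and-transfer idea described above can be sketched in a few lines. This is an illustrative toy, not the paper's actual pipeline: the feature vectors are assumed to be precomputed global descriptors (in the paper, meta-class descriptors; here, arbitrary fixed-length vectors), and cosine similarity stands in for whatever distance the authors use.

```python
import math

def transfer_caption(query_feat, dataset_feats, dataset_captions):
    """Return the caption of the dataset image most similar to the query.

    Sketch of data-driven caption transfer: rank the dataset by cosine
    similarity of global image descriptors and copy the top caption.
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    best = max(range(len(dataset_feats)),
               key=lambda i: cosine(query_feat, dataset_feats[i]))
    return dataset_captions[best]

# Toy usage with made-up 4-D descriptors and captions:
feats = [[1.0, 0.0, 0.0, 0.0],
         [0.0, 1.0, 0.0, 0.0]]
caps = ["a dog on the beach", "a red car on the road"]
print(transfer_caption([0.9, 0.1, 0.0, 0.0], feats, caps))
# prints "a dog on the beach"
```

In practice the dataset side would be indexed with an approximate nearest-neighbour structure rather than scanned linearly, since the image-caption pool is large.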


Comparison of whole scene image caption models
Görgülü, Tuğrul; Ulusoy, İlkay; Department of Electrical and Electronics Engineering (2021-2-10)
Image captioning is one of the most challenging tasks in the deep learning area: automatically describing the content of an image using words and grammar. In recent years, studies have been published constantly to improve the quality of this task. However, a detailed comparison of all possible approaches has not yet been done, so the comparative performance of the solutions proposed in the literature remains unknown. Thus, this thesis aims to redress this problem by making a comparative analysis among six diff...
Data-driven image captioning via salient region discovery
Kilickaya, Mert; Akkuş, Burak Kerim; Çakıcı, Ruket; Erdem, Aykut; Erdem, Erkut; İkizler Cinbiş, Nazlı (Institution of Engineering and Technology (IET), 2017-09-01)
In the past few years, automatically generating descriptions for images has attracted a lot of attention in computer vision and natural language processing research. Among the existing approaches, data-driven methods have proven to be highly effective. These methods compare the given image against a large set of training images to determine a set of relevant images, then generate a description using the associated captions. In this study, the authors propose to integrate an object-based semantic image r...
Image Captioning with Unseen Objects
Demirel, Berkan; Cinbiş, Ramazan Gökberk; İkizler Cinbiş, Nazlı (2019-09-12)
Image caption generation is a long-standing and challenging problem at the intersection of computer vision and natural language processing. A number of recently proposed approaches utilize a fully supervised object recognition model within the captioning pipeline. Such models, however, tend to generate sentences that consist only of objects predicted by the recognition models, excluding instances of classes without labelled training examples. In this paper, we propose a new challenging scenario that ta...
Camera electronics and image enhancement software for infrared detector arrays
Küçükkömürler, Alper; Akın, Tayfun; Department of Environmental Engineering (2012)
This thesis aims to design and develop camera electronics and image enhancement software for infrared detector arrays. It first discusses the camera electronics suitable for infrared detector arrays, then concentrates on the image enhancement software implemented, including defective pixel correction, contrast enhancement, noise reduction, and pseudo-coloring. After that, the testing and results of the implemented algorithms are presented. Camera electronics and circuit operation frequency are selected c...
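Of the enhancement steps the abstract names, defective pixel correction is the simplest to illustrate. The sketch below replaces each flagged (stuck or dead) detector pixel with the median of its valid 3x3 neighbours; this is a common textbook approach, not necessarily the algorithm the thesis implements, and the frame/defect-map representation is assumed for illustration.

```python
def correct_defective_pixels(frame, defect_map):
    """Replace flagged pixels with the median of their valid 3x3 neighbours.

    frame      -- 2-D list of pixel intensities
    defect_map -- 2-D list of booleans; True marks a defective pixel
    Returns a corrected copy of the frame.
    """
    h, w = len(frame), len(frame[0])
    out = [row[:] for row in frame]
    for y in range(h):
        for x in range(w):
            if not defect_map[y][x]:
                continue  # pixel is healthy, keep as-is
            # Gather non-defective neighbours inside the image bounds.
            neigh = [frame[ny][nx]
                     for ny in range(max(0, y - 1), min(h, y + 2))
                     for nx in range(max(0, x - 1), min(w, x + 2))
                     if (ny, nx) != (y, x) and not defect_map[ny][nx]]
            if neigh:
                neigh.sort()
                out[y][x] = neigh[len(neigh) // 2]  # median of neighbours
    return out

# Toy usage: a stuck pixel (255) surrounded by a uniform 10-valued patch.
frame = [[10, 10, 10], [10, 255, 10], [10, 10, 10]]
defects = [[False, False, False], [False, True, False], [False, False, False]]
print(correct_defective_pixels(frame, defects)[1][1])
# prints 10
```

A real infrared pipeline would derive the defect map from calibration frames and run this on the sensor's raw output before contrast enhancement and noise reduction.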
Ates, Tugrul K.; Alatan, Abdullah Aydın (2011-05-18)
Generating in-between images from multiple views of a scene is a crucial task for both the computer vision and computer graphics fields. Photorealistic rendering, 3DTV, and robot navigation are some of the many applications that benefit from arbitrary view synthesis, if it is achieved in real time. GPUs excel at achieving high computation power by processing arrays of data in parallel, which makes them ideal for real-time computer vision applications. This paper proposes an arbitrary view rendering algorithm by usin...
Citation Formats
M. Kilickaya, E. Erdem, A. Erdem, N. İkizler Cinbiş, and R. Çakıcı, “Data-driven image captioning with meta-class based retrieval,” 2014, Accessed: 00, 2020. [Online]. Available: