OpenMETU
Semantic deep learning and adaptive clustering for handling multimodal multimedia information retrieval
Date
2024-01-01
Author
Sattari, Saeid
Yazıcı, Adnan
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats: 41 views, 0 downloads
Abstract
Multimedia data encompasses various modalities, including audio, visual, and text, necessitating the development of robust retrieval methods capable of harnessing these modalities to extract and retrieve semantic information from multimedia sources. This paper presents a highly scalable and versatile end-to-end framework for multimodal multimedia information retrieval. The core strength of this system lies in its capacity to learn semantic contexts within individual modalities and across different modalities, achieved through the utilization of deep neural models. These models are trained using combinations of queries and relevant shots obtained from query logs. One of the distinguishing features of this framework is its ability to create shot templates, representing videos that have not been encountered previously. To enhance retrieval performance, the system employs clustering techniques to retrieve shots similar to these templates. To address the inherent uncertainty in multimodal concepts, an improved variant of fuzzy clustering is applied. Additionally, a fusion method incorporating an OWA operator is introduced. This method employs various measures to aggregate ranked lists produced by multiple retrieval systems. The proposed approach leverages parallel processing and transfer learning to extract features from three distinct modalities, ensuring the adaptability and scalability of the framework. To assess its effectiveness and efficiency, the system is rigorously evaluated through experiments conducted on six widely recognized multimodal datasets. Remarkably, our approach outperforms previous studies in the literature on four of these datasets, achieving performance improvements ranging from 1.5% to 10.1% over the best reported results in those studies. The experimental findings, substantiated by statistical tests, conclusively establish the effectiveness of the proposed approach in the field of multimodal multimedia information retrieval.
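The abstract mentions fusing ranked lists from multiple retrieval systems with an OWA (Ordered Weighted Averaging) operator. The sketch below shows the generic Yager-style OWA aggregation applied to per-system relevance scores; it is an illustration of the general technique only, not the paper's specific variant, and the function names and example weights are hypothetical.

```python
def owa(scores, weights):
    """Ordered Weighted Averaging: sort the scores in descending
    order, then take a weighted sum with position-based weights."""
    if len(scores) != len(weights):
        raise ValueError("scores and weights must have equal length")
    ordered = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(weights, ordered))

def fuse_ranked_lists(score_lists, weights):
    """Fuse relevance scores from several retrieval systems.

    score_lists: list of dicts {doc_id: score}, one per system.
    Documents missing from a system's list get score 0.
    Returns doc ids ranked by their fused OWA score."""
    doc_ids = set().union(*score_lists)
    fused = {
        d: owa([s.get(d, 0.0) for s in score_lists], weights)
        for d in doc_ids
    }
    return sorted(fused, key=fused.get, reverse=True)
```

Because the weights attach to sorted positions rather than to particular systems, front-loaded weights such as `[0.6, 0.4]` emphasize the best score a document achieves anywhere (an "or-like" aggregation), while uniform weights reduce the operator to a plain average.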
Subject Keywords
Adaptive fuzzy clustering, Deep semantic learning, Information fusion, Multimodal multimedia retrieval, Ranked lists fusion
URI
https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85194492441&origin=inward
https://hdl.handle.net/11511/109897
Journal
Multimedia Tools and Applications
DOI
https://doi.org/10.1007/s11042-024-19312-7
Collections
Department of Computer Engineering, Article
Citation Formats
IEEE
S. Sattari and A. Yazıcı, “Semantic deep learning and adaptive clustering for handling multimodal multimedia information retrieval,” Multimedia Tools and Applications, pp. 0–0, 2024, Accessed: 00, 2024. [Online]. Available: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85194492441&origin=inward.