KNOWLEDGE GRAPH AUGMENTED MULTI-HOP QUESTION ANSWERING USING LARGE LANGUAGE MODELS
Date
2024-08
Author
Sağlam, Barış Deniz
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats: 208 views, 895 downloads
Abstract
This thesis explores the use of small to medium-sized large language models (LLMs) for multi-hop question answering. Because computational resources and latency impose significant constraints in real-world applications, smaller LLMs are often deployed. However, these smaller models generally lack the extensive parametric knowledge and advanced reasoning capabilities of their larger counterparts, such as GPT-4. To compensate for these limitations, this research investigates various augmentation strategies, notably knowledge graphs, which provide a structured representation of facts and relationships. The study addresses three questions: whether knowledge graphs improve the multi-hop question-answering capabilities of LLMs, what impact integrating entity-relation triplets with textual content has, and whether adaptation methods such as supervised fine-tuning or reinforcement learning with task-specific feedback improve joint entity-relation extraction performance. The thesis introduces a novel prompting technique, Connect-the-Entities (CTE), which extracts the relevant entity relations before answering the question, thereby improving performance on the MuSiQue-Ans dataset with reduced computational demand. Additionally, using a pre-built knowledge graph as an external knowledge source yields results comparable to baseline systems. Overall, this thesis contributes to the field by demonstrating how smaller LLMs can achieve enhanced question-answering performance through structured knowledge integration and advanced prompting techniques.
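
The Connect-the-Entities (CTE) idea summarized above, extracting the relevant entity-relation triplets first and then answering from them, can be pictured as a two-stage prompt. The Python sketch below is an illustration only, assuming a generic chat-completion backend; `call_llm`, the prompt wording, and the helper name are hypothetical and not taken from the thesis.

```python
# Minimal sketch of CTE-style two-stage prompting, assuming a generic
# chat-completion backend. `call_llm`, the prompt wording, and the helper
# name are hypothetical illustrations, not the thesis's actual code.
from typing import List


def call_llm(prompt: str) -> str:
    """Hypothetical wrapper: send `prompt` to a small/medium LLM, return its reply."""
    raise NotImplementedError("plug in your own model client here")


def connect_the_entities(question: str, passages: List[str]) -> str:
    """Answer a multi-hop question by first extracting bridging triplets."""
    context = "\n\n".join(passages)

    # Stage 1: ask the model to surface (entity, relation, entity) triplets
    # that connect the hops required by the question.
    triplet_prompt = (
        "Read the passages and list the (entity, relation, entity) triplets "
        "needed to answer the question, one per line.\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}\n\nTriplets:"
    )
    triplets = call_llm(triplet_prompt)

    # Stage 2: answer conditioned on the extracted triplets rather than the
    # full passages, keeping the second prompt short.
    answer_prompt = (
        "Using the triplets below, answer the question concisely.\n\n"
        f"Triplets:\n{triplets}\n\nQuestion: {question}\n\nAnswer:"
    )
    return call_llm(answer_prompt)
```

In this sketch the second prompt conditions only on the extracted triplets, which is one way a smaller model can keep its context short while still covering every hop of the question.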
Subject Keywords
multi-hop question answering, large language model, knowledge graph, question decomposition, prompt engineering
URI
https://hdl.handle.net/11511/111317
Collections
Graduate School of Informatics, Thesis
Citation Formats
B. D. Sağlam, “KNOWLEDGE GRAPH AUGMENTED MULTI-HOP QUESTION ANSWERING USING LARGE LANGUAGE MODELS,” M.S. - Master of Science, Middle East Technical University, 2024.