METADATA-GUIDED GENERATION OF DOMAIN-SPECIFIC PEER REVIEWS WITH LLMS
Date
2025-8-21
Author
Çavuş, Ezgi
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats: 3693 views, 0 downloads
Abstract
Large Language Models (LLMs) are increasingly used to support academic evaluation, yet they often struggle to capture the domain-specific criteria essential for rigorous peer review. While existing reviewer guidelines and research checklists address general concerns such as reproducibility, ethical considerations, acknowledgment of limitations, and societal impact, they often neglect discipline-specific criteria that vary with a paper's methodology or evaluation strategy. This thesis introduces a framework that automatically extracts domain-specific review questions from past evaluations in the OpenReview system and aligns them with new submissions using structured metadata derived from paper content, including methodology, datasets, and evaluation metrics. The results show that the proposed framework improves the specificity and contextual relevance of generated reviews. Compared to baseline models, it allows more precise control over the content and focus of generated reviews by explicitly grounding them in metadata and past evaluation patterns, even when overall performance metrics are similar. The system improves traceability, reduces hallucinated content, and increases the interpretability of automated peer review.
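The alignment step the abstract describes — matching past review questions to a new submission via shared metadata — can be sketched minimally as follows. The data structures, tag names, and scoring by simple set overlap are illustrative assumptions for this sketch, not the thesis's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class ReviewQuestion:
    text: str
    tags: set  # metadata tags (methodology, dataset, metric) the question applies to

def align_questions(paper_metadata: set, question_bank: list) -> list:
    """Rank stored review questions by tag overlap with a new paper's metadata."""
    scored = []
    for q in question_bank:
        overlap = len(q.tags & paper_metadata)
        if overlap > 0:  # discard questions with no shared metadata
            scored.append((overlap, q.text))
    # Higher overlap first; ties broken alphabetically for determinism.
    scored.sort(key=lambda t: (-t[0], t[1]))
    return [text for _, text in scored]

# Hypothetical question bank mined from past reviews.
bank = [
    ReviewQuestion("Is the ablation over model components complete?",
                   {"transformer", "ablation"}),
    ReviewQuestion("Does the BLEU evaluation use a standard tokenizer?",
                   {"bleu", "machine-translation"}),
    ReviewQuestion("Are dataset licensing terms respected?", {"dataset"}),
]

# Metadata extracted from a hypothetical new submission.
paper = {"transformer", "bleu", "dataset"}
for question in align_questions(paper, bank):
    print(question)
```

In practice the thesis grounds generation in richer structured metadata; this sketch only shows why metadata overlap gives traceable control over which review questions are surfaced for a given submission.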
Subject Keywords
domain specific peer review, natural language processing, question generation, metadata extraction
URI
https://hdl.handle.net/11511/115581
Collections
Graduate School of Informatics, Thesis
Citation Formats
E. Çavuş, “METADATA-GUIDED GENERATION OF DOMAIN-SPECIFIC PEER REVIEWS WITH LLMS,” M.S. - Master of Science, Middle East Technical University, 2025.