Show/Hide Menu
Hide/Show Apps
Logout
Türkçe
Türkçe
Search
Search
Login
Login
OpenMETU
OpenMETU
About
About
Open Science Policy
Open Science Policy
Open Access Guideline
Open Access Guideline
Postgraduate Thesis Guideline
Postgraduate Thesis Guideline
Communities & Collections
Communities & Collections
Help
Help
Frequently Asked Questions
Frequently Asked Questions
Guides
Guides
Thesis submission
Thesis submission
MS without thesis term project submission
MS without thesis term project submission
Publication submission with DOI
Publication submission with DOI
Publication submission
Publication submission
Supporting Information
Supporting Information
General Information
General Information
Copyright, Embargo and License
Copyright, Embargo and License
Contact us
Contact us
PROMPT INJECTION ATTACKS ON LARGE LANGUAGE MODELS: A SYSTEMATIC LITERATURE REVIEW
Download
e269321 Fatih Bayhan Term Project.pdf
Date
2025-6-11
Author
Bayhan, Fatih
Metadata
Show full item record
This work is licensed under a
Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License
.
Item Usage Stats
180
views
50
downloads
Cite This
The use of Large Language Models has increased significantly after the release of ChatGPT in 2022. LLMs are used in various applications including chatbots and generative artificial intelligence applications. LLMs are integrated into these applications and perform their functionality at the backend. Like every new technology, LLMs or LLM-integrated applications are vulnerable to attacks. Among these attacks, prompt injection attacks have emerged in recent years due to the widespread use of LLM-integrated applications. These attacks aim to manipulate LLM outputs and cause the model to provide incorrect or harmful responses. This study focuses on examining existing research on prompt injection attacks. In this scope, the main aim of the study is to to classify attack types, determine current defense mechanisms, and provide guidelines to enhance LLM security. A systematic review study on 36 papers published between 2022 and 2025 was examined. The papers focusing on either prompt injection attacks or defense mechanisms were selected from the IEEE, WOS, and Scopus databases. A qualitative analysis was conducted to categorize attack methods, defense techniques, and highlight their contributions and future research directions to promote secure LLM development research.
Subject Keywords
Prompt injection
,
LLM
,
LLM security
URI
https://hdl.handle.net/11511/115034
Collections
Graduate School of Informatics, Term Project
Citation Formats
IEEE
ACM
APA
CHICAGO
MLA
BibTeX
F. Bayhan, “PROMPT INJECTION ATTACKS ON LARGE LANGUAGE MODELS: A SYSTEMATIC LITERATURE REVIEW,” M.S. - Master Of Science Without Thesis, Middle East Technical University, 2025.