Can AI code like a human: A Critical analysis of AI’s understanding in code generation
Date: 2024-04-22
Author: Akkuş, Sami
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats: 490 views, 191 downloads
Large language models (LLMs) have become so influential that they have transformed many areas of software development, including code generation. This thesis investigates GPT-3.5's ability to achieve human-like understanding in code generation. Its main purpose is to answer how adding more context, extracting explicit intention, and simulating multi-agent systems based on distributed cognition and the extended mind thesis affect the semantic understanding of LLMs in code generation. The success of LLMs in code generation is evaluated on the HumanEval dataset, which consists of real-world interview-style questions focusing on logical reasoning, problem solving, and simple mathematics. The thesis also examines to what extent LLMs can mimic human cognitive processes, evaluates this from the viewpoints of functionalism and the Chinese Room Argument, and asks: are we in the era of meeting the requirements of Strong AI? Furthermore, it demonstrates the limitations of LLMs arising from their dependence on training data, inherent biases, and lack of environmental interaction, which restrict their originality and their understanding of intent. The findings show that while LLMs are very good at handling syntax, true semantic understanding remains a significant challenge for them.
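The HumanEval benchmark mentioned in the abstract scores a model by functional correctness: a generated completion counts as a pass only if it satisfies the problem's hidden unit tests. A minimal sketch of that check is below; the helper name `run_candidate` and the sample problem are illustrative, not taken from the thesis or the official OpenAI harness (which additionally sandboxes execution and aggregates results into pass@k).

```python
def run_candidate(candidate_src: str, test_src: str, entry_point: str) -> bool:
    """Execute a model-generated completion against its unit tests.

    Returns True only if every assertion in test_src passes.
    Note: exec() on untrusted model output is unsafe; real harnesses
    sandbox this step.
    """
    namespace: dict = {}
    try:
        exec(candidate_src, namespace)   # define the generated function
        exec(test_src, namespace)        # define check(...)
        namespace["check"](namespace[entry_point])
        return True
    except Exception:
        return False

# Illustrative problem in the HumanEval style: a completion plus its tests.
candidate = """
def add(a, b):
    return a + b
"""
tests = """
def check(fn):
    assert fn(2, 3) == 5
    assert fn(-1, 1) == 0
"""

print(run_candidate(candidate, tests, "add"))  # a correct completion passes
```

Because the criterion is purely behavioral (does the code pass the tests?), it measures exactly the syntax-versus-semantics gap the abstract describes: a model can produce well-formed code that still fails the tests when it has not grasped the intended semantics.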
Subject Keywords: AI generativity, distributed cognition, code generation, originality, bias
URI: https://hdl.handle.net/11511/109525
Collections: Graduate School of Informatics, Thesis
Citation (IEEE):
S. Akkuş, “Can AI code like a human: A Critical analysis of AI’s understanding in code generation,” M.S. - Master of Science, Middle East Technical University, 2024.