Can AI code like a human: A Critical analysis of AI’s understanding in code generation

2024-4-22
Akkuş, Sami
Large language models (LLMs) have become so widely adopted that they have transformed many areas of software development, including code generation. This thesis investigates GPT-3.5's ability to achieve human-like understanding in code generation. Its main purpose is to examine how adding more context, extracting explicit intent, and simulating multi-agent systems grounded in distributed cognition and the extended mind thesis affect the semantic understanding of LLMs in code generation. Code-generation performance is evaluated on the HumanEval dataset, which consists of real-world, interview-style programming problems that test logical reasoning, problem solving, and simple mathematics. The thesis also examines the extent to which LLMs can mimic human cognitive processes, evaluates this from the perspectives of functionalism and the Chinese Room Argument, and asks whether we have reached the era of Strong AI. Furthermore, it demonstrates the limitations of LLMs arising from their dependence on training data, inherent biases, and lack of environmental interaction, which restrict their originality and understanding of intent. The findings show that while LLMs handle syntax very well, genuine semantic understanding remains a significant challenge.
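
The thesis scores generated code on the HumanEval benchmark, which judges a completion by whether it passes hand-written functional-correctness tests. As a rough, illustrative sketch only (not the thesis's actual evaluation harness), the Python snippet below shows how a single HumanEval-style task might be checked; the names PROMPT, COMPLETION, TEST, and passes_tests are hypothetical placeholders introduced here for illustration.

import contextlib
import io

# Hypothetical HumanEval-style task (illustrative values, not from the thesis):
# a prompt with a function signature and docstring, a model-generated
# completion, and a test that asserts functional correctness.
PROMPT = 'def add(a, b):\n    """Return the sum of a and b."""\n'
COMPLETION = "    return a + b\n"  # stands in for a GPT-3.5 completion
TEST = (
    "def check(candidate):\n"
    "    assert candidate(2, 3) == 5\n"
    "    assert candidate(-1, 1) == 0\n"
    "check(add)\n"
)

def passes_tests(prompt: str, completion: str, test: str) -> bool:
    """Run prompt + completion + test in a fresh namespace and report whether
    all assertions hold (a rough pass@1-style functional-correctness check)."""
    program = prompt + completion + "\n" + test
    namespace = {}
    try:
        with contextlib.redirect_stdout(io.StringIO()):  # silence candidate output
            exec(program, namespace)  # untrusted model code: sandbox this in practice
        return True
    except Exception:
        return False

if __name__ == "__main__":
    print("passed" if passes_tests(PROMPT, COMPLETION, TEST) else "failed")

A task counts as solved only if every assertion passes, which is why the benchmark measures functional correctness rather than surface-level syntactic plausibility.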
Citation Formats
S. Akkuş, “Can AI code like a human: A Critical analysis of AI’s understanding in code generation,” M.S. - Master of Science, Middle East Technical University, 2024.