Comparative Analysis Of The Commonly Used Code Generated By Large Language Models For MISRA C Compliance
Date: 2026-1
Author: Öztop, Umut
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
This report investigates whether Large Language Models (LLMs) such as ChatGPT and Gemini can generate safety-critical C code compliant with MISRA C:2012. We tasked six models with implementing the CRC16 and COBS algorithms, verifying the output with PC-lint Plus and functional tests. Our results show a clear drop in compliance as algorithmic complexity increases. While the models excelled at the simple CRC16 task (Gemini averaging 0.3 violations), the memory-intensive COBS task caused widespread safety failures, especially regarding single-point-of-exit rules. We observed a distinct "compliance paradox": Claude produced the fewest violations but frequently generated broken code, whereas ChatGPT achieved 100% functional accuracy but hurt its safety score by adding unrequested complexity. Ultimately, while LLMs show promise as prototyping assistants, they cannot yet autonomously generate certification-ready embedded software.
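To make the evaluation concrete, here is a minimal sketch of the kind of artifact the study asked the models to produce: a CRC16 routine written in a MISRA-leaning style (fixed-width types, defensive NULL check, and a single point of exit per MISRA C:2012 Rule 15.5, the rule the abstract highlights). The report does not reproduce the models' code; the CRC-16/CCITT-FALSE parameters (polynomial 0x1021, initial value 0xFFFF) are an assumption for this illustration, not necessarily the variant used in the study.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch only: CRC-16/CCITT-FALSE in a MISRA-leaning style.
 * Polynomial 0x1021 and init 0xFFFFU are assumptions for this example. */
static uint16_t crc16_ccitt(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFFU;
    size_t   i;
    uint8_t  bit;

    if (data != NULL) {                       /* defensive check, no early return */
        for (i = 0U; i < len; i++) {
            crc ^= (uint16_t)((uint16_t)data[i] << 8);
            for (bit = 0U; bit < 8U; bit++) {
                if ((crc & 0x8000U) != 0U) {
                    crc = (uint16_t)((uint16_t)(crc << 1) ^ 0x1021U);
                } else {
                    crc = (uint16_t)(crc << 1);
                }
            }
        }
    }
    return crc;  /* single point of exit (MISRA C:2012 Rule 15.5) */
}
```

For this parameter set, the standard check input "123456789" yields 0x29B1, which is how such a routine would be functionally verified alongside the static-analysis pass.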
Subject Keywords: Large language models (LLMs), MISRA C:2012, Static analysis, Safety-critical systems, Embedded software engineering
URI: https://hdl.handle.net/11511/118338
Collections: Graduate School of Informatics, Term Project
Citation (IEEE):
U. Öztop, “Comparative Analysis Of The Commonly Used Code Generated By Large Language Models For MISRA C Compliance,” M.S. - Master Of Science Without Thesis, Middle East Technical University, 2026.