An LLM-Driven Framework For Automatic Curriculum Learning To Enhance Generalization In Open-Ended Reinforcement Learning
Download
olcaorakci-thesis.pdf
Olca Orakcı Tez Dökümanları.pdf
Date
2026-01-14
Author
Orakcı, Olca
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Abstract
Reinforcement Learning research has long sought to achieve generalization. Benchmark environments have gradually evolved over the years to support this demanding goal. As environments became more complex and generalization-oriented, new research fields emerged, such as Open-Ended Learning (OEL). OEL is the study of learning settings in which a large set of skills exists, some of which may or may not be required for each situation represented in the environment. These complex environments necessitated policies that do not memorize environments but learn generalized skills. With the development of methods such as Automated Curriculum Learning (ACL), this challenging task became more achievable. The focus of this thesis is to introduce a standardized way to use Large Language Models (LLMs) as an ACL method, called Adaptive Reasoning Curriculum (ARC), in a novel, high-performing, sample-efficient, and reproducible manner, and to demonstrate the method’s effectiveness in a state-of-the-art massively multi-agent OEL environment, Neural MMO 2. The experimental results show that the ARC framework outperforms both an expert curriculum and several ACL methods in average return and sample efficiency. With auxiliary helper tools, a dashboard, and a validation tool, the ARC framework aims to make OEL research with LLMs available to all researchers and enthusiasts through open-source, standardized, and shareable adapters, orchestrators, and experiment files.
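The abstract describes using an LLM as an automated curriculum generator: the model observes the agent's recent performance and proposes the next batch of training tasks. The loop below is a minimal, self-contained sketch of that idea under stated assumptions; the names (`propose_tasks`, `train_step`, `run_curriculum`) are illustrative, the "LLM" is replaced by a simple frontier heuristic, and none of this reflects the ARC framework's actual API or the Neural MMO 2 interface.

```python
import random

def propose_tasks(history, n=3):
    """Stand-in for an LLM curriculum proposal (hypothetical interface):
    pick integer task difficulties near the agent's current success frontier."""
    successes = [d for d, s in history if s >= 0.5]
    frontier = max(successes) if successes else 1
    return [max(1, frontier + random.choice([-1, 0, 1])) for _ in range(n)]

def train_step(skill, difficulty):
    """Toy learner: success probability falls as the task outpaces the skill.
    Success yields a larger skill gain than failure."""
    success = random.random() < 1.0 / (1.0 + max(0.0, difficulty - skill))
    return skill + (0.3 if success else 0.05), success

def run_curriculum(rounds=20, seed=0):
    """Alternate between proposing tasks from history and training on them."""
    random.seed(seed)
    skill, history = 0.0, []
    for _ in range(rounds):
        for d in propose_tasks(history):
            skill, ok = train_step(skill, d)
            history.append((d, 1.0 if ok else 0.0))
    return skill

print(run_curriculum())
```

The key design point this toy captures is the feedback loop: the proposer conditions on the agent's outcome history rather than following a fixed schedule, which is what distinguishes automated curricula from expert-written ones.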
Subject Keywords
Generalization, Reinforcement Learning, Open-Ended Learning, Automated Curriculum Learning, Large Language Models
URI
https://hdl.handle.net/11511/118414
Collections
Graduate School of Informatics, Thesis
Citation (IEEE)
O. Orakcı, “An LLM-Driven Framework For Automatic Curriculum Learning To Enhance Generalization In Open-Ended Reinforcement Learning,” M.S. - Master of Science, Middle East Technical University, 2026.