ADAPTIVE PARAMETER OPTIMIZATION FOR REINFORCEMENT LEARNING-BASED SPARK JOB SCHEDULING
Download
BurakSen_Thesis_2235646.pdf
ceng-b.sen.pdf
Date
2025-08-28
Author
Şen, Burak
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats
1032 views, 0 downloads
Abstract
This study investigates adaptive parameter optimization techniques for Reinforcement Learning-based Apache Spark job scheduling. Traditional Reinforcement Learning-based scheduling approaches suffer from the limitations of fixed hyperparameter configurations, requiring extensive manual tuning and often failing to adapt optimally to diverse workload characteristics. The research develops and evaluates adaptive mechanisms that enhance Proximal Policy Optimization (PPO) effectiveness through dynamic parameter adjustment. Four adaptive approaches are proposed: adaptive clipping that dynamically adjusts policy update constraints based on Kullback-Leibler divergence feedback, adaptive learning rate mechanisms that modulate optimization step sizes according to training progress, a combined approach leveraging both techniques simultaneously, and enhanced Generalized Advantage Estimation for improved value function approximation. The experimental evaluation is conducted within a comprehensive discrete-event simulator that accurately models Apache Spark execution semantics. The proposed mechanisms are tested using Transaction Processing Performance Council Benchmark H (TPC-H) workloads across multiple random seeds to ensure statistical rigor and reproducibility. The adaptive mechanisms are formulated under the assumptions of policy gradient optimization theory and incorporate feedback-based parameter adjustment strategies. Sample problems are considered, and the solutions obtained with the adaptive mechanisms are compared with those achieved by a baseline implementation. The results reveal that, with proper adaptive parameter adjustment, the proposed mechanisms may become advantageous over traditional fixed-parameter approaches in terms of convergence stability, exploration effectiveness, and optimization quality.
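The feedback-based parameter adjustment described in the abstract (adaptive clipping and adaptive learning rate driven by KL divergence) can be sketched roughly as follows. This is an illustrative minimal example, not the thesis's actual implementation: the KL target, scale factor, and bounds are assumed values chosen only to demonstrate the control loop.

```python
def adapt_ppo_params(measured_kl, clip_eps, lr,
                     kl_target=0.01, scale=1.5,
                     clip_bounds=(0.05, 0.3), lr_bounds=(1e-5, 1e-3)):
    """Sketch of KL-feedback adjustment of PPO's clip range and learning rate.

    If the measured policy KL divergence overshoots the target, the policy
    moved too far: tighten the clip range and shrink the step size. If it
    undershoots, the policy barely moved: relax both to encourage exploration.
    All thresholds and bounds here are hypothetical.
    """
    if measured_kl > 2.0 * kl_target:      # update too aggressive
        clip_eps /= scale
        lr /= scale
    elif measured_kl < 0.5 * kl_target:    # update too conservative
        clip_eps *= scale
        lr *= scale
    # Keep both parameters within safe ranges.
    clip_eps = min(max(clip_eps, clip_bounds[0]), clip_bounds[1])
    lr = min(max(lr, lr_bounds[0]), lr_bounds[1])
    return clip_eps, lr
```

In a training loop, this would be called once per PPO update with the empirical KL between the old and new policies; a fixed-parameter baseline simply skips the call.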
Subject Keywords
Reinforcement Learning, Spark Scheduling, Graph Neural Networks, Adaptive Optimization, Job Scheduling
URI
https://hdl.handle.net/11511/116039
Collections
Graduate School of Natural and Applied Sciences, Thesis
Citation Formats
IEEE
B. Şen, “ADAPTIVE PARAMETER OPTIMIZATION FOR REINFORCEMENT LEARNING-BASED SPARK JOB SCHEDULING,” M.S. - Master of Science, Middle East Technical University, 2025.