Reinforcement Learning Based Adaptive Blocklength and MCS for Optimizing Age Violation Probability
Date: 2023-01-01
Authors: Ozkaya, Aysenur; Topbas, Ahsen; Ceran Arslan, Elif Tuğçe
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats: 25 views, 36 downloads
Abstract
As a measure of the freshness of data, Age of Information (AoI) has become an essential performance metric in status update applications with stringent timeliness constraints. This study employs adaptive strategies to minimize the novel, information freshness-based performance metric age violation probability (AVP), the probability of the instantaneous age exceeding a predefined constraint, in short packet communications (SPC). AVP can be considered one of the key performance indicators (KPIs) in 5G Ultra-Reliable Low Latency Communications (URLLC), and it is expected to gain more importance in 6G technologies, especially in extreme URLLC (xURLLC). Two distinct approaches are considered: the first focuses on adaptively selecting the blocklengths with either imperfect or missing channel state information exploiting finite blocklength theory approximations. The second involves dynamically choosing the modulation and coding scheme (MCS) to minimize the AVP under stringent timeliness constraints and non-asymptotic information theory bounds. In the context of adaptive blocklength selection, state-aggregated value iteration, Q-learning algorithms, and finite blocklength theory approximations are leveraged to adjust blocklengths to achieve low age violation probabilities adaptively. The simulation results highlight the effectiveness of these algorithms in minimizing age violation probabilities compared to the fixed blocklengths under varying channel conditions. Additionally, constructing a deep reinforcement learning (DRL) framework, we propose a deep Q-network policy for the dynamic selection of the modulation and coding scheme among the available MCSs defined for URLLC systems. Through comprehensive simulations, we demonstrate the superiority of the proposed adaptive methods over traditional benchmark methods.
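The adaptive blocklength idea in the abstract — a Q-learning agent choosing a blocklength each slot so the instantaneous age stays below a deadline — can be sketched in miniature. This is a hedged toy illustration, not the paper's actual model: the candidate blocklengths, toy decoding probabilities, slot costs, deadline, and learning parameters below are all illustrative assumptions, and the reward simply penalizes age-violation events.

```python
import random

# Toy tabular Q-learning for adaptive blocklength selection (illustrative
# sketch only; the model parameters below are assumptions, not the paper's).

BLOCKLENGTHS = [100, 200, 400]                    # candidate blocklengths (symbols)
SUCCESS_PROB = {100: 0.6, 200: 0.85, 400: 0.97}   # toy decoding probabilities
SLOT_COST = {100: 1, 200: 2, 400: 4}              # slots consumed per transmission
AGE_MAX = 20                                      # cap on the tracked age state
DEADLINE = 8                                      # age-violation threshold

def train(episodes=2000, steps=50, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    # Q[age][action]: value of choosing each blocklength at a given age
    Q = [[0.0] * len(BLOCKLENGTHS) for _ in range(AGE_MAX + 1)]
    for _ in range(episodes):
        age = 0
        for _ in range(steps):
            # epsilon-greedy action selection over blocklength indices
            if rng.random() < eps:
                a = rng.randrange(len(BLOCKLENGTHS))
            else:
                a = max(range(len(BLOCKLENGTHS)), key=lambda i: Q[age][i])
            n = BLOCKLENGTHS[a]
            elapsed = SLOT_COST[n]
            if rng.random() < SUCCESS_PROB[n]:
                next_age = min(elapsed, AGE_MAX)        # fresh update delivered
            else:
                next_age = min(age + elapsed, AGE_MAX)  # decoding failed, age grows
            # reward penalizes age-violation events (age exceeding DEADLINE)
            r = -1.0 if next_age > DEADLINE else 0.0
            Q[age][a] += alpha * (r + gamma * max(Q[next_age]) - Q[age][a])
            age = next_age
    return Q

Q = train()
# Greedy policy: preferred blocklength index for each age state
policy = [max(range(len(BLOCKLENGTHS)), key=lambda i: Q[s][i])
          for s in range(AGE_MAX + 1)]
```

The trade-off the agent learns is the one the abstract describes: short blocklengths finish quickly but decode unreliably, long ones are reliable but consume more slots, and the age-violation penalty steers the choice per age state.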
Subject Keywords: adaptive modulation and coding, age of information, dynamic programming, encoding, fading channels, finite blocklength, information age, measurement, minimization, modulation, reinforcement learning, throughput, ultra-reliable low-latency communication
URI: https://hdl.handle.net/11511/106288
Journal: IEEE Access
DOI: https://doi.org/10.1109/access.2023.3326748
Collections: Department of Electrical and Electronics Engineering, Article
Citation Formats
IEEE
A. Ozkaya, A. Topbas, and E. T. Ceran Arslan, “Reinforcement Learning Based Adaptive Blocklength and MCS for Optimizing Age Violation Probability,” IEEE Access, pp. 0–0, 2023, Accessed: 00, 2023. [Online]. Available: https://hdl.handle.net/11511/106288.