Compact Frequency Memory for Reinforcement Learning with Hidden States.
Date
2019-10-28
Author
Polat, Faruk
Cilden, Erkin
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats: 236 views, 0 downloads
Abstract
Memory-based reinforcement learning approaches keep track of the agent's past experiences in environments with hidden states. This may require extensive use of memory, which limits the practicality of these methods in real-life problems. The motivation behind this study is the observation that less frequent transitions provide more reliable information about the current state of the agent in ambiguous environments. In this work, a selective memory approach based on the frequencies of transitions is proposed to avoid keeping transitions that are unrelated to the agent's current state. Experiments show that the use of a compact and selective memory may improve and speed up the learning process.
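The abstract describes the idea only at a high level. The Python sketch below is an illustration of that idea, not the authors' published algorithm: the class name, the capacity and rarity_threshold parameters, and the rule "retain only infrequent transitions" are assumptions made for the example.

# Illustrative sketch only: NOT the paper's algorithm, just a minimal
# frequency-filtered transition memory under assumed names and parameters.
from collections import Counter, deque

class CompactFrequencyMemory:
    """Bounded memory that prefers to retain rare transitions.

    Motivation (from the abstract): in ambiguous, hidden-state environments,
    infrequent transitions carry more reliable information about the agent's
    true state, so the memory keeps those and discards common ones.
    """

    def __init__(self, capacity=64, rarity_threshold=5):
        self.capacity = capacity                  # max transitions to remember
        self.rarity_threshold = rarity_threshold  # counts above this are "too common"
        self.counts = Counter()                   # global frequency of each transition
        self.memory = deque(maxlen=capacity)      # retained rare transitions

    def observe(self, obs, action, next_obs):
        key = (obs, action, next_obs)
        self.counts[key] += 1
        if self.counts[key] <= self.rarity_threshold:
            # Still infrequent: worth remembering.
            self.memory.append(key)
        else:
            # Drop stored copies of transitions that have become frequent.
            self.memory = deque(
                (t for t in self.memory if self.counts[t] <= self.rarity_threshold),
                maxlen=self.capacity,
            )

    def context(self):
        """Return the retained rare transitions, e.g. to help disambiguate the state."""
        return tuple(self.memory)


if __name__ == "__main__":
    mem = CompactFrequencyMemory(capacity=8, rarity_threshold=2)
    # A common (ambiguous) transition repeated many times is filtered out,
    # while the rare one is kept as a disambiguating landmark.
    for _ in range(10):
        mem.observe("corridor", "forward", "corridor")
    mem.observe("corridor", "forward", "junction")
    print(mem.context())  # only the rare corridor->junction transition remains

In this toy run, the repeated corridor-to-corridor transition is eventually discarded, while the single corridor-to-junction transition is retained, reflecting the paper's premise that rare transitions are the informative ones under perceptual ambiguity.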
Subject Keywords
Reinforcement learning, Memory-based learning, Compact frequency memory
URI
https://hdl.handle.net/11511/57829
DOI
https://doi.org/10.1007/978-3-030-33792-6_26
Conference Name
PRIMA: International Conference on Principles and Practice of Multi-Agent Systems
Collections
Department of Computer Engineering, Conference / Seminar
Suggestions
OpenMETU Core
Using Frequencies of Transitions to Improve Reinforcement Learning with Hidden States
Aydın, Hüseyin; Polat, Faruk; Çilden, Erkin; Department of Computer Engineering (2022-8)
Reinforcement learning problems with hidden states suffer from the ambiguity of the environment, since the ambiguity in the agent's perception may prevent the agent from estimating its current state correctly. Therefore, constructing a solution without using an external memory may be extremely difficult or even impossible sometimes. In an ambiguous environment, frequencies of the transitions can provide more reliable information and hence it may lead us to construct more efficient and effective memory inst...
Joint and interactive effects of trust and (inter) dependence on relational behaviors in long-term channel dyads
Yilmaz, C; Sezen, B; Özdemir, Özlem (Elsevier BV, 2005-04-01)
The authors investigate the effects of trust on the relational behaviors of firms in long-term channel dyads across different interdependence structures. Based on the long-term nature of the empirical setting, trust is posited to exert a positive effect on the emergence of relational behaviors in all interdependence conditions. This positive effect of trust is hypothesized to be stronger in highly and symmetrically interdependent channel dyads than in low-interdependence-type symmetric dyads. In addition, f...
Recursive Compositional Reinforcement Learning for Continuous Control (Sürekli Kontrol Uygulamaları için Özyinelemeli Bileşimsel Pekiştirmeli Öğrenme)
Tanik, Guven Orkun; Ertekin Bolelli, Şeyda (2022-01-01)
Compositional and temporal abstraction is the key to improving learning and planning in reinforcement learning. Modern real-world control problems call for continuous control domains and robust, sample-efficient and explainable control frameworks. We are presenting a framework for recursively composing control skills to solve compositional and progressively complex tasks. The framework promotes reuse of skills and, as a result, is quickly adaptable to new tasks. The decision-tree can be observed, providing insi...
Closed-form sample probing for training generative models in zero-shot learning
Çetin, Samet; Cinbiş, Ramazan Gökberk; Department of Computer Engineering (2022-2-10)
Generative modeling based approaches have led to significant advances in generalized zero-shot learning over the past few years. These approaches typically aim to learn a conditional generator that synthesizes training samples of classes conditioned on class embeddings, such as attribute based class definitions. The final zero-shot learning model can then be obtained by training a supervised classification model over the real and/or synthesized training samples of seen and unseen classes, combined. Therefor...
Probability learning in normal and parkinson subjects: the effect of reward, context, and uncertainty
Erdeniz, Burak; Gökçay, Didem; Department of Cognitive Sciences (2007)
In this thesis, the learning of probabilistic relationships between stimulus-action pairs is investigated under the probability learning paradigm. The effect of reward is investigated in the first three experiments. Additionally, the effect of context and uncertainty is investigated in the second and third experiments, respectively. The fourth experiment is the replication of the second experiment with a group of Parkinson patients where the effect of dopamine medication on probability learning is studied. ...
Citation Formats
IEEE
F. Polat and E. Cilden, “Compact Frequency Memory for Reinforcement Learning with Hidden States.,” presented at the PRIMA: International Conference on Principles and Practice of Multi-Agent Systems, Turin, Italy, 2019, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/57829.