Memory Efficient Factored Abstraction for Reinforcement Learning

Sahin, Coskun
Cilden, Erkin
Polat, Faruk
Classical reinforcement learning techniques are often inadequate for problems with large state spaces due to the curse of dimensionality. If states can be represented as a set of variables, the environment can be modeled more compactly. Automatic detection and use of temporal abstractions during learning has proven effective in increasing learning speed. In this paper, we propose a factored automatic temporal abstraction method based on an existing temporal abstraction strategy, namely the extended sequence tree algorithm, which handles state differences via state variable changes. The proposed method has been shown to provide significant memory gain on selected benchmark problems.
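The factored view described above treats each state as a tuple of variables and tracks which variables change across a transition, rather than storing whole states. A minimal sketch of that bookkeeping is below; the function name and state layout are illustrative assumptions, not the authors' implementation.

```python
def changed_variables(state, next_state):
    """Return indices of state variables that differ between two factored states.

    Hypothetical helper: storing only the changed-variable indices (instead of
    full states) is the kind of compaction that factored abstraction enables.
    """
    return [i for i, (a, b) in enumerate(zip(state, next_state)) if a != b]


# Example: a 3-variable state (x, y, has_key); only x changes in this step.
s = (2, 5, 0)
s_next = (3, 5, 0)
print(changed_variables(s, s_next))  # → [0]
```

Recording transitions this way lets an abstraction tree branch on the few variables that actually change, which is where the memory gain reported in the abstract would come from.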
Citation Formats
C. Sahin, E. Cilden, and F. Polat, “Memory Efficient Factored Abstraction for Reinforcement Learning,” 2015, Accessed: 00, 2020. [Online]. Available: