Improving reinforcement learning using distinctive clues of the environment

Demir, Alper
Effective decomposition and abstraction have been shown to improve the performance of reinforcement learning. An agent can use clues from the environment either to partition the problem into sub-problems or to be informed about its progress in a given task. In a fully observable environment such clues may come from subgoals, while in a partially observable environment they may be provided by unique experiences. The contribution of this thesis is twofold: first, improvements over automatic subgoal identification and option generation in fully observable environments are proposed; then, an automatic landmark identification method and an anchor-based guiding mechanism for partially observable environments are introduced. Moreover, for both types of problems, the thesis proposes an overall framework that is shown to outperform baseline learning algorithms on several benchmark domains.