Bistable Recurrent Cells and Belief Filtering for Q-learning in Partially Observable Markov Decision Processes
Lambrechts, Gaspard
Supervisor(s): Ernst, Damien
Defense date: 24-jui-2021/25-jui-2021 • Permanent URL: http://hdl.handle.net/2268.2/11474
Details
Title: Bistable Recurrent Cells and Belief Filtering for Q-learning in Partially Observable Markov Decision Processes
Translated title: [fr] Cellules récurrentes bistables et filtrage de la distribution sur les états pour le Q-learning dans les processus de décisions markoviens partiellement observables
Author: Lambrechts, Gaspard
Defense date: 24-jui-2021/25-jui-2021
Supervisor(s): Ernst, Damien
Jury member(s): Louppe, Gilles; Drion, Guillaume; Bolland, Adrien
Language: English
Number of pages: 74
Keywords: [en] Reinforcement Learning; Belief Filtering; POMDP; Deep Recurrent Q-Network; DRQN; Online Fitted Q-Iteration; OFQI; RNN; Bistable Recurrent Cell; BRC; Q-Learning; Markov Decision Process; MDP; Bistability; Target Network; Partially Observable Markov Decision Process; Recurrent Neural Network; Mutual Information; RL
Discipline(s): Engineering, computing & technology > Computer science
Target audience: Researchers; Professionals in the field; Students
Institution(s): Université de Liège, Liège, Belgium
Degree: Master's degree in data science engineering, specialized focus
Faculty: Master's theses of the Faculty of Applied Sciences
Abstract
[en] In this master's thesis, reinforcement learning (RL) methods are used to learn (near-)optimal policies in several Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs). More precisely, Q-learning and recurrent Q-learning techniques are used. Some of the considered POMDPs require a strong memorisation ability in order to achieve optimal decision making. In POMDPs, RL techniques usually rely on function approximators that take variable-length sequences of observations as input. Recurrent neural networks (RNNs) are thus a natural choice for such approximators. This work builds on the recently introduced bistable recurrent cells, namely the bistable recurrent cell (BRC) and the recurrently neuromodulated BRC (nBRC), which have been empirically shown to provide significantly better long-term memory than standard cells such as the long short-term memory (LSTM) and the gated recurrent unit (GRU). First, by importing these cells for the first time into the RL setting, it is empirically shown that they also provide a significant advantage over LSTM and GRU in memory-demanding POMDPs. Second, the ability of the RNN to represent a belief distribution over the states of the POMDP is studied. This is done by evaluating the mutual information between the hidden states of the RNN and the belief filtered from the successive observations. This analysis is thus strongly anchored in information theory and in the theory of optimal control for POMDPs. Third, as a complement to this research project, a new target update is proposed for Q-learning algorithms with target networks, for both reactive and recurrent policies. This new update speeds up learning, especially in environments with sparse rewards.
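The two core objects of the abstract, the belief filter over POMDP states and the bistable recurrent cell, can be sketched in a few lines. The following is a minimal NumPy illustration, not the thesis's implementation: it assumes a small discrete POMDP with tabular transition and observation models, uses the BRC update equations from the bistable-cell literature the thesis builds on, and all variable names and numbers are illustrative.

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """One Bayes-filter step: b'(s') ∝ O[a, s', o] * Σ_s T[a, s, s'] * b(s)."""
    predicted = b @ T[a]                     # prediction through the transition model
    unnormalised = O[a, :, o] * predicted    # correction by the observation likelihood
    return unnormalised / unnormalised.sum()

def brc_step(x, h, U, Ua, Uc, wa, wc):
    """One bistable recurrent cell (BRC) update, per its original formulation."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    a = 1.0 + np.tanh(Ua @ x + wa * h)       # per-neuron feedback in (0, 2); a > 1 gives bistability
    c = sigmoid(Uc @ x + wc * h)             # update gate
    return c * h + (1.0 - c) * np.tanh(U @ x + a * h)

# Tiny two-state, one-action example (hypothetical numbers):
T = np.array([[[0.9, 0.1],
               [0.2, 0.8]]])                 # T[a, s, s'] = p(s' | s, a)
O = np.array([[[0.8, 0.2],
               [0.3, 0.7]]])                 # O[a, s', o] = p(o | s', a)
b = belief_update(np.array([0.5, 0.5]), a=0, o=0, T=T, O=O)
```

The mutual-information analysis described above then compares trajectories of such beliefs `b` with the hidden states `h` produced by the recurrent cell across the same observation sequences.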
The University of Liège does not guarantee the scientific quality of these student works nor the accuracy of all the information they contain.