
Faculty of Applied Sciences
Master's thesis

Modeling serial recall in working memory with recurrent neural networks

Permanent URL: http://hdl.handle.net/2268.2/23282
Details
Title: Modeling serial recall in working memory with recurrent neural networks
Author: Stordeur, Lucas (ULiège)
Defense date: 30-jui-2025/1-jui-2025
Supervisor(s): Sacré, Pierre (ULiège)
Jury member(s): Majerus, Steve (ULiège); Franci, Alessio (ULiège); Drion, Guillaume (ULiège)
Language: English
Keywords: [en] Psychology; [en] Recurrent neural networks; [en] Computational modeling
Discipline(s): Engineering, computing & technology > Civil engineering
Target audience: Researchers; Professionals in the field; Students
Complementary URL: https://github.com/LucasStordeur/TFE
Institution(s): Université de Liège, Liège, Belgium
Degree: Master in biomedical engineering (ingénieur civil biomédical), specialized focus
Faculty: Theses of the Faculty of Applied Sciences

Abstract

[en] In psychology, various tasks are designed to study cognitive functions such as language
processing, long-term memory, and working memory. Immediate serial recall is one such task:
participants are presented with a sequence of items (e.g., words or letters) and must recall
them in the same order once the full sequence has been presented. This task provides valuable
insight into the mechanisms of working memory.
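
As a concrete illustration, one trial of this task can be encoded for a network roughly as follows (a minimal Python sketch; the one-hot letter encoding and the names VOCAB_SIZE and make_trial are illustrative assumptions, not details taken from the thesis):

```python
import numpy as np

VOCAB_SIZE = 26   # e.g., letters of the alphabet; an illustrative choice
SEQ_LEN = 6       # list length of the trial

def one_hot(index, size):
    vec = np.zeros(size, dtype=np.float32)
    vec[index] = 1.0
    return vec

def make_trial(rng, seq_len=SEQ_LEN, vocab=VOCAB_SIZE):
    """Sample distinct items; the target is the same sequence,
    since the task is to recall the items in presentation order."""
    items = rng.choice(vocab, size=seq_len, replace=False)
    inputs = np.stack([one_hot(i, vocab) for i in items])  # (seq_len, vocab)
    return inputs, items

rng = np.random.default_rng(0)
x, y = make_trial(rng)
print(x.shape, y)  # (6, 26) and the item indices to recall, in order
```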

Computational modeling of psychological experiments helps improve our understanding of
working memory by replicating experimental data. Different models exist for serial recall, each
based on distinct mechanisms and assumptions that explain various observed behaviors. The
quality of these models is usually assessed by comparing their predictions with real experimental
data. Among these models, recurrent neural networks (RNNs) stand out due to their dynamical properties and strong
empirical support from neuroscience. An RNN consists of three main layers: an input layer, where
sequence items are presented; a hidden layer, where information is processed; and an output
layer, which generates predictions. A key feature of RNNs is their recurrent connections, allowing
past inputs to influence future predictions, making them well-suited for capturing temporal
relationships in sequences.
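
As an illustration, a minimal Elman-style version of this three-layer architecture could be written as follows in PyTorch (a generic sketch of the idea, not the exact network from the thesis; the layer sizes are arbitrary):

```python
import torch
import torch.nn as nn

class SerialRecallRNN(nn.Module):
    """Input layer -> recurrent hidden layer -> output layer."""
    def __init__(self, vocab_size=26, hidden_size=128):
        super().__init__()
        # nn.RNN implements a plain Elman recurrence:
        # h_t = tanh(W_ih x_t + W_hh h_{t-1} + biases)
        self.rnn = nn.RNN(vocab_size, hidden_size, batch_first=True)
        self.readout = nn.Linear(hidden_size, vocab_size)

    def forward(self, x):
        # x: (batch, seq_len, vocab_size). The recurrent connections let
        # past inputs influence the hidden state, and hence later outputs.
        h, _ = self.rnn(x)
        return self.readout(h)  # one prediction per time step
```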

The first objective of my thesis is to construct an RNN capable of reproducing three key effects
observed in serial recall experiments: length-dependent performance (recall accuracy decreases
sigmoidally as sequence length increases), primacy and recency effects (better recall of the first
and last items in a sequence), and transposition errors (recall mistakes typically involve
swapping items). Beyond these, I also explore the reproduction of several other effects
characteristic of this task.

To achieve this, I formally modeled the serial recall task and then trained the RNN. Unlike
standard deep learning approaches that optimize for maximum accuracy, the model is trained to
match the recall performance observed in human participants. This requires two key
modifications: (1) training the network only until it reaches human-like recall accuracy, and
(2) varying the dataset size to prevent overfitting to specific sequences. We will show that,
despite the difficulties many have encountered in constructing such a model, recurrent neural
networks can reproduce several empirical benchmarks.
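
A rough sketch of such a training loop, assuming an illustrative human-level accuracy target of 0.75 (in the thesis this target comes from human data) and approximating the varying dataset by resampling fresh sequences through a hypothetical make_batch helper:

```python
import torch

HUMAN_LEVEL = 0.75  # assumed target accuracy; not a value from the thesis

def train_to_human_level(model, make_batch, max_steps=10_000, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for step in range(max_steps):
        x, y = make_batch()                  # fresh sequences each step: (2)
        logits = model(x)                    # (batch, seq_len, vocab)
        loss = loss_fn(logits.flatten(0, 1), y.flatten())
        opt.zero_grad()
        loss.backward()
        opt.step()
        accuracy = (logits.argmax(-1) == y).float().mean().item()
        if accuracy >= HUMAN_LEVEL:          # stop early at human level: (1)
            break
    return model
```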

The second objective is to explore the internal dynamics of the networks. Specifically, we investigate
whether making the cell dynamics more biologically plausible improves the model's
alignment with experimental data. The hidden-layer activation dynamics, determined by the
interaction between current inputs and past hidden states, are progressively refined. I begin with
a simple Elman network, then introduce more complex architectures: Gated Recurrent Unit (GRU)
cells, which incorporate a forgetting mechanism, and Bistable Recurrent Cells (BRCs), which
introduce bistability. We will show that more complex and biologically plausible cell dynamics achieve better results on the empirical benchmarks.
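
As a sketch, the BRC update rule can be written as follows (based on the formulation of Vecoven, Ernst and Drion, 2021, as best I recall it; the thesis may use a variant):

```python
import torch
import torch.nn as nn

class BRCCell(nn.Module):
    """Bistable Recurrent Cell, after Vecoven, Ernst & Drion (2021).
    Sketch from memory; the thesis may use a different variant."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.Ua = nn.Linear(input_size, hidden_size)  # drives the feedback gain
        self.Uc = nn.Linear(input_size, hidden_size)  # drives the forget gate
        self.U = nn.Linear(input_size, hidden_size)   # drives the candidate state
        # recurrence is unit-local (diagonal) in the BRC:
        self.wa = nn.Parameter(torch.ones(hidden_size))
        self.wc = nn.Parameter(torch.ones(hidden_size))

    def forward(self, x, h):
        a = 1 + torch.tanh(self.Ua(x) + self.wa * h)  # gain in (0, 2); a > 1 makes a unit bistable
        c = torch.sigmoid(self.Uc(x) + self.wc * h)   # GRU-like forgetting mechanism
        return c * h + (1 - c) * torch.tanh(self.U(x) + a * h)
```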

Finally, since this type of model has not yet been widely explored, we investigate its behavior further. We study how variability can be incorporated into the recall curves, as variability is a central characteristic of human data yet is rarely addressed in models, and we model inter-individual variability across participants. We also investigate the saturation of model training, which leads to a more plausible model. Together, these analyses contribute to a better understanding of the model.
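
One simple way such variability could be simulated, as a sketch: treat independently trained networks as individual participants and measure the spread of their serial position curves (train_seeded and make_batch below are hypothetical helpers, not code from the thesis repository):

```python
import torch

def serial_position_curve(model, make_batch, n_trials=200):
    """Recall accuracy at each serial position, averaged over trials."""
    per_position = []
    with torch.no_grad():
        for _ in range(n_trials):
            x, y = make_batch()
            pred = model(x).argmax(-1)                  # (batch, seq_len)
            per_position.append((pred == y).float().mean(0))
    return torch.stack(per_position).mean(0)            # (seq_len,)

# One simulated "participant" per random seed; train_seeded is a
# hypothetical helper that trains a fresh network from that seed.
curves = torch.stack([serial_position_curve(train_seeded(seed), make_batch)
                      for seed in range(20)])
mean_curve, spread = curves.mean(0), curves.std(0)      # curve with variability
```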

In conclusion, through these refinements, we aimed to bridge the gap between artificial and biological neural
networks, contributing to a deeper understanding of working memory.


File(s)

Document(s)

Master_thesis_Stordeur_Lucas.pdf (5.59 MB, Adobe PDF)
abstract_Stordeur_Lucas.pdf (125.02 kB, Adobe PDF)

Author

  • Stordeur, Lucas (ULiège), Université de Liège > Master ing. civ. biom. fin. spéc.

Supervisor(s)

  • Sacré, Pierre (ULiège)

Jury member(s)

  • Majerus, Steve (ULiège), Université de Liège > Département de Psychologie > Mémoire et langage
  • Franci, Alessio (ULiège), Université de Liège > Dép. d'électric., électron. et informat. (Inst.Montefiore) > Brain-Inspired Computing
  • Drion, Guillaume (ULiège), Université de Liège > Dép. d'électric., électron. et informat. (Inst.Montefiore) > Systèmes et modélisation







