Faculté des Sciences appliquées
MASTER THESIS

Modeling serial recall in working memory with recurrent neural networks

Permalink : http://hdl.handle.net/2268.2/23282
Details
Title : Modeling serial recall in working memory with recurrent neural networks
Author : Stordeur, Lucas ULiège
Date of defense : 30-Jun-2025/1-Jul-2025
Advisor(s) : Sacré, Pierre ULiège
Committee's member(s) : Majerus, Steve ULiège; Franci, Alessio ULiège; Drion, Guillaume ULiège
Language : English
Keywords : [en] Psychology; Recurrent neural networks; Computational modeling
Discipline(s) : Engineering, computing & technology > Civil engineering
Target public : Researchers; Professionals of domain; Student
Complementary URL : https://github.com/LucasStordeur/TFE
Institution(s) : Université de Liège, Liège, Belgique
Degree : Master en ingénieur civil biomédical, à finalité spécialisée
Faculty : Master thesis of the Faculté des Sciences appliquées

Abstract

[en] In psychology, various tasks are designed to study cognitive functions such as language
processing, long-term memory, and working memory. Immediate serial recall is one such task,
where participants are presented with a sequence of items (e.g., words or letters) and must recall
them in the same order after the sequence is presented. This task provides valuable insights into
working memory mechanisms.
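
To make the task concrete, here is a minimal sketch of how one trial could be encoded for a model. The one-hot coding, the silent recall phase, and the helper name make_trial are illustrative assumptions, not the exact encoding used in the thesis.

```python
import numpy as np

def make_trial(seq_len, n_items, rng):
    """One immediate-serial-recall trial as (input, target) arrays.

    Presentation phase: items are shown one per time step as one-hot
    vectors. Recall phase: the input is silent and the target is the
    same items, in the same order.
    """
    items = rng.choice(n_items, size=seq_len, replace=False)
    x = np.zeros((2 * seq_len, n_items))
    x[np.arange(seq_len), items] = 1.0            # presentation steps
    y = np.zeros((2 * seq_len, n_items))
    y[seq_len + np.arange(seq_len), items] = 1.0  # recall steps
    return x, y

rng = np.random.default_rng(0)
x, y = make_trial(seq_len=5, n_items=10, rng=rng)  # e.g. 5 letters from a set of 10
```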

Computational modeling of psychological experiments helps improve our understanding of
working memory by replicating experimental data. Different models exist for serial recall, each
based on distinct mechanisms and assumptions that explain various observed behaviors. The
quality of these models is usually assessed by comparing their predictions with real experimental
data. Among these models, recurrent neural networks (RNNs) stand out due to their dynamical properties and strong
empirical support from neuroscience. An RNN consists of three main layers: an input layer, where
sequence items are presented; a hidden layer, where information is processed; and an output
layer, which generates predictions. A key feature of RNNs is their recurrent connections, allowing
past inputs to influence future predictions, making them well-suited for capturing temporal
relationships in sequences.
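
A minimal NumPy sketch of this recurrent update, with a tanh hidden nonlinearity and a softmax output layer (standard choices, assumed here rather than taken from the thesis):

```python
import numpy as np

def elman_step(x_t, h_prev, W_ih, W_hh, W_ho, b_h, b_o):
    """One time step of an Elman-style RNN.

    The recurrent term W_hh @ h_prev is what lets items presented
    earlier in the sequence influence later predictions.
    """
    h_t = np.tanh(W_ih @ x_t + W_hh @ h_prev + b_h)  # hidden layer
    logits = W_ho @ h_t + b_o                        # output layer
    y_t = np.exp(logits - logits.max())
    y_t /= y_t.sum()                                 # softmax over the item vocabulary
    return h_t, y_t
```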

The first objective of my thesis is to construct an RNN capable of reproducing three key effects
observed in serial recall experiments: length-dependent performance (recall accuracy decreases
sigmoidally as sequence length increases), primacy and recency effects (better recall of the first
and last items in a sequence), and transposition errors (recall mistakes typically involve swapping items), and to explore the reproduction of many other effects characteristic of this task. To achieve this, I formally modeled the serial recall task and then trained the RNN. Unlike standard deep learning approaches that optimize for maximum accuracy, our model is trained to match the recall performance observed in human participants. This requires two key modifications: (1) training the network only until it reaches human-like recall accuracy and (2) varying the dataset size to prevent overfitting to specific sequences. We show that, despite the difficulties many have encountered in constructing such a model, recurrent neural networks can reproduce several empirical benchmarks.
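
A hedged sketch of what modification (1) could look like in PyTorch; the stopping rule, the target accuracy value, and the make_batch generator (which also covers modification (2) by drawing fresh sequences on every call) are illustrative assumptions, not the thesis's exact procedure:

```python
import torch
import torch.nn as nn

def train_to_human_accuracy(model, make_batch, human_accuracy=0.6,
                            lr=1e-3, max_epochs=10_000):
    """Train only until the network matches a human-like recall
    accuracy, instead of training to ceiling (modification 1).

    make_batch is a hypothetical generator returning fresh one-hot
    sequences and integer targets each call, so the network never
    sees a fixed dataset it could overfit (modification 2).
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    acc = 0.0
    for epoch in range(max_epochs):
        x, targets = make_batch()   # x: (batch, T, n_items), targets: (batch, T)
        logits = model(x)           # (batch, T, n_items)
        loss = loss_fn(logits.flatten(0, 1), targets.flatten())
        opt.zero_grad()
        loss.backward()
        opt.step()
        acc = (logits.argmax(-1) == targets).float().mean().item()
        if acc >= human_accuracy:   # stop at human-like performance, not at ceiling
            break
    return epoch, acc
```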

The second objective is to explore the internal dynamics of the networks. Specifically, we investigate
whether modifying the cell dynamics to be more biologically plausible improves the model's alignment with experimental data. The hidden layer activation dynamics, determined by the interaction between current inputs and past hidden states, are progressively refined. I begin with a simple Elman network, then introduce more complex architectures: Gated Recurrent Unit (GRU) cells, which incorporate a forgetting mechanism, and Bistable Recurrent Cells (BRCs), which introduce bistability properties. We show that more complex, more biologically plausible cell dynamics achieve better results on the empirical benchmarks.
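
For illustration, a sketch of one BRC time step following the formulation of Vecoven, Ernst and Drion (2021); the exact parameterization used in the thesis may differ:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def brc_step(x_t, h_prev, U, Ua, Uc, wa, wc):
    """One time step of a Bistable Recurrent Cell (BRC).

    The recurrence is elementwise: wa, wc and the feedback gain a_t
    act per neuron, and a unit becomes bistable when its gain
    exceeds 1, letting it latch an item without ongoing input.
    """
    a_t = 1.0 + np.tanh(Ua @ x_t + wa * h_prev)  # feedback gain in (0, 2)
    c_t = sigmoid(Uc @ x_t + wc * h_prev)        # forgetting gate, as in a GRU
    h_t = c_t * h_prev + (1.0 - c_t) * np.tanh(U @ x_t + a_t * h_prev)
    return h_t
```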

Finally, as this type of model has not yet been widely explored, we further examine its behavior. We investigate how variability can be incorporated into the recall curves, since variability is a central characteristic of the human data but remains underexplored in models, and we model inter-participant variability. In addition, we investigate the saturation of model training, which allows us to build a more plausible model. Together, these analyses contribute to a better understanding of the model.
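
As one possible way to quantify inter-participant variability, a sketch that treats each independently trained network as a simulated participant; this framing and the helper evaluate_positions are assumptions of the sketch, not necessarily the thesis's method:

```python
import numpy as np

def recall_curve_variability(models, evaluate_positions):
    """Mean and standard deviation of position-wise recall accuracy
    across a population of trained networks, each network playing
    the role of one simulated participant.

    evaluate_positions(model) is a hypothetical helper returning the
    recall accuracy at each serial position for one network.
    """
    curves = np.stack([evaluate_positions(m) for m in models])
    return curves.mean(axis=0), curves.std(axis=0)
```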

In conclusion, through these refinements, we aimed to bridge the gap between artificial and biological neural
networks, contributing to a deeper understanding of working memory.


File(s)

  • Master_thesis_Stordeur_Lucas.pdf (5.59 MB, Adobe PDF)
  • abstract_Stordeur_Lucas.pdf (125.02 kB, Adobe PDF)

Author

  • Stordeur, Lucas ULiège Université de Liège > Master ing. civ. biom. fin. spéc.

Promotor(s)

  • Sacré, Pierre ULiège

Committee's member(s)

  • Majerus, Steve ULiège Université de Liège - ULiège > Département de Psychologie > Mémoire et langage
  • Franci, Alessio ULiège Université de Liège - ULiège > Dép. d'électric., électron. et informat. (Inst.Montefiore) > Brain-Inspired Computing
  • Drion, Guillaume ULiège Université de Liège - ULiège > Dép. d'électric., électron. et informat. (Inst.Montefiore) > Systèmes et modélisation