Efficient Image Pre-Training with Siamese Cropped Masked Autoencoders
Eymaël, Alexandre
Supervisor(s): Van Droogenbroeck, Marc
Defense date: 24-jui-2024/25-jui-2024 • Permanent URL: http://hdl.handle.net/2268.2/20476
Details
Jury member(s): Cioppa, Anthony; Geurts, Pierre
Language: English
Number of pages: 102
Keywords: [en] Machine Learning; Deep Learning; Computer Vision; Self-Supervised Learning; Masked Autoencoders; Siamese Networks; Video Segmentation; Label Propagation
Discipline(s): Engineering, computing & technology > Computer science
Comment: A paper related to this master's thesis, of which I am the first author, was accepted at the main conference of the European Conference on Computer Vision (ECCV) 2024. The paper is available on arXiv at the following link: https://arxiv.org/abs/2403.17823, and the code is available at https://github.com/alexandre-eymael/CropMAE.
Research center(s): Telecommunications and Imaging Laboratory, Institut Montefiore, Université de Liège
Target audience: Researchers; Professionals in the field; Students
Institution(s): Université de Liège, Liège, Belgium
Degree: Master in data science, specialized focus
Faculty: Master's theses of the Faculty of Applied Sciences
Abstract
[en] Self-supervised pre-training of image encoders has become omnipresent in the literature, especially since the introduction of Masked Autoencoders (MAE). To excel in propagation tasks such as video segmentation, current research focuses on learning object-centric representations from video motion. Notably, SiamMAE introduced a Siamese network that trains a shared-weight encoder from two video frames with a high asymmetric masking ratio (95%), achieving state-of-the-art performance in video object segmentation, human pose propagation, and semantic part propagation.
In this work, we propose CropMAE, an alternative to the Siamese pre-training method introduced by SiamMAE. Unlike SiamMAE, which uses pairs of frames extracted from videos, CropMAE considers only pairs of crops taken from the same still image, each cropped differently. This approach eliminates the need for video decoding, enables training on still-image datasets, and significantly reduces pre-training time while maintaining competitive performance.
Our empirical results demonstrate that CropMAE can learn object-centric representations without relying on motion, unlike SiamMAE. This discovery indicates that with the appropriate pretext task, it is possible to acquire object-centric features without using videos or motion information. Furthermore, we show that the pretext task in CropMAE is more explicit and accelerates the learning process of object-centric representations compared to SiamMAE. Additionally, CropMAE achieves the highest masking ratio to date (98.5%), allowing image reconstruction with only two visible patches.
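To make the extreme asymmetric masking concrete, here is a minimal sketch of how a 98.5% masking ratio leaves only two visible patches. It assumes standard MAE settings (224×224 crops split into 16×16 patches, giving 196 patches), which are common defaults but not confirmed by this record; the function name `asymmetric_mask` is hypothetical, not taken from the CropMAE codebase.

```python
import numpy as np

def asymmetric_mask(num_patches: int, mask_ratio: float, rng: np.random.Generator):
    """Randomly split patch indices into visible and masked sets.

    With a very high mask_ratio, only a handful of patches stay visible,
    which is the regime the thesis describes (98.5% masked).
    """
    num_visible = int(num_patches * (1.0 - mask_ratio))  # floor, as in MAE-style samplers
    perm = rng.permutation(num_patches)
    return perm[:num_visible], perm[num_visible:]

# Two crops of the same still image: the source crop is left fully visible,
# while the target crop is masked at 98.5% before reconstruction.
rng = np.random.default_rng(0)
num_patches = (224 // 16) ** 2  # 196 patches for a 224x224 crop
visible, masked = asymmetric_mask(num_patches, 0.985, rng)
print(len(visible), len(masked))  # 2 visible patches, 194 masked
```

With these assumed settings, the arithmetic matches the abstract: floor(196 × 0.015) = 2 visible patches, so the decoder must reconstruct the target crop from almost nothing but the other crop's representation.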
File(s)
Abstract: 47.38 kB (Adobe PDF)
Thesis: 56.88 MB (Adobe PDF)