Master's thesis: Sparse hypernetworks for multitasking
Cubélier, François
Supervisor(s): Geurts, Pierre
Defense date: 27-Jun-2022/28-Jun-2022 • Permanent URL: http://hdl.handle.net/2268.2/14574
Details
Jury member(s): Wehenkel, Louis; Louveaux, Quentin
Language: English
Number of pages: 73
Keywords (en): deep learning; hypernetworks; multitasking; meta-models
Discipline(s): Engineering, computing & technology > Computer science
Target audience: Researchers; professionals in the field; students
Complementary URL: https://github.com/francoisCub/multitasking-hnet
Institution(s): Université de Liège, Liège, Belgium
Degree: Master in computer science and engineering, professional focus in "intelligent systems"
Faculty: Master's theses of the Faculty of Applied Sciences
Abstract
Machine learning researchers have long been interested in creating less narrow artificial intelligence. Meta-models, i.e. models capable of producing other models, could be a key ingredient for building models with strong multitasking capabilities. Hypernetworks, neural networks that produce the parameters of other neural networks, can serve as such meta-models. However, given the large number of parameters in modern neural networks, it is not trivial to build hypernetworks with the large output size required to produce all the parameters of another network. Current solutions such as chunked hypernetworks, which split the target parameter space into parts and reuse the same model to produce each part, achieve good results in practice and scale independently of the maximal layer size of the target model. They remain unsatisfactory, however, because they split the target model's parameters into arbitrary chunks. In this work, we propose a new scalable architecture for building hypernetworks: a sparse MLP whose hidden layers grow exponentially in size. After testing several variations of this architecture, we compare it with chunked hypernetworks on multitask computer vision benchmarks. We show that sparse hypernetworks can match the performance of chunked hypernetworks, although they lag slightly behind on more complex problems. We also show that linear sparse hypernetworks outperform both their non-linear counterparts and chunked hypernetworks when inferring models for new tasks with a pretrained task-conditioned hypernetwork, which may indicate that linear sparse hypernetworks have better generalization properties than more complex hypernetworks. As a preamble to this work, we also review the literature on hypernetworks and propose a typology of hypernetworks. Although the results obtained are promising, many avenues remain for improving sparse hypernetworks, and hypernetworks more generally, in future research.
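
To make the proposed architecture concrete, below is a minimal PyTorch sketch of a sparse hypernetwork in the spirit of the abstract: a sparse MLP whose hidden layers grow exponentially until they reach the target network's parameter count. Everything here is an illustrative assumption, not the thesis's implementation: the class names, the fixed random connectivity mask, the growth factor of 4, and the dense storage of the mask are all placeholders; the actual code is in the repository linked above.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class SparseLinear(nn.Module):
        """Linear layer whose weights are multiplied by a fixed random
        binary mask. A scalable implementation would use truly sparse
        storage; a dense mask is used here only to keep the sketch short."""

        def __init__(self, in_features, out_features, density=0.1):
            super().__init__()
            self.linear = nn.Linear(in_features, out_features)
            # Fixed connectivity pattern, sampled once and never trained
            # (an assumption; the thesis's sparsity pattern may differ).
            mask = (torch.rand(out_features, in_features) < density).float()
            self.register_buffer("mask", mask)

        def forward(self, x):
            return F.linear(x, self.linear.weight * self.mask,
                            self.linear.bias)


    class SparseHypernetwork(nn.Module):
        """Sparse MLP whose hidden layers grow exponentially from the task
        embedding size up to the parameter count of the target network."""

        def __init__(self, embedding_dim, target_num_params, growth=4,
                     density=0.1, linear=False):
            super().__init__()
            sizes = [embedding_dim]
            while sizes[-1] * growth < target_num_params:
                sizes.append(sizes[-1] * growth)
            sizes.append(target_num_params)
            layers = []
            for d_in, d_out in zip(sizes[:-1], sizes[1:]):
                layers.append(SparseLinear(d_in, d_out, density))
                if not linear:
                    layers.append(nn.ReLU())
            if not linear:
                layers.pop()  # no non-linearity after the output layer
            self.net = nn.Sequential(*layers)

        def forward(self, task_embedding):
            # Flat vector holding every parameter of the target network.
            return self.net(task_embedding)


    # Usage: generate the parameters of a small target network for one task.
    target = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 4))
    n_params = sum(p.numel() for p in target.parameters())
    hnet = SparseHypernetwork(embedding_dim=16, target_num_params=n_params)
    theta = hnet(torch.randn(16))
    # Copy theta into the target. Note: for end-to-end training, a
    # functional call such as torch.func.functional_call should be used
    # instead, so that gradients flow back into the hypernetwork.
    torch.nn.utils.vector_to_parameters(theta, target.parameters())

Setting linear=True removes the activations, giving the linear variant that the abstract reports generalizing best to new tasks; a chunked hypernetwork would instead call one small generator repeatedly, once per chunk of theta.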