Master's Thesis: Implicit neural representations for robotic grasping
Gustin, Julien
Supervisor(s): Louppe, Gilles
Defense date: 26-jui-2023/27-jui-2023 • Permanent URL: http://hdl.handle.net/2268.2/17624
Details
Title: Master's Thesis: Implicit neural representations for robotic grasping
Translated title: [fr] Représentations neurales implicites pour la préhension robotique
Author: Gustin, Julien
Defense date: 26-jui-2023/27-jui-2023
Supervisor(s): Louppe, Gilles
Jury member(s): Sacré, Pierre; Bruls, Olivier
Language: English
Number of pages: 94
Keywords: [en] Deep learning; Machine learning; Grasping; Implicit Neural Representations; Prior; Bayesian inference; Simulation-based inference; Robotics; [fr] Robotique; Préhension
Discipline(s): Engineering, computing & technology > Computer science
Comment: A paper related to this master's thesis was published as part of a workshop entitled "Geometric Representations: The Roles of Screw Theory, Lie Algebra, & Geometric Algebra", held at ICRA 2023. Paper: https://arxiv.org/abs/2304.08805
Target audience: Researchers; Professionals in the field; Students; General public; Other
Institution(s): Université de Liège, Liège, Belgium
Degree: Master in Data Science, with a specialized focus
Faculty: Theses of the Faculty of Applied Sciences
Abstract
[en] Robotic grasping is a fundamental skill in many robotic applications. While most grasping methods excel at constrained tasks within structured environments, handling uncertainty becomes essential when operating in more complex and uncertain scenarios. Bayesian frameworks provide a means to address this uncertainty but require prior knowledge about the grasping pose. However, previous research has demonstrated that using a uniform prior over the workspace is highly inefficient.
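In this Bayesian view, grasp planning amounts to inferring a posterior over grasp poses. As a sketch of the formulation (the notation, with g the 6-DoF grasp pose, o the scene observation and S the binary success variable, is an assumption of ours rather than a quote from the thesis):

    p(g | o, S = 1) ∝ p(S = 1 | g, o) · p(g | o)

The success likelihood p(S = 1 | g, o) is typically intractable, which is where simulation-based inference comes in, while p(g | o) is the prior this work targets: a uniform prior over the workspace wastes most of its mass on poses far from any graspable surface, whereas a scene-dependent prior concentrates it near plausible grasps.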
In this work, we propose a novel approach that exploits implicit neural representations to construct scene-dependent priors. This enables the application of powerful simulation-based inference algorithms to determine plausible and successful grasp poses in unstructured environments.
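To make the idea concrete, below is a minimal PyTorch sketch of a scene-conditioned implicit representation: an MLP that maps a 3D query point, together with a latent code summarizing the (possibly partial) scene observation, to an occupancy logit. The architecture, dimensions and the name SceneImplicitNetwork are illustrative assumptions, not the network used in the thesis.

    import torch
    import torch.nn as nn

    class SceneImplicitNetwork(nn.Module):
        # Maps 3D query points plus a scene latent code to occupancy logits.
        def __init__(self, latent_dim=128, hidden_dim=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(3 + latent_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, 1),  # one occupancy logit per query point
            )

        def forward(self, points, scene_latent):
            # points: (B, N, 3), scene_latent: (B, latent_dim)
            latent = scene_latent.unsqueeze(1).expand(-1, points.shape[1], -1)
            return self.net(torch.cat([points, latent], dim=-1)).squeeze(-1)

    net = SceneImplicitNetwork()
    queries = torch.rand(4, 1024, 3)          # 3D query points in the unit cube
    latents = torch.randn(4, 128)             # scene codes, e.g. from a partial-view encoder
    occupancy_logits = net(queries, latents)  # shape (4, 1024)

In such a setup, the grasp-pose prior could be conditioned on the same latent code, so that candidate poses are proposed only near regions the implicit network reconstructs as occupied.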
We demonstrate the significant improvements achieved by incorporating this informative prior. Specifically, our model achieves an impressive success rate of 97% in grasping a single object, surpassing the performance of the previous model. Additionally, we reduce acquisition time by 60% by capturing only a partial view of the scene and training an implicit neural network to reconstruct the complete scene. Furthermore, in the more complex scenario of multi-object grasping, our model achieves a success rate of 91.37% in simulation and 95.6% in real-world scenarios, comparable to benchmark models. These results demonstrate the effectiveness of our approach and its impressive sim2real transfer capabilities.
We also provide valuable explainability by examining the predicted posterior distribution. This analysis allows for a better understanding of the uncertainty associated with the estimation of the grasping pose, enhancing the transparency of the system's decision-making process.
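For intuition, the uncertainty mentioned above can be read directly off posterior samples. The short snippet below (a hypothetical helper with stand-in data, not code from the thesis) summarizes the per-axis spread of sampled grasp positions:

    import torch

    def summarize_grasp_posterior(position_samples):
        # position_samples: (N, 3) grasp positions drawn from the approximate posterior.
        mean = position_samples.mean(dim=0)
        std = position_samples.std(dim=0)  # per-axis spread, a simple proxy for positional uncertainty
        return mean, std

    # Stand-in samples; in practice they would come from the simulation-based inference posterior.
    samples = 0.01 * torch.randn(500, 3) + torch.tensor([0.30, 0.00, 0.12])
    mean, std = summarize_grasp_posterior(samples)
    print("posterior mean position:", mean)
    print("per-axis standard deviation:", std)

A wide spread signals that the observation leaves the grasp under-determined, which is useful to know before committing to an execution.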