Grasping objects in cluttered environments
Nicolay, Pierre
Promotor(s) : Boigelot, Bernard
Date of defense : 26-Jun-2019/27-Jun-2019 • Permalink : http://hdl.handle.net/2268.2/6750
Details
Title: Grasping objects in cluttered environments
Translated title: [fr] Saisir des objets dans un environnement encombré
Author: Nicolay, Pierre
Date of defense: 26-Jun-2019/27-Jun-2019
Advisor(s): Boigelot, Bernard
Committee's member(s): Cornélusse, Bertrand; Detry, Renaud; Wehenkel, Louis; Geurts, Pierre; Van Droogenbroeck, Marc
Language: English
Number of pages: 85
Keywords: [en] grasping; machine learning; computer vision; robotics
Discipline(s): Engineering, computing & technology > Computer science
Funders: Army Research Laboratory
Research unit: NASA/Jet Propulsion Laboratory, California Institute of Technology
Name of the research project: Robotics Collaborative Technology Alliance
Target public: Researchers; Professionals of domain
Institution(s): Université de Liège, Liège, Belgique
Degree: Master en ingénieur civil en informatique, à finalité spécialisée en "intelligent systems"
Faculty: Master thesis of the Faculté des Sciences appliquées
Abstract
[en] One of the core challenges in robotic manipulation today is the grasping problem: designing a mathematical model of the environment so that the robot can compute hand and finger trajectories that yield a grasping configuration. For autonomous robots to interact with their environment, they must be able to grasp objects by understanding the scene around them. This work therefore presents a new solution to the grasping problem in cluttered environments. More specifically, we consider the problem of detecting, planning, and executing a grasp in a cluttered, complex, and uncontrolled environment, using only a single RGB-D camera as input to our model. To solve this problem we use a hybrid model composed of a segmentation model and a geometric model. The segmentation model, based on deep learning, captures information about where to grasp, while the geometric model, using a dictionary of grasping prototypes together with a search algorithm, tells us how to grasp. The segmentation network achieves a Jaccard index of 76.61% on the training set and 56.93% on the validation set. We achieve an overall grasp success rate of 2 out of 3 grasps.
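The Jaccard index (intersection over union) reported in the abstract can be computed for binary segmentation masks as in this minimal sketch. The function name and the toy masks below are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def jaccard_index(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union of two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # Convention: two empty masks count as a perfect match.
    return 1.0 if union == 0 else intersection / union

# Toy 2x2 example: the masks agree on one of three labelled pixels.
pred = np.array([[1, 0], [1, 0]])
target = np.array([[1, 1], [0, 0]])
print(jaccard_index(pred, target))  # → 0.3333...
```

In practice the score is averaged over all images in a set, which is how per-set figures such as those above are typically obtained.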
File(s)
Document(s):
- Adobe PDF, 36.82 MB
- Adobe PDF, 53.44 kB
Annexe(s):
- image/png, 106.94 kB
- image/png, 315.98 kB
- image/png, 71.06 kB
- image/png, 89.1 kB
- image/png, 170.38 kB
The University of Liège does not guarantee the scientific quality of these students' works or the accuracy of all the information they contain.