
Deep Learning-Based Method for Vision-Guided Robotic Grasping of Unknown Objects


Abstract

Collaborative robots must operate safely and efficiently in ever-changing, unstructured environments, grasping and manipulating many different objects. Artificial vision has proved to be the ideal sensing technology for collaborative robots and is widely used to identify the objects to manipulate and to detect their optimal grasps. One of the main drawbacks of state-of-the-art robotic vision systems is the long training needed to teach the identification and optimal grasps of each object, which strongly reduces robot productivity and overall operating flexibility. To overcome this limit, we propose an engineering method, based on deep learning techniques, for detecting robotic grasps of unknown objects in an unstructured environment, which should enable collaborative robots to autonomously generate grasping strategies without the need for training and programming. A novel loss function for training the grasp prediction network has been developed and shown to work well even with low-resolution 2-D images, thus allowing the use of a single, smaller, low-cost camera that can be better integrated into robotic end-effectors. Despite the reduced information available (lower resolution, no depth), an accuracy of 75% has been achieved on the Cornell dataset, and our implementation of the loss function is shown not to suffer from the common problems reported in the literature. The system has been implemented using the ROS framework and tested on a Baxter collaborative robot.
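The record does not reproduce the paper's loss formulation, so the sketch below is only illustrative: it assumes the five-parameter grasp-rectangle representation (x, y, θ, w, h) commonly used with the Cornell grasping dataset, and one standard way of handling the 180° rotational symmetry of parallel-gripper grasps, a frequent source of the angle-discontinuity problems the abstract alludes to. All names and the PyTorch framing are assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def grasp_rectangle_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Hypothetical grasp-regression loss (not the paper's actual formulation).

    Both tensors have shape (batch, 5) holding (x, y, theta, w, h),
    with theta in radians. Position and size use a smooth-L1 penalty;
    the angle is encoded as (cos 2*theta, sin 2*theta) so that two
    grasps differing by a 180-degree rotation -- physically identical
    for a parallel-jaw gripper -- incur zero angular penalty.
    """
    # Smooth L1 on center position and rectangle size.
    pos_size = F.smooth_l1_loss(pred[:, [0, 1, 3, 4]], target[:, [0, 1, 3, 4]])
    # Doubled-angle encoding removes the wrap-around discontinuity.
    angle = F.mse_loss(torch.cos(2 * pred[:, 2]), torch.cos(2 * target[:, 2])) + \
            F.mse_loss(torch.sin(2 * pred[:, 2]), torch.sin(2 * target[:, 2]))
    return pos_size + angle
```

Regressing the doubled angle rather than θ itself is one common way to avoid the discontinuity at ±90° that naive angle regression suffers from; whether the paper uses this device or another is not stated in this record.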
Year: 2018
Published in: Transdisciplinary Engineering Methods for Social Innovation of Industry 4.0
Pages: 281-290
Citation: Deep Learning-Based Method for Vision-Guided Robotic Grasping of Unknown Objects / Bergamini Luca; Sposato Mario; Peruzzini Margherita; Vezzani Roberto; Pellicciari Marcello. - ELECTRONIC. - 7:(2018), pp. 281-290. (Paper presented at the 25th ISTE International Conference on Transdisciplinary Engineering, held in Modena, 3-6 July 2018) [10.3233/978-1-61499-898-3-281].
Authors: Bergamini Luca; Sposato Mario; Peruzzini Margherita; Vezzani Roberto; Pellicciari Marcello

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/952230

Citations
  • PMC: ND
  • Scopus: 5
  • Web of Science (ISI): 2