Shooting Labels: 3D Semantic Labeling by Virtual Reality

Pierluigi Zama Ramirez; Claudio Paternesi; Luigi Di Lella; Daniele De Gregorio; Luigi Di Stefano
2020

Abstract

The availability of a few large annotated datasets, such as ImageNet, Pascal VOC and COCO, has led deep learning to revolutionize computer vision research by achieving astonishing results in several vision tasks. We argue that new tools to facilitate the generation of annotated datasets may help spread data-driven AI across applications and domains. In this work we propose Shooting Labels, the first 3D labeling tool for dense 3D semantic segmentation that exploits Virtual Reality to render the labeling task as easy and fun as playing a video game. Our tool allows large-scale environments to be labeled semantically very expeditiously, whatever the nature of the 3D data at hand (e.g., point clouds, meshes). Furthermore, Shooting Labels efficiently integrates multi-user annotations to automatically improve labeling accuracy and compute a label uncertainty map. Moreover, within our framework the 3D annotations can be projected onto 2D images, thereby also speeding up a notoriously slow and expensive task such as pixel-wise semantic labeling. We demonstrate the accuracy and efficiency of our tool in two different scenarios: an indoor workspace provided by Matterport3D and a large-scale outdoor environment reconstructed from 1000+ KITTI images.
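The multi-user fusion mentioned in the abstract could, for instance, be realized as a per-vertex majority vote whose normalized vote entropy doubles as the uncertainty map. The sketch below is a minimal illustration under that assumption; the function fuse_annotations and the entropy-based uncertainty are our own illustrative choices, not necessarily the scheme implemented in Shooting Labels.

```python
# Minimal sketch, assuming majority voting over per-vertex labels and
# normalized vote entropy as the uncertainty measure (illustrative choice,
# not necessarily the fusion scheme used by Shooting Labels).
import numpy as np

def fuse_annotations(labels, num_classes):
    """labels: (num_users, num_vertices) array of class ids, one row per annotator.
    Returns the fused per-vertex labels and an uncertainty map in [0, 1]."""
    num_users, num_vertices = labels.shape
    votes = np.zeros((num_vertices, num_classes))
    for user_labels in labels:                      # accumulate one vote per annotator
        votes[np.arange(num_vertices), user_labels] += 1.0

    fused = votes.argmax(axis=1)                    # majority vote per vertex
    p = votes / num_users                           # empirical vote distribution
    entropy = -np.sum(p * np.log(np.where(p > 0.0, p, 1.0)), axis=1)
    uncertainty = entropy / np.log(num_classes)     # 0 = full agreement, 1 = max disagreement
    return fused, uncertainty

# Toy usage: 3 annotators labeling 5 vertices with 4 classes.
annotations = np.array([[0, 1, 2, 3, 1],
                        [0, 1, 2, 0, 1],
                        [0, 2, 2, 3, 1]])
fused, unc = fuse_annotations(annotations, num_classes=4)
print(fused)         # [0 1 2 3 1]
print(unc.round(2))  # 0 wherever all annotators agree
```

Likewise, the projection of 3D annotations onto 2D images could be sketched with a standard pinhole camera model and per-vertex splatting with a z-buffer. The intrinsics K, extrinsics R and t, and the splatting strategy below are illustrative assumptions rather than the rendering pipeline actually used by the tool.

```python
# Minimal sketch, assuming a pinhole camera and per-vertex splatting; the real
# tool may rasterize mesh faces instead of individual vertices.
import numpy as np

def project_labels(vertices, labels, K, R, t, height, width, void_label=255):
    """vertices: (N, 3) world coordinates; labels: (N,) class ids;
    K: (3, 3) intrinsics; R, t: world-to-camera rotation and translation."""
    cam = vertices @ R.T + t                        # world -> camera frame
    front = cam[:, 2] > 1e-6                        # keep points in front of the camera
    cam, labels = cam[front], labels[front]

    uvw = cam @ K.T                                 # perspective projection
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z, labels = u[inside], v[inside], cam[inside, 2], labels[inside]

    label_map = np.full((height, width), void_label, dtype=np.uint8)
    order = np.argsort(-z)                          # paint far-to-near so nearer vertices win
    label_map[v[order], u[order]] = labels[order]
    return label_map
```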
2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR 2020), pp. 99-106
Pierluigi Zama Ramirez, Claudio Paternesi, Luigi Di Lella, Daniele De Gregorio, Luigi Di Stefano (2020). Shooting Labels: 3D Semantic Labeling by Virtual Reality. Institute of Electrical and Electronics Engineers Inc. doi: 10.1109/AIVR50618.2020.00027.
Files in this item:
main.pdf: Postprint, open access (free-access license), Adobe PDF, 3.09 MB

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/806564
Citations
  • PMC: n/a
  • Scopus: 12
  • Web of Science: 10