Learning Monocular Depth Estimation with Unsupervised Trinocular Assumptions / Poggi, Matteo; Tosi, Fabio; Mattoccia, Stefano. - ELECTRONIC. - (2018), pp. 324-333. (Paper presented at the 6th International Conference on 3D Vision, held in Verona, September 5-8, 2018) [10.1109/3DV.2018.00045].

Learning Monocular Depth Estimation with Unsupervised Trinocular Assumptions

Poggi, Matteo; Tosi, Fabio; Mattoccia, Stefano
2018

Abstract

Obtaining accurate depth measurements from a single image is a fascinating approach to 3D sensing. CNNs have led to considerable improvements in this field, and recent trends have replaced the need for ground-truth labels with geometry-guided image reconstruction signals that enable unsupervised training. Currently, state-of-the-art techniques for this purpose rely on images acquired with a binocular stereo rig to predict inverse depth (i.e., disparity) according to the aforementioned supervision principle. However, these methods suffer from well-known problems near occlusions, at the left image border, and in similar regions, all inherited from the stereo setup. In this paper, we tackle these issues by moving to a trinocular domain for training. Taking the central image as the reference, we train a CNN to infer disparity representations pairing this image with frames on its left and right sides. This strategy yields depth maps that are not affected by typical stereo artifacts. Moreover, since trinocular datasets are seldom available, we introduce a novel interleaved training procedure that enforces the trinocular assumption on existing binocular datasets. Exhaustive experimental results on the KITTI dataset confirm that our proposal outperforms state-of-the-art methods for unsupervised monocular depth estimation trained on binocular stereo pairs, as well as any known method relying on other cues.
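The supervision principle the abstract refers to, image reconstruction guided by predicted disparity, can be made concrete with a short sketch. Below is a minimal PyTorch illustration, assuming rectified images: a side image is warped into the reference view through the predicted disparity (via grid_sample), and the photometric L1 difference provides the unsupervised training signal. The helper names warp_horizontal and reconstruction_loss, the tensor shapes, and the sign convention are our illustrative assumptions, not taken from the authors' implementation.

    import torch
    import torch.nn.functional as F

    def warp_horizontal(src, disparity):
        # Warp src (B, C, H, W) along the x-axis by a signed per-pixel
        # disparity (B, 1, H, W), given in pixels, via bilinear sampling.
        # Sign convention (positive shifts sampling rightward) depends on
        # which side image is being warped; it is illustrative here.
        b, _, h, w = src.shape
        ys, xs = torch.meshgrid(
            torch.linspace(-1.0, 1.0, h, device=src.device),
            torch.linspace(-1.0, 1.0, w, device=src.device),
            indexing="ij",
        )
        # 2/(w-1) converts pixel disparities to the normalized [-1, 1]
        # coordinate range expected by grid_sample (align_corners=True).
        x_shifted = xs.unsqueeze(0) + 2.0 * disparity.squeeze(1) / (w - 1)
        grid = torch.stack(
            (x_shifted, ys.unsqueeze(0).expand(b, -1, -1)), dim=-1
        )
        return F.grid_sample(src, grid, align_corners=True)

    def reconstruction_loss(center, side, disparity):
        # Photometric L1 difference between the reference image and the
        # side image warped into the reference view by predicted disparity.
        return (center - warp_horizontal(side, disparity)).abs().mean()

The interleaved training procedure itself is only described at a high level in the abstract. One plausible reading, sketched below purely under our own assumptions, alternates the role a binocular pair plays across training steps, so that each of two assumed disparity heads ("center-left" and "center-right") receives supervision from existing stereo data. The model API, loader, and optimizer here are hypothetical placeholders, not the paper's actual procedure.

    def interleaved_epoch(model, loader, optimizer):
        # Hypothetical sketch: emulate the trinocular assumption from a
        # binocular dataset by letting the same stereo pair act
        # alternately as (center, right) and as (left, center).
        for step, (x_l, x_r) in enumerate(loader):
            if step % 2 == 0:
                center, side, head = x_l, x_r, "center-right"
            else:
                center, side, head = x_r, x_l, "center-left"
            disparity = model(center, head=head)  # hypothetical model API
            loss = reconstruction_loss(center, side, disparity)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()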
2018
Proceedings of the 6th International Conference on 3D Vision
pp. 324-333

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/653872
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: not available
  • Scopus: 117
  • Web of Science: 81