Geometry meets semantic for semi-supervised monocular depth estimation

P. Zama Ramirez; M. Poggi; F. Tosi; S. Mattoccia; L. Di Stefano
2019

Abstract

Depth estimation from a single image is a challenging problem in computer vision. While other image-based depth sensing techniques leverage the geometry between different viewpoints (e.g., stereo or structure from motion), the lack of such cues within a single image renders the monocular depth estimation task ill-posed. At inference time, state-of-the-art encoder-decoder architectures for monocular depth estimation rely on effective feature representations learned during training. For unsupervised training of these models, geometry has been effectively exploited through image warping losses computed from views acquired by a stereo rig or a moving camera. In this paper, we take a further step forward and show that learning semantic information from images can effectively improve monocular depth estimation as well. In particular, by leveraging semantically labeled images together with the unsupervised signal provided by geometry through an image warping loss, we propose a deep learning approach for joint semantic segmentation and depth estimation. Our overall learning framework is semi-supervised, as we deploy ground-truth data only in the semantic domain. At training time, our network learns a common feature representation for both tasks, and we propose a novel cross-task loss function. Experimental findings show that jointly tackling depth prediction and semantic segmentation improves depth estimation accuracy. In particular, on the KITTI dataset our network outperforms state-of-the-art methods for monocular depth estimation.
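To make the unsupervised geometric signal mentioned in the abstract concrete, the following is a minimal sketch of a stereo photometric warping loss in PyTorch, in the style common to unsupervised monocular depth methods. It is an illustrative sketch, not the paper's exact formulation: the plain L1 photometric term, the disparity convention (a fraction of image width), and the function names `warp_right_to_left` and `photometric_warping_loss` are all assumptions for this example, and published variants typically add terms such as SSIM and disparity smoothness.

```python
import torch
import torch.nn.functional as F

def warp_right_to_left(right_img, disparity):
    """Differentiably warp the right image into the left view using a
    predicted left-view disparity map.

    right_img: (B, 3, H, W) tensor; disparity: (B, 1, H, W) tensor,
    expressed as a fraction of image width (illustrative convention).
    """
    b, _, h, w = right_img.shape
    # Build a base sampling grid with coordinates in [-1, 1],
    # as expected by F.grid_sample.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=right_img.device),
        torch.linspace(-1, 1, w, device=right_img.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    # For rectified stereo, the left pixel at x matches the right pixel
    # at x - d; a width-fraction d spans 2*d in [-1, 1] coordinates.
    shifted_x = grid[..., 0] - 2.0 * disparity.squeeze(1)
    sample_grid = torch.stack((shifted_x, grid[..., 1]), dim=-1)
    return F.grid_sample(right_img, sample_grid, align_corners=True)

def photometric_warping_loss(left_img, right_img, disparity):
    """L1 photometric reconstruction loss between the left image and
    the right image warped into the left view."""
    reconstructed_left = warp_right_to_left(right_img, disparity)
    return (left_img - reconstructed_left).abs().mean()
```

Because the warping is fully differentiable, minimizing this reconstruction error trains the depth (disparity) network without any depth ground truth, which is what allows the framework to reserve supervised labels for the semantic domain only.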
Year: 2019
Published in: Proceedings of the 14th Asian Conference on Computer Vision (ACCV)
Pages: 298-313
Geometry meets semantic for semi-supervised monocular depth estimation / P. Zama Ramirez, M. Poggi, F. Tosi, S. Mattoccia, L. Di Stefano. - ELECTRONIC. - (2019), pp. 298-313. (Paper presented at the 14th Asian Conference on Computer Vision (ACCV), held in Perth, Australia, December 2-6, 2018) [10.1007/978-3-030-20893-6_19].

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/653882

Citations
  • PMC: n/a
  • Scopus: 25
  • Web of Science (ISI): 31