Unsupervised Domain Adaptation for Depth Prediction from Images

Tonioni, Alessio; Poggi, Matteo; Mattoccia, Stefano; Di Stefano, Luigi
2020

Abstract

State-of-the-art methods for inferring dense and accurate depth measurements from images rely on deep CNN models trained end-to-end on large amounts of data. However, despite their outstanding performance, these frameworks suffer a drastic drop in accuracy when dealing with unseen environments that differ markedly, in appearance (e.g., synthetic vs. real) or context (e.g., indoor vs. outdoor), from those observed during training. This domain shift issue is usually mitigated by fine-tuning on smaller sets of images whose depth labels were acquired in the target domain with active sensors (e.g., LiDAR). However, relying on such supervised labeled data is seldom feasible in practical applications. Therefore, we propose an effective unsupervised domain adaptation technique that overcomes the domain shift problem without requiring any ground-truth label. Our method deploys stereo pairs, which are much easier to obtain, and leverages traditional, non-learning-based stereo algorithms to produce disparity/depth labels together with confidence measures that assess their degree of reliability. With these cues, we can fine-tune deep models through a novel confidence-guided loss function that neglects the effect of outliers in the output of conventional stereo algorithms.
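
The confidence-guided fine-tuning described in the abstract can be illustrated with a minimal sketch. The snippet below is a PyTorch-style illustration, not the authors' implementation: the names confidence_guided_loss, pred_disp, proxy_disp, confidence and the threshold tau are placeholders, and it simply shows how per-pixel L1 errors against labels from a traditional stereo algorithm could be weighted by their confidence so that likely outliers are ignored.

    import torch

    def confidence_guided_loss(pred_disp, proxy_disp, confidence, tau=0.8):
        # pred_disp:  disparities predicted by the network being adapted
        # proxy_disp: noisy labels produced by a traditional stereo algorithm (e.g., SGM)
        # confidence: per-pixel reliability of the proxy labels, in [0, 1]
        # tau:        threshold below which proxy labels are treated as outliers (assumed value)
        mask = (confidence > tau).float()
        # Weight the per-pixel L1 error by the confidence so that reliable
        # labels drive the fine-tuning signal while outliers are discarded.
        weighted_err = confidence * mask * torch.abs(pred_disp - proxy_disp)
        # Normalize by the total weight to keep the loss scale stable.
        return weighted_err.sum() / (confidence * mask).sum().clamp(min=1e-6)

In such a scheme, this loss would replace the supervised term when fine-tuning a depth network on unlabeled target-domain stereo pairs.
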
Tonioni, A., Poggi, M., Mattoccia, S., Di Stefano, L. (2020). Unsupervised Domain Adaptation for Depth Prediction from Images. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 42(10), 2396-2409 [10.1109/TPAMI.2019.2940948].
Files in this record:

File: IEEETransactions-on-Pattern-Analysis-and-Machine-Intelligence42-2020.pdf
Access: open access
Type: Postprint
License: License for free, open access
Size: 5.97 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/735517
Citations
  • PubMed Central: 1
  • Scopus: 56
  • Web of Science (ISI): 42