Real-time self-adaptive deep stereo

A. Tonioni; F. Tosi; M. Poggi; S. Mattoccia; L. Di Stefano
2019

Abstract

Deep convolutional neural networks trained end-to-end are the state-of-the-art methods to regress dense disparity maps from stereo pairs. These models, however, suffer from a notable decrease in accuracy when exposed to scenarios significantly different from the training set (e.g., real vs. synthetic images). We argue that it is extremely unlikely to gather enough samples to achieve effective training/tuning in any target domain, thus making this setup impractical for many applications. Instead, we propose to perform unsupervised and continuous online adaptation of a deep stereo network, which allows for preserving its accuracy in any environment. However, this strategy is extremely computationally demanding and thus prevents real-time inference. We address this issue by introducing a new lightweight, yet effective, deep stereo architecture, Modularly ADaptive Network (MADNet), and developing a Modular ADaptation (MAD) algorithm, which independently trains sub-portions of the network. By deploying MADNet together with MAD we introduce the first real-time self-adaptive deep stereo system enabling competitive performance on heterogeneous datasets. Our code is publicly available at https://github.com/CVLAB-Unibo/Real-time-self-adaptive-deep-stereo.
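The key idea behind MAD, as the abstract states, is to update only one sub-portion of the network per incoming frame rather than back-propagating through the whole model, keeping the adaptation cost compatible with real-time inference. A minimal, hypothetical sketch of such a module-selection loop is shown below; the class name, the score-based sampling, and the reward rule are illustrative assumptions, not the paper's exact formulation:

```python
import random

class MADTrainer:
    """Illustrative sketch: pick one network sub-portion per frame and
    reward modules whose update reduced the unsupervised loss."""

    def __init__(self, modules):
        # `modules` name the sub-portions of the stereo network; in the
        # real system each is paired with its own disparity estimate.
        self.modules = list(modules)
        # Running score per module: higher score => sampled more often.
        self.scores = {m: 1.0 for m in self.modules}

    def pick_module(self):
        # Sample one module with probability proportional to its score.
        total = sum(self.scores.values())
        r = random.uniform(0.0, total)
        acc = 0.0
        for m in self.modules:
            acc += self.scores[m]
            if r <= acc:
                return m
        return self.modules[-1]

    def step(self, loss_before, loss_after, module):
        # Reward a module whose update lowered the loss; keep a small
        # floor so every module retains a chance of being selected.
        improvement = loss_before - loss_after
        self.scores[module] = max(0.1, self.scores[module] + improvement)
```

In use, each camera frame would trigger `pick_module()`, a gradient step on that module alone against an unsupervised (e.g., photometric) loss, then `step()` to update its score.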
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 195-204
A. Tonioni, F.T. (2019). Real-time self-adaptive deep stereo. New York : IEEE/CVF [10.1109/CVPR.2019.00028].
A. Tonioni, F. Tosi , M. Poggi, S. Mattoccia, L. Di Stefano
Files in this item:

File: Tonioni_Real-Time_Self-Adaptive_Deep_Stereo_CVPR_2019_paper.pdf
Description: Oral presentation
Type: Postprint
Access: open access
License: free license for open access
Size: 1.26 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11585/710374
Citations
  • Scopus: 224
  • Web of Science: 192