
Self-supervised depth super-resolution with contrastive multiview pre-training

Zhao, Chaoqiang; Tosi, Fabio; Poggi, Matteo; Mattoccia, Stefano
2023

Abstract

Many low-level vision tasks, including guided depth super-resolution (GDSR), suffer from a shortage of paired training data. Self-supervised learning is a promising solution, but it remains challenging to upsample depth maps without explicit supervision from high-resolution target images. To alleviate this problem, we propose a self-supervised depth super-resolution method with contrastive multiview pre-training. Unlike existing contrastive learning methods designed for classification or segmentation tasks, our strategy applies to regression tasks even when trained on a small-scale dataset and reduces information redundancy by extracting unique features from the guide. Furthermore, we propose a novel mutual modulation scheme that effectively computes the local spatial correlation between cross-modal features. Extensive experiments demonstrate that our method outperforms state-of-the-art GDSR methods and generalizes well to other modalities.
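The two components named in the abstract, contrastive multiview pre-training and mutual modulation via local spatial correlation between cross-modal features, can be illustrated with a short sketch. The PyTorch code below is only a hedged reading of the abstract, not the paper's implementation: the InfoNCE-style loss, the window-based correlation, the function names info_nce and mutual_modulation, and all tensor shapes are assumptions introduced here for illustration.

# Illustrative PyTorch sketch (assumed formulation, not taken from the paper).
import torch
import torch.nn.functional as F

def info_nce(z_depth, z_guide, tau=0.07):
    # Contrastive multiview objective: matching (depth, guide) embeddings of the
    # same scene are positives, all other pairs in the batch are negatives.
    z_depth = F.normalize(z_depth, dim=1)          # (B, D)
    z_guide = F.normalize(z_guide, dim=1)          # (B, D)
    logits = z_depth @ z_guide.t() / tau           # (B, B) similarity matrix
    targets = torch.arange(z_depth.size(0), device=z_depth.device)
    return F.cross_entropy(logits, targets)

def mutual_modulation(f_depth, f_guide, window=3):
    # One assumed reading of "local spatial correlation between cross-modal
    # features": re-aggregate each depth feature over a small window using
    # weights derived from the guide features' local correlation.
    B, C, H, W = f_depth.shape
    pad, k2 = window // 2, window * window
    d_patch = F.unfold(f_depth, window, padding=pad).view(B, C, k2, H * W)
    g_patch = F.unfold(f_guide, window, padding=pad).view(B, C, k2, H * W)
    g_center = f_guide.view(B, C, 1, H * W)
    corr = (g_center * g_patch).sum(dim=1) / C ** 0.5   # (B, k*k, H*W)
    weights = corr.softmax(dim=1).unsqueeze(1)           # (B, 1, k*k, H*W)
    return (weights * d_patch).sum(dim=2).view(B, C, H, W)

if __name__ == "__main__":
    # Toy shapes only; the real method operates on learned feature maps.
    print(info_nce(torch.randn(8, 128), torch.randn(8, 128)).item())
    print(mutual_modulation(torch.randn(2, 64, 32, 32),
                            torch.randn(2, 64, 32, 32)).shape)

A symmetric pass that modulates the guide features with depth-derived weights would make the scheme mutual in both directions; the abstract alone does not specify the exact formulation.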
Self-supervised depth super-resolution with contrastive multiview pre-training / Qiao, Xin; Ge, Chenyang; Zhao, Chaoqiang; Tosi, Fabio; Poggi, Matteo; Mattoccia, Stefano. - In: NEURAL NETWORKS. - ISSN 0893-6080. - ELECTRONIC. - 168:(2023), pp. 223-236. [10.1016/j.neunet.2023.09.023]
Qiao, Xin; Ge, Chenyang; Zhao, Chaoqiang; Tosi, Fabio; Poggi, Matteo; Mattoccia, Stefano
Files in this record:
Any attachments are not shown

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/957750
Warning: the displayed data have not been validated by the university.

Citations
  • PMC 0
  • Scopus 1
  • Web of Science (ISI) 0