Conti, A., Poggi, M., Cambareri, V., Mattoccia, S. (2025). Depth on Demand: Streaming Dense Depth from a Low Frame Rate Active Sensor. Berlin, Heidelberg : Springer-Verlag [10.1007/978-3-031-73030-6_16].

Depth on Demand: Streaming Dense Depth from a Low Frame Rate Active Sensor

Conti, Andrea; Poggi, Matteo; Cambareri, Valerio; Mattoccia, Stefano
2025

Abstract

High frame rate and accurate depth estimation plays an important role in several tasks crucial to robotics and automotive perception. To date, this can be achieved through ToF and LiDAR devices for indoor and outdoor applications, respectively. However, their applicability is limited by low frame rate, energy consumption, and spatial sparsity. Depth on Demand (DoD) allows for accurate temporal and spatial depth densification by exploiting a high frame rate RGB sensor coupled with a potentially lower frame rate and sparse active depth sensor. Our proposal jointly enables lower energy consumption and denser shape reconstruction by significantly reducing the streaming requirements on the depth sensor, thanks to its three core stages: i) multi-modal encoding, ii) iterative multi-modal integration, and iii) depth decoding. We present extended evidence assessing the effectiveness of DoD on indoor and outdoor video datasets, covering both environment scanning and automotive perception use cases.
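
As a purely illustrative reading of the three core stages named above, the minimal PyTorch sketch below wires together (i) separate encoders for the RGB frame and the sparse depth map, (ii) a fixed number of fusion iterations, and (iii) a depth decoder. All names (DepthOnDemandSketch, rgb_enc, depth_enc, fuse, dec), layer sizes, and the simple concatenation-based fusion are assumptions made for this sketch and do not reproduce the architecture described in the paper.

```python
import torch
import torch.nn as nn


class DepthOnDemandSketch(nn.Module):
    """Toy three-stage pipeline: (i) multi-modal encoding,
    (ii) iterative multi-modal integration, (iii) depth decoding.
    Layer sizes and the fusion rule are placeholders, not the paper's design."""

    def __init__(self, feat: int = 64, iters: int = 3):
        super().__init__()
        self.iters = iters
        # (i) separate encoders for the high frame rate RGB frame
        #     and the sparse, low frame rate depth map
        self.rgb_enc = nn.Sequential(nn.Conv2d(3, feat, 3, padding=1), nn.ReLU())
        self.depth_enc = nn.Sequential(nn.Conv2d(1, feat, 3, padding=1), nn.ReLU())
        # (ii) fusion block applied for a fixed number of iterations
        self.fuse = nn.Sequential(nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU())
        # (iii) decoder regressing a dense depth map
        self.dec = nn.Conv2d(feat, 1, 3, padding=1)

    def forward(self, rgb: torch.Tensor, sparse_depth: torch.Tensor) -> torch.Tensor:
        h = self.rgb_enc(rgb)                        # encode RGB modality
        f_d = self.depth_enc(sparse_depth)           # encode sparse depth modality
        for _ in range(self.iters):                  # iterative multi-modal integration
            h = self.fuse(torch.cat([h, f_d], dim=1))
        return self.dec(h)                           # dense depth prediction


if __name__ == "__main__":
    model = DepthOnDemandSketch()
    rgb = torch.rand(1, 3, 64, 64)                   # current RGB frame
    sparse = torch.zeros(1, 1, 64, 64)               # mostly-empty sparse depth map
    sparse[:, :, ::8, ::8] = torch.rand(1, 1, 8, 8)  # a few active-sensor samples
    print(model(rgb, sparse).shape)                  # -> torch.Size([1, 1, 64, 64])
```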
Year: 2025
Published in: Computer Vision – ECCV 2024. 18th European Conference, Milan, Italy, September 29–October 4, 2024. Proceedings, Part LXI
Pages: 283–302
Files in this record:

07836.pdf
Description: ECCV 2024
Type: Postprint / Author's Accepted Manuscript (AAM) - version accepted for publication after peer review
Access: under embargo until 23/11/2025
License: License for free, open access
Size: 9.81 MB
Format: Adobe PDF

636739_1_En_16_MOESM1_ESM.zip
Type: Supplementary file
Access: open access
License: License for free, open access
Size: 102.21 MB
Format: Zip file

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/1010345
Citations
  • PMC: not available
  • Scopus: 2
  • Web of Science (ISI): 0