Helmy, M., Mandanici, E., Vittuari, L., Bitelli, G. (2026). Super-Resolving Digital Terrain Models Using a Modified RCAN. Remote Sensing, 18(1), 1–27 [10.3390/rs18010020].
Super-Resolving Digital Terrain Models Using a Modified RCAN
Helmy, Mohamed; Mandanici, Emanuele; Vittuari, Luca; Bitelli, Gabriele
2026
Abstract
High-resolution Digital Terrain Models (DTMs) are essential for precise terrain analysis, yet their production remains constrained by the high cost and limited coverage of LiDAR surveys. This study introduces a deep learning framework based on a modified Residual Channel Attention Network (RCAN) to super-resolve 10 m DTMs to 1 m resolution. The model was trained and validated on a 568 km² LiDAR-derived dataset using custom elevation-aware loss functions that integrate elevation accuracy (L1), slope gradients, and multi-scale structural components to preserve terrain realism and vertical precision. Performance was evaluated across 257 independent test tiles representing flat, hilly, and mountainous terrains. A balanced loss configuration (α = 0.5, γ = 0.5) achieved the best results, yielding Mean Absolute Error (MAE) as low as 0.83 m and Root Mean Square Error (RMSE) of 1.14–1.15 m, with near-zero bias (−0.04 m). Errors increased moderately in mountainous areas (MAE = 1.29–1.41 m, RMSE = 1.84 m), confirming the greater difficulty of rugged terrain. Overall, the approach demonstrates strong potential for operational applications in geomorphology, hydrology, and landscape monitoring, offering an effective solution for high-resolution DTM generation where LiDAR data are unavailable.
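The elevation-aware loss described in the abstract combines an L1 elevation term with a slope-gradient term weighted by α and γ. The exact formulation used in the paper is not given here, so the following is only a minimal NumPy sketch of one plausible form: α weights the L1 elevation error and γ weights a finite-difference slope term (the multi-scale structural component is omitted for brevity). The function names and the additive combination are assumptions, not the authors' code.

```python
import numpy as np

def l1_loss(pred, ref):
    """Mean absolute elevation error between predicted and reference DTM tiles."""
    return np.mean(np.abs(pred - ref))

def slope_loss(pred, ref):
    """Mean absolute difference of elevation gradients (simple finite differences)."""
    gy_p, gx_p = np.gradient(pred)
    gy_r, gx_r = np.gradient(ref)
    return np.mean(np.abs(gy_p - gy_r)) + np.mean(np.abs(gx_p - gx_r))

def terrain_loss(pred, ref, alpha=0.5, gamma=0.5):
    """Hypothetical combined loss: alpha * elevation term + gamma * slope term."""
    return alpha * l1_loss(pred, ref) + gamma * slope_loss(pred, ref)

# Example on a tiny synthetic tile: identical inputs give zero loss.
tile = np.array([[10.0, 10.5], [11.0, 11.5]])
print(terrain_loss(tile, tile))
```

In a training setting the same structure would typically be expressed with a deep learning framework's differentiable operations so gradients can flow back to the network weights.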


