Turra, R., Simoncini, M., Monteagudo, H.P., Pjetri, A., Salti, S., Taccari, L. (2026). VS-Sim: A Synthetic Dataset for Viewpoint Shift Robustness. Springer Science and Business Media Deutschland GmbH. doi:10.1007/978-3-032-10185-3_40.
VS-Sim: A Synthetic Dataset for Viewpoint Shift Robustness
Monteagudo, Henrique Piñeiro; Salti, Samuele
2026
Abstract
We release VS-Sim, a synthetic dataset of road-scene images for studying the robustness of computer vision models to viewpoint shift across several tasks. The dataset includes images from a frontal camera placed at different positions, together with annotations for tasks in both the frontal view (depth estimation, semantic segmentation) and the Bird's Eye View (semantic segmentation). The paper also analyzes the robustness of a Bird's Eye View segmentation model to viewpoint shifts. Experiments indicate that viewpoint shift significantly degrades the performance of a model trained on data from a single viewpoint, and suggest that training on multiple viewpoints helps mitigate the performance drop across different scenarios.
