RSplitzero: generalized zero-shot learning in remote sensing across attribute splits with single and multi-modal representations

Stacchio, L.; Nepi, L.; Paolanti, M.; Pierdicca, R.
2025

Abstract

Zero-shot learning (ZSL) emerged as a way to classify unseen categories using semantic knowledge from known ones. While widely studied in computer vision, its use in remote sensing (RS) is still limited. Given RS's high intra-class variability, fine-grained distinctions, and scarce labeled data, ZSL is a promising classification solution. We investigate the generalizability of ZSL methods in the RS domain, focusing on attribute-level annotations. We build on existing work in Attribute-Based ZSL (ABZSL) to evaluate the generalizability and robustness of deep-learning backbones across different semantic splits. We extend this framework to RS by enriching the WHU-RS19 dataset with novel attribute-level annotations, defining the WHU-RS19 ABZSL dataset. These annotations comprise 38 attributes, providing the first attribute-based benchmark for ZSL in RS. We evaluate generative ZSL methods under different class and attribute splitting strategies, using features extracted by vision and multimodal backbones. Our results show that ZSL performance is sensitive to both the backbone and the splitting strategy. DINOv2-based backbones achieved the highest generalization and robustness scores on unseen classes when paired with a specific generative ZSL approach (i.e. TFVAEGAN) and attribute splitting strategy (i.e. PCA attribute splitting), reaching a Generalized Harmonic Accuracy Mean of 84.30 and 70.55 on seen-unseen class splits of 15-4 and 13-6, respectively.
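For context, the Generalized Harmonic Accuracy Mean reported above is the standard generalized-ZSL harmonic mean of per-class accuracy on seen and unseen classes. A minimal sketch follows; the function name and the example inputs are illustrative, not values taken from the paper.

```python
def harmonic_accuracy_mean(acc_seen: float, acc_unseen: float) -> float:
    """Standard GZSL metric: H = 2 * S * U / (S + U).

    H is high only when accuracy on seen (S) and unseen (U) classes
    is balanced; it collapses toward 0 if either term is near 0.
    """
    if acc_seen + acc_unseen == 0:
        return 0.0
    return 2.0 * acc_seen * acc_unseen / (acc_seen + acc_unseen)


# Illustrative inputs only (not results from the paper):
print(harmonic_accuracy_mean(88.0, 81.0))  # -> 84.355...
```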
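The abstract does not spell out the PCA attribute splitting strategy, so the sketch below is one plausible reading under an explicit assumption: attributes are scored by their loadings on the leading principal components of the class-attribute matrix and then partitioned by that score. The matrix, the number of components, and the split rule are all illustrative, not the paper's procedure.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical class-attribute matrix: 19 WHU-RS19 classes x 38 attributes.
rng = np.random.default_rng(0)
class_attributes = rng.random((19, 38))

# Fit PCA over the classes; the absolute loadings indicate how strongly
# each attribute contributes to the main axes of semantic variation.
pca = PCA(n_components=5).fit(class_attributes)
attribute_scores = np.abs(pca.components_).sum(axis=0)  # one score per attribute

# Illustrative split rule: top half of attributes by score vs. the rest.
order = np.argsort(attribute_scores)[::-1]
split_a, split_b = order[:19], order[19:]
print("split A:", sorted(split_a.tolist()))
print("split B:", sorted(split_b.tolist()))
```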
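Visual features of the kind the abstract describes can be extracted with a DINOv2 backbone through the official torch.hub entry point. This is standard DINOv2 usage rather than the paper's exact pipeline, and "scene.jpg" is a placeholder path.

```python
import torch
from PIL import Image
from torchvision import transforms

# Load a pretrained DINOv2 ViT-B/14 backbone (downloads weights on first use).
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14")
model.eval()

# Standard ImageNet-style preprocessing; 224 is divisible by the 14-px patch size.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("scene.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    features = model(image)  # global (CLS) descriptor, shape (1, 768)
print(features.shape)
```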
Stacchio, L., Nepi, L., Paolanti, M., & Pierdicca, R. (2025). RSplitzero: generalized zero-shot learning in remote sensing across attribute splits with single and multi-modal representations. International Journal of Digital Earth, 18(2), 1-22. https://doi.org/10.1080/17538947.2025.2551869
Files in this record:

File: h_11585_1028123.pdf (open access)
Type: Publisher's version (PDF) / Version of Record
License: Open Access license. Creative Commons Attribution - NonCommercial (CC BY-NC)
Size: 2.17 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/1028123
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: 0