
Investigating Vision Transformers for Bridging Domain Gap in Satellite Pose Estimation

Lotti A.; Modenini D.; Tortora P.
2023

Abstract

Autonomous onboard estimation of the pose from a two-dimensional image is a key technology for many space missions requiring an active chaser to service an uncooperative target. While neural networks have shown superior performance compared to classical image processing algorithms, their adoption still faces two main limitations: poor robustness on real pictures when trained on synthetic images, and a poor accuracy-latency trade-off. Recently, vision Transformers have emerged as a promising approach to domain shift in computer vision owing to their ability to model long-range dependencies. In this work we therefore provide a study on vision Transformers as a solution for bridging the domain gap in the framework of satellite pose estimation. We first present an algorithm leveraging Swin Transformers and adversarial domain adaptation, which achieved the fourth and fifth places at the 2021 edition of ESA's Satellite Pose Estimation Competition, which challenged researchers to develop solutions capable of bridging the domain gap. We summarize the main steps we followed, showing how larger models and data augmentation contributed to the final accuracy. We then illustrate the results of a subsequent development that tackles the limitations of our first solution by proposing a lightweight variant of our algorithm that does not require access to test images. Our results show that vision Transformers can be a suitable tool for bridging the domain gap in satellite pose estimation, although with limited scaling capabilities.
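
The abstract mentions a pipeline combining a Swin Transformer backbone with adversarial domain adaptation. The sketch below is only an illustrative reconstruction of that general pattern, not the paper's actual implementation: the network heads, loss terms, output parameterization, and hyper-parameters are not detailed in the record, so the `SwinDANN` class, the 7-DoF regression head (quaternion + translation), the `lam` weight, and the use of the `timm` library are all assumptions introduced for illustration.

```python
# Illustrative sketch only: a Swin backbone with DANN-style adversarial domain
# adaptation (gradient reversal on a synthetic-vs-real domain classifier).
# The paper's actual heads, losses, and hyper-parameters are not given in the
# abstract; everything below the backbone is an assumption.
import torch
import torch.nn as nn
import timm


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, gradient reversal in the backward pass."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse and scale the gradient flowing back into the feature extractor.
        return -ctx.lam * grad_output, None


class SwinDANN(nn.Module):
    def __init__(self, lam: float = 1.0):
        super().__init__()
        # Swin backbone as a feature extractor (num_classes=0 -> pooled features).
        # pretrained=False keeps the example self-contained (no weight download).
        self.backbone = timm.create_model(
            "swin_tiny_patch4_window7_224", pretrained=False, num_classes=0
        )
        feat_dim = self.backbone.num_features
        self.lam = lam
        # Assumed pose head: direct 7-DoF regression (quaternion + translation).
        self.pose_head = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, 7)
        )
        # Domain classifier trained adversarially (synthetic vs. real images).
        self.domain_head = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, 1)
        )

    def forward(self, x):
        feats = self.backbone(x)
        pose = self.pose_head(feats)
        domain_logit = self.domain_head(GradReverse.apply(feats, self.lam))
        return pose, domain_logit


if __name__ == "__main__":
    model = SwinDANN()
    images = torch.randn(2, 3, 224, 224)  # dummy batch of chaser camera frames
    pose, domain_logit = model(images)
    print(pose.shape, domain_logit.shape)  # torch.Size([2, 7]) torch.Size([2, 1])
```

In this pattern the pose loss is computed on labeled synthetic images only, while the domain classifier sees both domains; the gradient reversal pushes the backbone toward features that are useful for pose regression but indistinguishable across the synthetic-to-real gap.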
Studies in Computational Intelligence, vol. 1088 (2023), pp. 299-314
Investigating Vision Transformers for Bridging Domain Gap in Satellite Pose Estimation / Lotti A.; Modenini D.; Tortora P. - ELECTRONIC. - 1088:(2023), pp. 299-314. (Paper presented at the conference The Use of Artificial Intelligence for Space Applications, held in Reggio Calabria, 1-3 September 2022) [10.1007/978-3-031-25755-1_20].

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/939920