
Detecting Morphing Attacks via Continual Incremental Training / Lorenzo Pellegrini, Guido Borghi, Annalisa Franco, Davide Maltoni. - ELECTRONIC. - (2023), pp. 0-9. (Paper presented at the 2023 IEEE International Joint Conference on Biometrics, held in Ljubljana on 25/09/2023).

Detecting Morphing Attacks via Continual Incremental Training

Lorenzo Pellegrini; Guido Borghi; Annalisa Franco; Davide Maltoni
2023

Abstract

Scenarios in which restrictions on data transfer and storage prevent the composition of a single dataset – possibly drawing on different data sources – for batch-based training make the development of robust models particularly challenging. We hypothesize that the recent Continual Learning (CL) paradigm may represent an effective solution to enable incremental training, even across multiple sites. Indeed, a basic assumption of CL is that once a model has been trained, old data can no longer be used in successive training iterations and, in principle, can be deleted. Therefore, in this paper, we investigate the performance of different Continual Learning methods in this scenario, simulating a learning model that is updated whenever a new chunk of data, possibly of variable size, becomes available. Experimental results reveal that a particular CL method, namely Learning without Forgetting (LwF), is one of the best-performing algorithms. We then investigate its usage and parametrization in Morphing Attack Detection and Object Classification tasks, specifically with respect to the amount of new training data that becomes available.
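The LwF method mentioned in the abstract adds a knowledge-distillation term to the new-task loss, penalizing the updated model when its outputs drift away from those of the frozen previous model. Below is a minimal NumPy sketch of such an objective; the function name `lwf_loss` and the parameters `lam` (distillation weight) and `T` (softmax temperature) are illustrative assumptions, not the paper's exact parametrization.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax over the last axis (numerically stable)
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def lwf_loss(new_logits, old_logits, labels, lam=1.0, T=2.0):
    """LwF-style objective: cross-entropy on the current chunk's labels
    plus a distillation term keeping the updated model's (softened)
    outputs close to those of the frozen old model."""
    n = new_logits.shape[0]
    probs = softmax(new_logits)
    ce = -np.mean(np.log(probs[np.arange(n), labels] + 1e-12))
    # Distillation: cross-entropy between softened old and new outputs
    p_old = softmax(old_logits, T)
    p_new = softmax(new_logits, T)
    kd = -np.mean(np.sum(p_old * np.log(p_new + 1e-12), axis=-1))
    return ce + lam * kd
```

When the old and new logits coincide, the distillation term reduces to the entropy of the old outputs (its minimum for a fixed new distribution), so any drift strictly increases the loss.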
Proceedings of 2023 IEEE International Joint Conference on Biometrics
pp. 0-9

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/949498
