
Generative negative replay for continual learning

Graffieti, Gabriele; Maltoni, Davide; Pellegrini, Lorenzo; Lomonaco, Vincenzo
2023

Abstract

Learning continually is a key aspect of intelligence and a necessary ability for solving many real-life problems. One of the most effective strategies to control catastrophic forgetting, the Achilles' heel of continual learning, is storing part of the old data and replaying them interleaved with new experiences (also known as the replay approach). Generative replay, which uses generative models to provide replay patterns on demand, is particularly intriguing; however, it has been shown to be effective mainly under simplified assumptions, such as simple scenarios and low-dimensional data. In this paper, we show that, while the generated data are usually unable to improve classification accuracy on the old classes, they can be effective as negative examples (or antagonists) for better learning the new classes, especially when the learning experiences are small and contain examples of just one or a few classes. The proposed approach is validated on complex class-incremental and data-incremental continual learning scenarios (CORe50 and ImageNet-1000) composed of high-dimensional data and a large number of training experiences: a setup where existing generative replay approaches usually fail.
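To make the core idea of the abstract concrete, below is a minimal PyTorch-style sketch of negative replay as we read it: generated samples of old classes provide no positive supervision for those classes and are used only to push down the logits of the classes in the current experience. The function name negative_replay_loss, the argument new_class_ids, the binary-cross-entropy form of the negative term, and the equal weighting of the two terms are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def negative_replay_loss(model, new_x, new_y, gen_x, new_class_ids):
    # Hypothetical sketch, not the paper's exact loss.
    # Real data from the current experience: standard cross-entropy.
    loss_new = F.cross_entropy(model(new_x), new_y)

    # Generated old-class samples act only as negatives (antagonists):
    # we penalize any activation of the new-class output units on them,
    # without reinforcing the old-class outputs.
    gen_logits = model(gen_x)[:, new_class_ids]
    loss_neg = F.binary_cross_entropy_with_logits(
        gen_logits, torch.zeros_like(gen_logits))

    return loss_new + loss_neg

In use, gen_x would be sampled on demand from the generative model conditioned on previously seen classes, so no stored exemplars are needed; only the negative term depends on the replayed data.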
Generative negative replay for continual learning / Graffieti, Gabriele; Maltoni, Davide; Pellegrini, Lorenzo; Lomonaco, Vincenzo. - In: NEURAL NETWORKS. - ISSN 0893-6080. - ELETTRONICO. - 162:(2023), pp. 369-383. [10.1016/j.neunet.2023.03.006]
Files in this item:

File: Generative_Negative_Replay__UniBo_IRIS_version.pdf
Open Access since 10/03/2024
Type: Postprint
License: Open Access License. Creative Commons Attribution - NonCommercial - NoDerivatives (CC BY-NC-ND)
Size: 1.12 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/920875
Citations
  • PMC: ND
  • Scopus: 1
  • Web of Science: 1