Continuous learning in single-incremental-task scenarios

Maltoni, Davide; Lomonaco, Vincenzo
2019

Abstract

It was recently shown that architectural, regularization, and rehearsal strategies can be used to train deep models sequentially on a number of disjoint tasks without forgetting previously acquired knowledge. However, these strategies are still unsatisfactory if the tasks are not disjoint but constitute a single incremental task (e.g., class-incremental learning). In this paper, we point out the differences between multi-task and single-incremental-task scenarios and show that well-known approaches such as LWF, EWC, and SI are not ideal for incremental-task scenarios. We then propose a new approach, denoted AR1, that specifically combines architectural and regularization strategies. AR1's memory and computation overhead is very small, making it suitable for online learning. When tested on CORe50 and iCIFAR-100, AR1 outperformed existing regularization strategies by a good margin.
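The regularization strategies the abstract refers to (EWC, SI) discourage changes to weights deemed important for previously learned tasks by adding a weighted quadratic penalty to the loss. As a minimal illustrative sketch of that idea in PyTorch (not the paper's AR1 algorithm; the function name, the `lam` default, and the dict-based bookkeeping are assumptions for illustration):

```python
import torch
import torch.nn as nn

def ewc_penalty(model: nn.Module, fisher: dict, anchor_params: dict,
                lam: float = 1000.0) -> torch.Tensor:
    """EWC-style penalty: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.

    `fisher` maps parameter names to diagonal Fisher estimates and
    `anchor_params` maps them to the weights learned on earlier tasks;
    both are assumed to have been stored after the previous training stage.
    """
    penalty = sum((fisher[n] * (p - anchor_params[n]) ** 2).sum()
                  for n, p in model.named_parameters() if n in fisher)
    return 0.5 * lam * penalty

# Hypothetical training step on a new task (model, criterion, optimizer,
# and the stored fisher/anchor_params are defined elsewhere):
#   loss = criterion(model(x), y) + ewc_penalty(model, fisher, anchor_params)
#   loss.backward(); optimizer.step()
```

The key design point is that the penalty is per-weight: weights with a large Fisher value (important for old tasks) are strongly anchored, while unimportant weights remain free to adapt to the new task.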
Maltoni, D., & Lomonaco, V. (2019). Continuous learning in single-incremental-task scenarios. Neural Networks, 116, 56-73. https://doi.org/10.1016/j.neunet.2019.03.010
Files in this record:

File: NN116-2019.pdf
Access: open access
Type: Postprint
License: Open Access licence. Creative Commons Attribution - NonCommercial - NoDerivatives (CC BY-NC-ND)
Size: 3.42 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/685372
Citations
  • PMC: 12
  • Scopus: 175
  • Web of Science: 140