
Thériault, R., Tantari, D. (2026). Saddle hierarchy in dense associative memory. MACHINE LEARNING: SCIENCE AND TECHNOLOGY, 7(1), 1-36 [10.1088/2632-2153/ae3051].

Saddle hierarchy in dense associative memory

Thériault, Robin; Tantari, Daniele
2026

Abstract

Dense associative memory (DAM) models have been attracting renewed attention since they were shown to be robust to adversarial examples and closely related to cutting-edge machine learning paradigms, such as the attention mechanism and generative diffusion. We study a DAM built upon a three-layer Boltzmann machine with Potts hidden units, which represent data clusters and classes. Through a statistical mechanics analysis, we derive saddle-point equations that characterize both the stationary points of DAMs trained on real data and the fixed points of DAMs trained on synthetic data within a teacher-student framework. Based on these results, we propose a novel regularization scheme that makes training significantly more stable. Moreover, we show empirically that our DAM learns interpretable solutions to both supervised and unsupervised classification problems. Pushing our theoretical analysis further, we find that the weights learned by relatively small DAMs correspond to unstable saddle points in larger DAMs. We implement a network-growing algorithm that leverages this saddle-point hierarchy to drastically reduce the computational cost of training DAMs.
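For readers unfamiliar with the DAM family, the energy-based retrieval idea can be sketched with the classic polynomial-interaction model of Krotov and Hopfield. This is a minimal illustrative toy, not the paper's three-layer Potts-hidden-unit architecture: the interaction order n = 3, the pattern count, and the greedy spin-flip dynamics below are assumptions chosen only to show how a DAM denoises a corrupted pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumed for illustration): N neurons, P stored patterns,
# polynomial interaction order n.
N, P, n = 64, 5, 3
xi = rng.choice([-1, 1], size=(P, N))  # random binary patterns to store


def energy(sigma):
    # DAM energy: E(sigma) = -sum_mu F(xi_mu . sigma), with F(x) = x^n.
    return -np.sum((xi @ sigma).astype(float) ** n)


def update(sigma):
    # One greedy sweep: set each spin to whichever sign lowers the energy.
    sigma = sigma.copy()
    for i in range(N):
        for s in (-1, 1):
            trial = sigma.copy()
            trial[i] = s
            if energy(trial) < energy(sigma):
                sigma = trial
    return sigma


# Corrupt pattern 0 by flipping 8 of its 64 spins, then retrieve it.
sigma = xi[0].copy()
flip = rng.choice(N, size=8, replace=False)
sigma[flip] *= -1
sigma = update(sigma)
overlap = (sigma @ xi[0]) / N  # 1.0 means perfect retrieval
```

Because F grows faster than linearly, each stored pattern carves a much deeper, narrower energy well than in a quadratic Hopfield network, which is what gives DAMs their large storage capacity and robustness to corrupted inputs.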
Files in this record:

paper.pdf (open access)
Description: paper
Type: Publisher's version (Version of Record)
License: open-access license, Creative Commons Attribution (CC BY)
Size: 3.94 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/1038921
Citations
  • Scopus: 1
  • Web of Science: 0
  • OpenAlex: not available