
Language Models Are Implicitly Continuous

Marro, Samuele; Evangelista, Davide; Angelo Huang, X.; La Malfa, Emanuele; Lombardi, Michele; Wooldridge, Michael J.
2025

Abstract

Language is typically modelled with discrete sequences. However, the most successful approaches to language modelling, namely neural networks, are continuous and smooth function approximators. In this work, we show that Transformer-based language models implicitly learn to represent sentences as continuous-time functions defined over a continuous input space. This phenomenon occurs in most state-of-the-art Large Language Models (LLMs), including Llama2, Llama3, Phi3, Gemma, Gemma2, and Mistral, and suggests that LLMs reason about language in ways that fundamentally differ from humans. Our work formally extends Transformers to capture the nuances of time and space continuity in both input and output space. Our results challenge the traditional interpretation of how LLMs understand language, with several linguistic and engineering implications.
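To make the abstract's central claim more concrete, here is a minimal, hypothetical sketch (ours, not the paper's formal construction) of what it means to lift a discrete token sequence into a continuous-time function over embedding space: discrete embeddings e_0, ..., e_{n-1} become a function f(t) that can be queried at any real-valued position t. The linear interpolation scheme, the function name continuous_sentence, and the toy data are illustrative assumptions only.

    # A minimal sketch, assuming linear interpolation between token embeddings;
    # this illustrates the general idea of a continuous-time view of a sentence,
    # not the paper's method.
    import numpy as np

    def continuous_sentence(embeddings: np.ndarray):
        """Lift discrete token embeddings e_0..e_{n-1} (shape [n, d]) into a
        function f(t) defined for any real t in [0, n-1]."""
        n = embeddings.shape[0]

        def f(t: float) -> np.ndarray:
            t = float(np.clip(t, 0.0, n - 1))
            lo = int(np.floor(t))       # nearest token index at or below t
            hi = min(lo + 1, n - 1)     # nearest token index above t
            w = t - lo                  # fractional offset between the two
            return (1.0 - w) * embeddings[lo] + w * embeddings[hi]

        return f

    # Toy usage: a 4-token sentence with 3-dimensional embeddings.
    rng = np.random.default_rng(0)
    emb = rng.standard_normal((4, 3))
    f = continuous_sentence(emb)
    print(f(1.0))   # recovers the embedding of token 1 exactly
    print(f(1.5))   # a point "between" tokens 1 and 2 in embedding space

At integer times the function reproduces the discrete token embeddings, while at fractional times it returns intermediate points that no token in the sentence corresponds to; in this framing, the abstract's claim is that Transformers behave coherently on such intermediate, continuous inputs.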
The Thirteenth International Conference on Learning Representations
pp. 1-41
Marro, S., Evangelista, D., Angelo Huang, X., La Malfa, E., Lombardi, M., Wooldridge, M.J. (2025). Language Models Are Implicitly Continuous.
Files in this item:

File: 11994_Language_Models_Are_Impl.pdf
  • Access: open access
  • Type: Publisher's version (PDF) / Version of Record
  • License: Open Access licence, Creative Commons Attribution (CC BY)
  • Size: 6.41 MB
  • Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/1032974
Citations
  • PMC: not available
  • Scopus: 1
  • Web of Science (ISI): not available