
Do translators dream of electric brains?

Muñoz Martín, Ricardo
2025

Abstract

Conceptual Metaphor Theory suggests that human thought is largely metaphorical, mapping concrete experiences onto abstract concepts. Translating is often understood through metaphors such as TRANSLATING IS BUILDING BRIDGES, which shape how the task is approached but fail to capture its full complexity. Similarly, the metaphor THE BRAIN IS A COMPUTER oversimplifies human cognition, ignoring its dynamic, adaptable, and context-dependent nature. Artificial neural networks, though loosely inspired by biological systems, rely on statistical patterns rather than genuine understanding. Large Language Models (LLMs) excel at producing fluent drafts but struggle with nuanced, context-dependent tasks. Misconceptions about AI capabilities often stem from oversimplified metaphors, fostering unrealistic expectations that machines will replace humans. Historical cycles of over-hyped machine-translation breakthroughs highlight the persistence of such misconceptions. LLMs largely renew existing technologies rather than transforming the market: while they enhance translation workflows, they also increase reliance on less rewarding post-editing work. Techno-hype is also driving declining translator enrollment and the closure of academic programs, even as market projections suggest strong growth. Despite these challenges, humans remain essential for managing ambiguity, integrating context, and making ethical decisions. Progress in multilectal communication and in AI would benefit from abandoning simplistic, binary views of humans versus machines.
Muñoz Martín, R. (2025). Do translators dream of electric brains? Fachsprache 47(1-2), 88-108.
File in this record:
Fachsprache 1-2_25_Muñoz.pdf (Version of Record, Adobe PDF, 607.23 kB; open-access license; under embargo until 31/10/2026)
Description: not totally final, some minor format modifications

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/1014153