NLG-Metricverse: An End-to-End Library for Evaluating Natural Language Generation

Giacomo Frisoni; Antonella Carbonaro; Gianluca Moro

Abstract

Driven by deep learning breakthroughs, natural language generation (NLG) models have made steady progress in recent years, influencing a wide range of tasks. However, since our capacity to assess artificial text lags behind our ability to generate text that is indistinguishable from human writing, it is paramount to develop and apply ever better automatic evaluation metrics. To help researchers judge the effectiveness of their models broadly, we introduce NLG-Metricverse, an end-to-end open-source Python library for NLG evaluation. Our framework provides a living collection of NLG metrics in a unified, easy-to-use environment, supplying tools to efficiently apply, analyze, compare, and visualize them. This includes (i) extensive support for heterogeneous automatic metrics, with n-arity management; (ii) meta-evaluation of individual metric performance and of metric-metric and metric-human correlations; (iii) graphical interpretations that help humans build score intuitions; and (iv) formal categorization and convenient documentation to accelerate metric understanding. NLG-Metricverse aims to increase the comparability and replicability of NLG research, hopefully stimulating new contributions in the area.
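
For orientation, the sketch below shows how such a multi-metric scorer might be invoked. This record does not document the library's API: the module name nlgmetricverse, the load_metric helper, and the NLGMetricverse class signature are assumptions based on the library's description, so consult the official repository for the actual interface.

    # Hypothetical usage sketch; the nlgmetricverse module, load_metric
    # helper, and NLGMetricverse signature are assumed, not confirmed here.
    from nlgmetricverse import NLGMetricverse, load_metric

    # Candidate model outputs and human references; a list of references
    # per prediction illustrates the n-arity management the abstract mentions.
    predictions = ["The cat sat on the mat."]
    references = [["A cat is sitting on the mat.", "The cat lies on the mat."]]

    # Build a scorer over several heterogeneous metrics at once, then
    # apply them to the same prediction/reference pairs in a single call.
    metrics = [load_metric("bleu"), load_metric("rouge")]
    scorer = NLGMetricverse(metrics=metrics)
    scores = scorer(predictions=predictions, references=references)
    print(scores)
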
2022
Proceedings of the 29th International Conference on Computational Linguistics, COLING 2022
pp. 3465–3479
NLG-Metricverse: An End-to-End Library for Evaluating Natural Language Generation / Giacomo Frisoni, Antonella Carbonaro, Gianluca Moro, Andrea Zammarchi, Marco Avagnano. - ELECTRONIC. - (2022), pp. 3465-3479. (Paper presented at the 29th International Conference on Computational Linguistics (COLING 2022), held in Gyeongju, Republic of Korea, 12–17 October 2022).
Giacomo Frisoni, Antonella Carbonaro, Gianluca Moro, Andrea Zammarchi, Marco Avagnano
Files in this product:

File: 2022.coling-1.306.pdf
Access: open access
Type: Publisher's version (PDF)
License: Open Access License. Creative Commons Attribution (CC BY)
Size: 1.22 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/896084
Citations
  • PMC: N/A
  • Scopus: 7
  • Web of Science: N/A