

A Fine-Tuning Pipeline with Small Conversational Data for Healthcare Chatbot

Aguzzi G.; Magnini M.; Viroli M.
2025

Abstract

Large language models (LLMs) have driven significant advances in many natural language processing (NLP) tasks, proving to be a core component in the design of conversational agents. In this paper, we focus on the development of a chatbot aimed at supporting patients in managing their health conditions. In this context, while LLMs are well suited to chatbot development, relying on remote services raises concerns about privacy, reliability, and deployment costs. Smaller models offer a more practical alternative, but they often produce suboptimal results with in-context learning, especially when only limited conversational data are available. To address these challenges, we propose a pipeline for fine-tuning smaller models, thereby enabling style transfer toward physician-like replies. A key component of this pipeline is a data augmentation module that leverages LLMs to generate synthetic data, thus expanding the typically small original dataset of patient question-physician answer pairs. We evaluate this approach on a hypertension-related conversational dataset, showing that the fine-tuned model outperforms the baseline in both automatic metrics and human evaluation.
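The augmentation step described above can be illustrated with a minimal sketch: an LLM is prompted with a real patient question-physician answer pair and asked to produce paraphrased variants, expanding a small dataset before fine-tuning. The function names, prompt wording, and the `call_llm` stub below are hypothetical illustrations of the general idea, not the authors' actual pipeline.

```python
# Sketch of LLM-based data augmentation for a small Q-A dataset.
# `call_llm` stands in for any chat-completion API (assumption).

def build_augmentation_prompt(question, answer):
    # Ask the model to paraphrase while keeping medical content and tone.
    return (
        "You are a physician. Paraphrase the following patient question "
        "and your answer, preserving the medical content and a "
        "professional, empathetic tone.\n"
        f"Question: {question}\nAnswer: {answer}"
    )

def augment_dataset(pairs, call_llm, n_variants=2):
    """Expand a list of (question, answer) pairs with synthetic variants."""
    augmented = list(pairs)  # keep the original pairs
    for q, a in pairs:
        prompt = build_augmentation_prompt(q, a)
        for _ in range(n_variants):
            reply = call_llm(prompt)  # expected: "Question: ...\nAnswer: ..."
            lines = reply.splitlines()
            if len(lines) >= 2 and lines[0].startswith("Question: "):
                augmented.append(
                    (lines[0][len("Question: "):], lines[1][len("Answer: "):])
                )
    return augmented

# Toy stub in place of a real model, only to show the data flow.
def fake_llm(prompt):
    return ("Question: Is my blood pressure too high?\n"
            "Answer: Let's review your readings together.")

pairs = [("My BP is 150/95, should I worry?",
          "Values above 140/90 warrant a follow-up visit.")]
print(len(augment_dataset(pairs, fake_llm)))  # 1 original + 2 synthetic variants
```

In practice, the synthetic pairs would then feed a supervised fine-tuning run of the smaller model; a filtering step on the generated variants is usually advisable before training.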
Artificial Intelligence in Medicine. AIME 2025, pp. 1-5
Aguzzi, G., Magnini, M., Pengo, M.F., Viroli, M., Montagna, S. (2025). A Fine-Tuning Pipeline with Small Conversational Data for Healthcare Chatbot. Cham, Switzerland: Springer [10.1007/978-3-031-95841-0_1].
Aguzzi, G.; Magnini, M.; Pengo, M. F.; Viroli, M.; Montagna, S.
Files in this product:
paper_46.pdf (Adobe PDF, 355.55 kB)

Embargo until 21/06/2026

Type: Postprint / Author's Accepted Manuscript (AAM) - version accepted for publication after peer review
License: Free open access license

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/1026168
Citations
  • PMC: N/A
  • Scopus: 0
  • Web of Science: 0