
Ho, S.M., Prandini, M., Callegati, F., Chakraborty, S., Juzek, T.S., and Liu, Y. (2025). Defending Reality: Human-AI Collaboration to Unveil Deepfake Information Manipulation.

DEFENDING REALITY: HUMAN-AI COLLABORATION TO UNVEIL DEEPFAKE INFORMATION MANIPULATION

Ho, Shuyuan Mary; Prandini, Marco; Callegati, Franco; Chakraborty, Shayok; Juzek, Thomas Stephan; and Liu, Yue
2025

Abstract

As deepfake information manipulation technology continues to evolve and propagate, its potential to mislead the public poses a growing threat to societal trust. This paper outlines our research agenda exploring the role of explainable AI (XAI) in cyber defense and its effectiveness in safeguarding reality. Our study examines mechanisms for unveiling deepfakes in ways that enhance sensemaking and strengthen individual cyber defense self-efficacy in distinguishing authentic from manipulated information. To achieve this, we designed and simulated human-AI collaboration experiments with participants from the United States and Italy in Spring 2025. These experiments will generate paired datasets of real and deepfake artifacts across audio, graphic, visual, and textual content. XAI, defined by the completeness and relevance of explanations regarding deepfake information, will be modeled based on the insights from the collaboration. Ultimately, this study contributes to social cybersecurity by empowering individuals and communities to recognize and defend against deepfake information manipulation.
SAIS 2025 Proceedings, pp. 1–6

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/1027272
