Training in Co-Creation as a Methodological Approach to Improve AI Fairness

Slesinger, Ian; Yalaz, Evren; Rizou, Stavroula; Gibin, Marta; Krasanakis, Emmanouil; Papadopoulos, Symeon
2024

Abstract

Participatory design (PD) and co-creation (Co-C) approaches to building Artificial Intelligence (AI) systems have become increasingly popular for ensuring greater social inclusion and fairness in technological transformation by accounting for the experiences of vulnerable or disadvantaged social groups. However, such design work is challenging in practice, partly because of the inaccessible domain of technical expertise inherent to AI design. This paper evaluates a methodological approach that makes addressing AI bias more accessible by incorporating a training component on AI bias into a Co-C process with vulnerable and marginalized participant groups. The approach was applied by socio-technical researchers involved in creating an AI bias mitigation developer toolkit. The paper's analysis emphasizes that critical reflection on when training is appropriate in Co-C, and on how such training should be designed and implemented, is necessary to ensure that training enables a genuinely more inclusive approach to AI systems design, given that those most at risk of being adversely affected by AI technologies are often not the intended end-users of those technologies. This is acutely relevant as Co-C exercises are increasingly used to demonstrate regulatory compliance and ethical practice by powerful institutions and actors developing AI systems, particularly in the ethical and regulatory environment coalescing around the European Union's recent AI Act.
Slesinger, I., Yalaz, E., Rizou, S., Gibin, M., Krasanakis, E., & Papadopoulos, S. (2024). Training in Co-Creation as a Methodological Approach to Improve AI Fairness. Societies, 14(12), 1–19. https://doi.org/10.3390/soc14120259
Files in this record:

societies-14-00259 (1).pdf

Open access

Type: Publisher's version (PDF) / Version of Record
License: Open Access License. Creative Commons Attribution (CC BY)
Size: 973.17 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/1000635
Citations
  • PubMed Central: not available
  • Scopus: 2
  • Web of Science: 0