Can an Algorithm be Fair? Intercultural Biases and Critical Thinking in Generative Artificial Intelligence Social Uses

Chiara Panciroli; Pier Cesare Rivoltella
2023

Abstract

«As an AI language model, I do not have feelings or emotions, so I cannot be “fair” or “unfair”. My responses are based on the algorithms and programming used to create me, which aim to provide informative and accurate answers to the best of my ability. However, like any machine learning model, my responses may be influenced by the quality and quantity of the data used to train me, as well as any biases in that data. Therefore, it is important to use critical thinking and consider multiple sources of information when making decisions or drawing conclusions based on my responses». This is ChatGPT's answer to our question: “Is the ChatGPT algorithm fair?”. It is also the starting point of this article, which discusses whether it is possible to speak of the fairness or unfairness of AI, especially in the case of intercultural biases. After a brief definition of the concept of fairness, we report on research into cross-cultural prejudice in ChatGPT use. The aim of the article is to stress the importance of critical thinking, both for designers in coding activities and for the people who use these applications.
Chiara Panciroli, Pier Cesare Rivoltella (2023). Can an Algorithm be Fair? Intercultural Biases and Critical Thinking in Generative Artificial Intelligence Social Uses. SCHOLÉ, 61(2), 67-84.
Files in this item:

File: Scholé 2_2023 - Panciroli Rivoltella 67-84 Article.pdf
Type: Publisher's version (PDF)
Licence: Restricted-access licence
Access: restricted
Size: 1.45 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/964901