Can an Algorithm be Fair? Intercultural Biases and Critical Thinking in Generative Artificial Intelligence Social Uses

Chiara Panciroli; Pier Cesare Rivoltella
2023

Abstract

«As an AI language model, I do not have feelings or emotions, so I cannot be “fair” or “unfair”. My responses are based on the algorithms and programming used to create me, which aim to provide informative and accurate answers to the best of my ability. However, like any machine learning model, my responses may be influenced by the quality and quantity of the data used to train me, as well as any biases in that data. Therefore, it is important to use critical thinking and consider multiple sources of information when making decisions or drawing conclusions based on my responses». This is ChatGPT’s answer to our question: “Is the ChatGPT algorithm fair?”. It is also the starting point of this article, which discusses whether it is possible to speak of the fairness or unfairness of AI, especially in the case of intercultural biases. After briefly defining the concept of fairness, we report on a study of cross-cultural prejudice in the use of ChatGPT. The aim of the article is to stress the importance of Critical Thinking, both for designers in their coding activity and for people using the applications.
Can an Algorithm be Fair? Intercultural Biases and Critical Thinking in Generative Artificial Intelligence Social Uses / Chiara Panciroli; Pier Cesare Rivoltella. - In: SCHOLÉ. - ISSN 2611-9978. - STAMPA. - 2:(2023), pp. 67-84.


Use this identifier to cite or link to this document: https://hdl.handle.net/11585/964901