
Proceedings of the 1st Workshop on Fairness and Bias in AI, AEQUITAS 2023 co-located with 26th European Conference on Artificial Intelligence (ECAI 2023)

Roberta Calegari; Michela Milano
2023

Abstract

Artificial Intelligence (AI) decision support systems are increasingly used across industries and sectors, including hiring, admissions, loans, healthcare, and crime prediction. Given persistent societal inequalities and discrimination, it is crucial to ensure that AI does not perpetuate these problems. At the same time, AI systems offer an opportunity to improve processes and repair injustices by identifying and addressing biases. Building trust in these systems requires understanding bias in AI and determining practical and ethically justified ways to mitigate it, which remains an ongoing challenge despite increased efforts in recent years. In this context, the first edition of AEQUITAS served as a workshop for discussing ideas, presenting research findings, and sharing preliminary work on the various facets of fairness and bias in AI. This preface sets the stage for the invited talks and papers that emerged from the AEQUITAS 2023 workshop, co-located with the ECAI 2023 conference, and introduces the topics and discussions that arose during the workshop, providing insight into the complex and multifaceted realm of fairness and bias in AI. The first edition of the workshop featured presentations of 12 high-quality papers. Accepted contributions ranged from foundational and theoretical results to practical experiences, case studies, and applications, covering a wide range of topics across the technical, social, and legal aspects of fairness and bias in AI.
Calegari, R., Aler Tubella, A., González Castañe, G., Dignum, V., Milano, M. (Eds.) (2023). Proceedings of the 1st Workshop on Fairness and Bias in AI, AEQUITAS 2023, co-located with the 26th European Conference on Artificial Intelligence (ECAI 2023). Aachen: CEUR-WS.
Roberta Calegari, Andrea Aler Tubella, Gabriel González Castañe, Virginia Dignum, Michela Milano
Files in this record:

File: xpreface.pdf (open access)
Type: Publisher's version (PDF)
License: Open Access licence, Creative Commons Attribution (CC BY)
Size: 219.93 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/962621