Rachele Carli (2022). Manipulation through AI systems. Pisa: Jean Monnet Centre of Excellence on the Regulation of Robotics and AI.
Manipulation through AI systems
Rachele Carli
2022
Abstract
AI-based systems are becoming increasingly pervasive, with the prospect – more or less futuristic – of being included in many support, care, and entertainment activities that directly involve human beings. For this reason, research is focusing on making them accepted by end users, engendering trust in and familiarity with these applications. The very expedients used to achieve this goal, however, can generate manipulative dynamics that expose individuals to danger from a physical, economic, and – increasingly – psychological point of view. The analysis presented here aims to investigate this phenomenon, starting from the features implemented to facilitate interaction (Section 2) and presenting a number of concrete cases that make it easier to grasp the possible implications (Section 2.1). This lays the foundations from which to examine the recent AI Act, highlighting its merits and shortcomings (Section 3). Finally, recommendations are made regarding the lines to be followed at the legal and regulatory level (Section 4), in order to resolve the problematic aspects of the current approach and to ensure a European development of new technologies that prioritises the protection of users' integrity and fundamental rights.