Sartori, L., Musmeci, M., Cannizzaro, S., & Binelli, C. (2026). When the white coat meets the code: medical professionals’ negotiating with artificial intelligence, trust and boundary work. Health, Risk & Society, 28, 1–21. https://doi.org/10.1080/13698575.2026.2662006
When the white coat meets the code: medical professionals’ negotiating with artificial intelligence, trust and boundary work
Sartori, Laura; Musmeci, Marianna; Cannizzaro, Sara; Binelli, Chiara
2026
Abstract
As AI is deployed in healthcare contexts, medical professionals face technology-driven challenges, such as maintaining control over the diagnostic process and renegotiating their tasks and areas of expertise. In this article, we explore the social and professional implications of AI in healthcare contexts in Italy. We do this by investigating the multiple factors that co-construct trust in AI systems. We also examine the various forms of boundary work that professionals use to redefine their authority and professional autonomy. We employ a mixed-methods research design, including a survey (n = 193) and 22 in-depth interviews with clinicians, addressing clinicians’ AI awareness and knowledge, use of AI in medical practice, trust relations and concerns regarding medical professionalism. Our findings suggest that different assemblages of trustworthiness coalesced into three trusting attitudes (relational-practical, institutional-regulatory and epistemic-infrastructural), showing how trust in medical tools is being configured in the AI age. Clinicians reported performing three strategies of boundary work (defensive, regulatory and transformative) in negotiating their roles and expertise. This boundary work was narrated as a response to working contexts in which AI’s influence led to contested workflows, altered decision-making authority and redefined professional boundaries.