How are public preferences relevant to the ethical use of AI? Theoretical considerations and empirical findings
Frederic Gerdon
Mannheim Centre for European Social Research (MZES) at University of Mannheim
Abstract
How can we ethically design and use systems that draw on “artificial intelligence” (AI)? There is a large scholarly and political debate about how to ensure that AI systems provide societal benefits and conform to ethical standards. The ethical evaluation of AI systems entails a variety of components, including the assessment of the objective risks a system poses, e.g., to fundamental rights and critical infrastructures, which is a key element of the EU’s AI Act. One important, but sometimes overlooked, component of these evaluations is public preferences surrounding the use of AI-based technologies. Drawing on the concept of “social license” (as proposed by Gunningham), I argue that public perceptions are relevant from both ethical and practical perspectives. More specifically, I outline how public concerns about the use of AI can relate to different components of AI systems, including the kind of data used and the extent to which the process is automated. Furthermore, I draw on the theory of “contextual integrity”, originally developed in privacy research, to argue that public perceptions of AI systems need to be measured for specific social contexts. I then discuss how researchers can measure public preferences in ways that are sensitive to processes and contexts. I draw on empirical work that, for example, uses survey experiments to show that individuals respond to specific alterations in the design of AI systems, such as the context-relatedness of the data used. However, the results also suggest skepticism toward fully automated decision-making (as one important AI-based technology) across several social contexts. I close by highlighting that public preferences are only one component of ethical evaluations and that public acceptance on its own does not constitute sufficient legitimization of high-risk AI systems.