
Molly H. Powell and Torben Esbo Agergaard join the Centre for Science Studies

They will both be working on the project Towards Responsible Explainable AI Technologies (TREAT), headed by Rune Nyrup

The Centre for Science Studies (CSS) is happy to announce the appointment of Molly H. Powell, who recently completed a PhD in Political Theory at the University of Manchester, and Torben E. Agergaard, M.Sc. in Science Studies from CSS. Molly takes up a three-year postdoc position, while Torben joins the centre as a PhD student.

Molly and Torben will be working on the project Towards Responsible Explainable AI Technologies (TREAT), headed by Rune Nyrup. The project focuses on so-called “Explainable AI” (XAI): methods and tools that enable human understanding of otherwise complex AI systems. The project is guided by the assertion that although XAI may succeed in resolving some of AI’s problems related to transparency and accountability, new ethical issues may arise as XAI approaches evolve rapidly. For instance, troubling uses of XAI may lull users into a false sense of understanding, which may in turn lead to overtrust in AI systems or leave users vulnerable to manipulation or outright deception.

The goal of the TREAT project is therefore to derive fresh ethical perspectives and frameworks on XAI from philosophical and political theories, along with empirical case studies, to guide future research, applications, and policy making in the field. To this end, Molly and Torben will contribute to different work packages. Molly, a scholar in political theory from the University of Manchester, will bring her insights to a work package on explanatory legitimacy, focused on XAI in contexts where the recipients and/or providers of explanations have diverging explanatory interests and values. Torben, who has previously worked on several CSS projects related to public understanding of and engagement with contemporary issues in science and technology, will focus his research on a work package on explanatory honesty, which explores the boundaries between epistemically helpful and deceptive applications of XAI.