CSS Colloquium: Rune Nyrup, CSS
The Trouble with 'Trustworthy AI' (and Why We Need It)
Event information
Venue: Aud G1 (1532-116)
As societies grapple with emerging digital technologies, the term 'Trustworthy AI' has become central to many initiatives promising to help manage their impacts. This has prompted philosophers to debate whether the concept of trustworthiness ought to be applied to AI systems. Many say no, claiming that it implies a problematic form of anthropomorphism, while others say yes, pointing out that talk of trust in artefacts is commonplace in everyday English. My sympathies tend to lie with the latter camp. However, although I reject the charge of anthropomorphism, there remains something troubling about 'Trustworthy AI'. Trustworthiness judgements, I argue, necessarily presuppose shared normative expectations regarding the role the trustee plays within a broader social practice. Thus, applying the concept of trustworthiness to emerging technologies risks implicitly settling questions regarding the structure and function of social practices that ought to be up for explicit public contention. Yet the solution is not to jettison the concept of trustworthiness, but rather to insist on its contested nature and to actively contest its articulation within contemporary Trustworthy AI initiatives.
Coffee, tea, cake and fruit will be served before the colloquium at 2 pm.