Session 06 – Part 1: Marc Hauer (Germany)

My AI discriminates? How could this happen and who is to blame?

For some years now, artificial intelligence methods have been used in many areas of daily life, and many applications have been criticized as discriminatory. There are several ways to deal with such cases: training datasets can be improved to reduce discriminatory behavior, discriminatory model outputs can be corrected post hoc, or processes can be established that make discriminatory results usable. In any case, the underlying assumption is that discrimination can be measured.
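
To illustrate what "measuring discrimination" can mean in practice, here is a minimal sketch of one common operationalization, demographic parity, which compares the rate of positive model decisions across groups. This sketch is not taken from the talk; the function name, group labels, and example data are illustrative assumptions.

    # Minimal sketch: demographic parity difference (illustrative, not from the talk).
    # decisions: 0/1 model outputs; groups: group label per decision.

    def demographic_parity_difference(decisions, groups):
        """Largest gap in positive-decision rates between any two groups."""
        rate = {}
        for g in set(groups):
            members = [d for d, grp in zip(decisions, groups) if grp == g]
            rate[g] = sum(members) / len(members)
        rates = sorted(rate.values())
        return rates[-1] - rates[0]  # 0.0 means all groups are treated equally

    # Hypothetical example: group "B" receives positive decisions less often.
    decisions = [1, 1, 0, 1, 0, 0, 0, 1]
    groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
    print(demographic_parity_difference(decisions, groups))  # prints 0.5

A nonzero value only signals a rate gap under this particular fairness notion; other operationalizations (e.g., equalized error rates) can give different verdicts on the same model.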

The development process of an AI system consists of several steps; the collection and processing of data is only one of them. Errors can occur at every step and affect later ones. This means that control processes are needed at every step and at the transitions between them. In addition, responsibilities must be assigned at all these points so that it is clear who must react to errors and problems. For this purpose, the concept of the "Long Chain of Responsibilities" is introduced, which helps to clarify these responsibilities.

In this talk, we will discuss how discrimination can enter an AI system, who is responsible for it, and how discrimination can be operationalized.

Marc Hauer is a PhD candidate researching how to make software development processes and AI products accountable.

Additionally, he works as a media education consultant for the Landesmedienzentrum Baden-Württemberg, educating students, parents, and teachers about computer science and society, and as an AI consultant for TrustedAI GmbH, advising companies.

Recordings and Slides

To get access to the recordings and slides of the session, please enter the password we sent you in the confirmation email for your registration.

If you have any questions or problems with entering the password, please contact us via prodabi@mail.uni-paderborn.de.
