Tuesday, December 5th, 2023 | 6:15 p.m.
hybrid: on-site at ESA 1, W 221 or via webinar access
Western societies are marked by diverse and extensive biases and inequalities that are unavoidably embedded in the data used to train machine learning systems. Algorithms trained on biased data will, without intervention, produce biased outcomes and increase the inequality experienced by historically disadvantaged groups.
To tackle this issue, the European Commission recently published the Artificial Intelligence Act – the world’s first comprehensive framework to regulate AI. The new proposal contains several provisions that require bias testing and monitoring. But is Europe ready for this task?
In this session, I will examine several EU legal frameworks, including data protection and non-discrimination law, and demonstrate how, despite best attempts, they fail to protect us against the novel risks posed by AI. I will also explain how current technical fixes such as bias tests – which are often developed in the US – are not only insufficient to protect marginalised groups but also clash with the legal requirements in Europe.
I will then introduce some of the solutions I have developed to test for bias, explain black-box decisions, and protect privacy – solutions that have been implemented by tech companies such as Google, Amazon, Vodafone, and IBM and have fed into public policy recommendations and legal frameworks around the world.
Prof. Dr. Sandra Wachter (Oxford Internet Institute, University of Oxford, GB)
Sandra Wachter is Professor of Technology and Regulation at the Oxford Internet Institute at the University of Oxford where she researches the legal and ethical implications of AI, Big Data, and robotics as well as Internet and platform regulation. Her current research focuses on profiling, inferential analytics, explainable AI, algorithmic bias, diversity, and fairness, as well as governmental surveillance, predictive policing, human rights online, and health tech and medical law.
At the OII, Professor Sandra Wachter leads and coordinates the Governance of Emerging Technologies (GET) Research Programme, which investigates legal, ethical, and technical aspects of AI, machine learning, and other emerging technologies.