As the number of job applications increases, hiring managers have turned to artificial intelligence (AI) to help them make decisions faster, and perhaps better. Where once each manager did their own first rough cut of the files, now third-party software algorithms sort applications for many firms. Human mistakes are inevitable, but fortunately heterogeneous. Not so with machine decision-making. Relying on the same AI systems means that each firm makes the same mistakes and suffers from the same biases. When the same person encounters the same model again and again, or models trained on the same dataset, she might be wrongly rejected again and again. In this talk, I will argue that it is wrong to allow the quirks of an algorithmic system to consistently exclude a small number of people from consequential opportunities, and I will suggest solutions that can help ameliorate the harm to individuals.
Prof. Dr. Kathleen A. Creel (Northeastern University, Boston, MA, USA)
Kathleen Creel is an Assistant Professor at Northeastern University, cross-appointed between the Department of Philosophy and Religion and the Khoury College of Computer Sciences. Her research explores the moral, political, and epistemic implications of machine learning as it is used in non-state automated decision-making and in science. She co-leads Northeastern’s AI and Data Ethics Training Program and is a winner of the International Association for Computing and Philosophy’s 2023 Herbert Simon Award.
Western societies are marked by diverse and extensive biases and inequality that are unavoidably embedded in the data used to train machine learning systems. Algorithms trained on biased data will, without intervention, produce biased outcomes and increase the inequality experienced by historically disadvantaged groups.
To tackle this issue, the European Commission recently published the Artificial Intelligence Act – the world’s first comprehensive framework to regulate AI. The new proposal contains several provisions that require bias testing and monitoring. But is Europe ready for this task?
In this session, I will examine several EU legal frameworks, including data protection and non-discrimination law, and demonstrate how, despite best attempts, they fail to protect us against the novel risks posed by AI. I will also explain how current technical fixes such as bias tests – which are often developed in the US – are not only insufficient to protect marginalised groups but also clash with the legal requirements in Europe.
I will then introduce some of the solutions I have developed to test for bias, explain black-box decisions, and protect privacy. These solutions have been implemented by tech companies such as Google, Amazon, Vodafone, and IBM and have fed into public policy recommendations and legal frameworks around the world.
Prof. Dr. Sandra Wachter (Oxford Internet Institute, University of Oxford, GB)
Sandra Wachter is Professor of Technology and Regulation at the Oxford Internet Institute at the University of Oxford, where she researches the legal and ethical implications of AI, Big Data, and robotics, as well as Internet and platform regulation. Her current research focuses on profiling, inferential analytics, explainable AI, algorithmic bias, diversity, and fairness, as well as governmental surveillance, predictive policing, human rights online, and health tech and medical law.
At the OII, Professor Sandra Wachter leads and coordinates the Governance of Emerging Technologies (GET) Research Programme that investigates legal, ethical, and technical aspects of AI, machine learning, and other emerging technologies.
Institutions
Universität Hamburg
Adeline Scharfenberg