Professor Dr. Carsten Gerner-Beuerle, Professor of Commercial Law, Faculty of Laws, University College London
The Network for Artificial Intelligence and Law (NAIL) invites you to its next event. We are delighted to welcome Professor Dr. Carsten Gerner-Beuerle, Faculty of Laws, University College London. He will talk about the difficulties of assessing the precise risks to health, safety, fundamental rights, and the rule of law when regulating artificial intelligence. The lecture will be followed by a discussion of the topic. The event will be held in English.
After the lecture and discussion, we would like to invite you to end the evening with us in a relaxed atmosphere, with pretzels and wine in the south lounge.
You can participate in person or online. Please register for the event using the following link: Registration
Prof. Dr. Ibo van de Poel, Delft University of Technology, NL
Value alignment is important to ensure that AI systems remain aligned with human intentions, preferences, and values. It has been suggested that alignment is best achieved by building AI systems that can track preferences or values in real time. In my talk, I argue against this idea of real-time value alignment. First, I show that the value alignment problem is not unique to AI but applies to any technology, which opens up alternative strategies for attaining value alignment. Next, I argue that, due to uncertainty about appropriate alignment goals, real-time value alignment may lead to harmful optimization and will therefore likely do more harm than good. Instead, it is better to base value alignment on a fallibilist epistemology, which assumes that complete certainty about the proper target of value alignment is, and will remain, impossible. Three alternative principles for AI value alignment are proposed: 1) adopt a fallibilist epistemology regarding the target of value alignment; 2) focus on preventing serious misalignments rather than aiming for perfect alignment; 3) keep AI systems under human control even if this comes at the cost of full value alignment.
Prof. Dr. Kate Vredenburgh, London School of Economics, GB
Taming the Machines — Horizons of Artificial Intelligence. The Ethics in Information Technology Public Lecture Series
This summer's "Taming the Machines" lecture series sheds light on the ethical, political, legal, and societal dimensions of Artificial Intelligence (AI).
Prof. Dr. Philipp Hacker, European University Viadrina, Frankfurt (Oder), DE
Current AI regulation in the EU and globally focuses on trustworthiness and accountability, as seen in the AI Act and the AI Liability instruments. Yet it overlooks a critical aspect: environmental sustainability. This talk addresses this gap by examining the ICT sector's significant environmental impact. AI technologies, particularly generative models such as GPT-4, contribute substantially to global greenhouse gas emissions and water consumption.
The talk assesses how existing and proposed regulations, including EU environmental laws and the GDPR, can be adapted to prioritize sustainability. It advocates a comprehensive approach to sustainable AI regulation that goes beyond the mere transparency mechanisms for disclosing AI systems' environmental footprint proposed in the EU AI Act. The regulatory toolkit must include co-regulation, sustainability-by-design principles, data usage restrictions, and consumption limits, potentially integrating AI into the EU Emissions Trading Scheme. This multidimensional strategy offers a blueprint that can be adapted to other high-emission technologies and infrastructures, such as blockchain, the metaverse, and data centers. Arguably, it is crucial for tackling the twin transformations of our society: digitization and climate change mitigation.
Taming the Machines — Horizons of Artificial Intelligence. The Ethics in Information Technology Public Lecture Series
This summer's "Taming the Machines" lecture series sheds light on the ethical, political, legal, and societal dimensions of Artificial Intelligence (AI).
Prof. Dr. Aimee van Wynsberghe, Rheinische Friedrich-Wilhelms-Universität Bonn, DE
Speaker: Prof. Dr. Elena Esposito, Universität Bielefeld, DE
Professor Ignacio Cofone, McGill University
The Hamburg Network for Artificial Intelligence and Law (NAIL) invites you to its next event. We are pleased to welcome Professor Ignacio Cofone from McGill University in Montreal, Canada. He will present his latest book, in which he demonstrates why our legal system is unable to adequately protect our privacy in the face of new data-driven technologies such as AI. The lecture will be followed by a discussion of the topic. The event will be held in English.
The event will take place in person at the University of Hamburg, Faculty of Law, Rothenbaumchaussee 33, Room A125. No registration is required for in-person attendance.
There is also the option of participating online. Please sign up by emailing nail@ile-hamburg.de.
Institutions
Research Center for Computational Legal Studies and Legal Technology at Bucerius Law School
Universität Hamburg
Adeline Scharfenberg