 
                            Prof. Dr. Ibo van de Poel, Delft University of Technology, NL
Value alignment is important to ensure that AI systems remain aligned with human intentions, preferences, and values. It has been suggested that it can best be achieved by building AI systems that can track preferences or values in real time. In my talk, I argue against this idea of real-time value alignment. First, I show that the value alignment problem is not unique to AI but applies to any technology, thus opening up alternative strategies for attaining value alignment. Next, I argue that, due to uncertainty about appropriate alignment goals, real-time value alignment may lead to harmful optimization and will therefore likely do more harm than good. Instead, it is better to base value alignment on a fallibilist epistemology, which assumes that complete certainty about the proper target of value alignment is and will remain impossible. Three alternative principles for AI value alignment are proposed: 1) adopt a fallibilist epistemology regarding the target of value alignment; 2) focus on preventing serious misalignments rather than aiming for perfect alignment; 3) keep AI systems under human control, even if this comes at the cost of full value alignment.
 
                            Prof. Dr. Kate Vredenburgh, London School of Economics, GB
 
                            About the lecture
tbd
About the speaker
Jocelyn Maclure is Full Professor of Philosophy and Jarislowsky Chair in Human Nature and Technology at McGill University. His current work addresses various topics in the philosophy of artificial intelligence and in social epistemology. In 2023, he was Mercator Visiting Professor for AI in the Human Context at the University of Bonn. His recent articles appeared in journals such as Minds & Machines, AI & Ethics, AI & Society, and Digital Society. He was the president of the Quebec Ethics in Science and Technology Commission, an advisory body of the Quebec Government, from 2017 to 2024. Before turning his attention to the philosophy of AI, he published extensively in moral and political philosophy, including, with Charles Taylor, Secularism and Freedom of Conscience (Harvard University Press, 2011). He was elected to the Royal Society of Canada in 2023.
The talk explores the question of whether Artificial Intelligence (AI) can truly create art, or whether there is an essential “human factor” in art production. Against the background of AI’s growing capabilities, traditional concepts in art theory such as authorship are reconsidered. It is argued that authorship is a necessary condition for art, while aesthetic responsibility is at least a necessary condition for authorship of artworks. Although AI can function as an aesthetic agent, it cannot bear aesthetic responsibility. Therefore, it can neither on its own nor in cooperation with humans be the author of artworks. However, AI is able to produce objects that are in their manifest properties indistinguishable from works of art; I will speak of “fake art.” It will be shown to what extent the massive occurrence of AI-generated fake art has a detrimental effect on art practice.
 
 
                            Taming the Machines — Horizons of Artificial Intelligence. The Ethics in Information Technology Public Lecture Series
This summer’s “Taming the Machines” lecture series sheds light on the ethical, political, legal, and societal dimensions of Artificial Intelligence (AI).
 
                            Prof. Dr. Philipp Hacker, European University Viadrina, Frankfurt (Oder), DE
Current AI regulation in the EU and globally focuses on trustworthiness and accountability, as seen in the AI Act and the AI Liability instruments. Yet it overlooks a critical aspect: environmental sustainability. This talk addresses this gap by examining the ICT sector's significant environmental impact. AI technologies, particularly generative models like GPT-4, contribute substantially to global greenhouse gas emissions and water consumption.
The talk assesses how existing and proposed regulations, including EU environmental laws and the GDPR, can be adapted to prioritize sustainability. It advocates for a comprehensive approach to sustainable AI regulation that goes beyond mere transparency mechanisms for disclosing AI systems' environmental footprint, as proposed in the EU AI Act. The regulatory toolkit must include co-regulation, sustainability-by-design principles, data usage restrictions, and consumption limits, potentially integrating AI into the EU Emissions Trading Scheme. This multidimensional strategy offers a blueprint that can be adapted to other high-emission technologies and infrastructures, such as blockchain, the metaverse, or data centers. Arguably, it is crucial for tackling the twin transformations of our society: digitization and climate change mitigation.
 
                            Prof. Dr. José van Dijck (Utrecht University, NL)
The growing dominance of two global platform ecosystems has left European countries to rely on American and Chinese digital infrastructures. This dependency does not just affect markets and labor relations; it is also transforming social practices and affecting democracies. While two large ecosystems fight for information control in the global online world, the European perspective on digital infrastructures is focused on regulation rather than on building alternatives. With emerging technologies such as generative AI (ChatGPT, Bard) and geopolitical changes, the infrastructural perspective becomes more pressing. How can Europe achieve sovereignty in the digital world?
This lecture takes up two questions. First, what public values are fundamental to Europe’s platform societies? Values such as privacy, security, transparency, equality, public trust, and (institutional, professional) autonomy are important principles upon which the design of platform architectures should be based. Second, what are the responsibilities of companies, governments, and citizens in building an alternative, sustainable platform ecosystem based on those public values?
 
                            Prof. Dr. Sven Ove Hansson (Uppsala University, SE)
tbd
                            Speaker: Prof. Dr. Elena Esposito, Universität Bielefeld, DE
 
                            Prof. Dr. Darian Meacham (Maastricht University, NL)