EVENTS

Our events in the areas of Big Data and Research Innovation cover a diverse set of topics, such as Future, Strategy, Technology, Applications, and Management.

If you feel that your event or event series should be part of this event calendar, just contact us!

Tuesday, January 27th, 2024 | 18:15 - 19:45

Public Lecture Series: Taming the Machines. A Fallibilist Approach to AI Value Alignment

UHH, Main Building, West Wing, Edmund-Siemers-Allee 1, Room 221
Artificial Intelligence (AI) technologies have become central to numerous aspects of our lives and are significantly reshaping them: our homes and workplaces, industry at large, schools and academia, but also government, law enforcement, and warfare. While AI technologies present many opportunities, they have also been shown to reinforce existing injustices, threaten human rights, and exacerbate the climate crisis. This raises the question: How can we collectively and meaningfully shape the digital society we live in, and who gets to decide on the agenda?
This lecture series invites viewpoints from different relevant disciplines to explore how we can preserve and advance human values through the development and use of AI technologies. Key questions include: How does AI impact our fundamental social, political, and economic structures? What does it mean to lead a meaningful life in the AI age? What design and regulatory decisions should we make to ensure digital transformations are fair and sustainable?  
To explore these and other related questions, this public lecture series invites distinguished international researchers to present and discuss their work. For the latest updates and details on how to attend the lectures, please visit http://uhh.de/inf-eit.
 

Prof. Dr. Ibo van de Poel, Delft University of Technology, NL

Value alignment is important to ensure that AI systems remain aligned with human intentions, preferences, and values. It has been suggested that it can best be achieved by building AI systems that can track preferences or values in real time. In my talk, I argue against this idea of real-time value alignment. First, I show that the value alignment problem is not unique to AI, but applies to any technology, thus opening up alternative strategies for attaining value alignment. Next, I argue that due to uncertainty about appropriate alignment goals, real-time value alignment may lead to harmful optimization and therefore will likely do more harm than good. Instead, it is better to base value alignment on a fallibilist epistemology, which assumes that complete certainty about the proper target of value alignment is and will remain impossible. Three alternative principles for AI value alignment are proposed: 1) adopt a fallibilist epistemology regarding the target of value alignment; 2) focus on preventing serious misalignments rather than aiming for perfect alignment; 3) keep AI systems under human control, even if this comes at the cost of full value alignment.

Institutions

  • UHH

Universität Hamburg
Adeline Scharfenberg
This email address is protected against spambots. JavaScript must be enabled to display it.
