With the launch of ChatGPT last year and the ensuing debate about the benefits and potential risks of generative AI, work on the European AI Act also shifted into a higher gear. The European Council and Parliament, working on their respective compromise texts, had to find ways to accommodate this new phenomenon. The attempts to adapt the AI Act went hand in hand with a lively public debate on what was so new and different about generative AI, whether it raised new, not yet anticipated risks, and how best to address a technology whose societal implications are not yet well understood. Most importantly: was the AI Act outdated even before it was adopted? In my presentation I will discuss the approaches that the Council and Parliament adopted to governing generative AI, the most salient points of discussion, and the solutions proposed for some of the key ethical and societal concerns around the rise of generative AI.
Prof. Dr. Natali Helberger (Universiteit van Amsterdam, NL)
Natali Helberger is Distinguished University Professor of Law and Digital Technology, with a special focus on AI, at the University of Amsterdam and a member of the Institute for Information Law (IViR). Her research on AI and automated decision systems focuses on their impact on society and governance. Helberger co-founded the Research Priority Area Information, Communication, and the Data Society, which has played a leading role in shaping the international discussion on digital communication and platform governance. She is a founding member of the Human(e) AI research program and leads the Digital Transformation Initiative at the Faculty of Law. Since 2021, Helberger has also been director of the AI, Media & Democracy Lab, and since 2022, scientific director of the Algosoc (Public Values in the Algorithmic Society) Gravitation Consortium. A major focus of the Algosoc program is to mentor and train the next generation of interdisciplinary researchers. She is a member of several national and international research groups and committees, including the Council of Europe's Expert Group on AI and Freedom of Expression.
Institutions
The bAIome Center for Biomedical AI (UKE) and the Bernhard Nocht Institute for Tropical Medicine (BNITM) will host the seminar series “AI in Biology and Medicine”. The series aims to reach a broad audience and to promote cross-institutional collaboration. Our expert speakers will give an overview of, and insight into, particular AI and data science methods being developed in key areas of biology and medicine. Drinks and snacks will follow each seminar to facilitate exchange.
Fatemeh Hadaeghi, Institute of Computational Neuroscience, UKE
For further details and hybrid links, please visit the webpage “AI in Biology & Medicine”.
The presentation series “Train your engineering network” (TYEN) covers diverse topics in machine learning and addresses everyone interested at TUHH, among the MLE partners, and in the wider Hamburg region. It aims to promote the exchange of information and knowledge and to foster networking in a relaxed atmosphere. In this way, the machine learning activities within MLE, at TUHH, and in the surrounding region become more visible, collaborations are encouraged, and interested students gain insight into the field.
Organisers:
The series will continue in the summer semester of 2025. If you are interested in presenting, please contact the organisers.
The lectures will be in English.
Prof. Dr. Ibo van de Poel, Delft University of Technology, NL
Value alignment is important to ensure that AI systems remain aligned with human intentions, preferences, and values. It has been suggested that it can best be achieved by building AI systems that can track preferences or values in real-time. In my talk, I argue against this idea of real-time value alignment. First, I show that the value alignment problem is not unique to AI, but applies to any technology, thus opening up alternative strategies for attaining value alignment. Next, I argue that due to uncertainty about appropriate alignment goals, real-time value alignment may lead to harmful optimization and therefore will likely do more harm than good. Instead, it is better to base value alignment on a fallibilist epistemology, which assumes that complete certainty about the proper target of value alignment is and will remain impossible. Three alternative principles for AI value alignment are proposed: 1) adopt a fallibilist epistemology regarding the target of value alignment; 2) focus on preventing serious misalignments rather than aiming for perfect alignment; 3) retain AI systems under human control even if it comes at the cost of full value alignment.
Prof. Dr. Kate Vredenburgh, London School of Economics, GB
Taming the Machines — Horizons of Artificial Intelligence. The Ethics in Information Technology Public Lecture Series
This summer's “Taming the Machines” lecture series sheds light on the ethical, political, legal, and societal dimensions of Artificial Intelligence (AI).
Prof. Dr. Louise Amoore, Durham University, Durham, UK
Prof. Dr. Philipp Hacker, European University Viadrina, Frankfurt (Oder), DE
Current AI regulations in the EU and globally focus on trustworthiness and accountability, as seen in the AI Act and AI Liability instruments. Yet they overlook a critical aspect: environmental sustainability. This talk addresses this gap by examining the ICT sector's significant environmental impact. AI technologies, particularly generative models like GPT-4, contribute substantially to global greenhouse gas emissions and water consumption.
The talk assesses how existing and proposed regulations, including EU environmental laws and the GDPR, can be adapted to prioritize sustainability. It advocates a comprehensive approach to sustainable AI regulation, beyond the mere transparency mechanisms for disclosing AI systems' environmental footprint proposed in the EU AI Act. The regulatory toolkit must include co-regulation, sustainability-by-design principles, data usage restrictions, and consumption limits, potentially integrating AI into the EU Emissions Trading Scheme. This multidimensional strategy offers a blueprint that can be adapted to other high-emission technologies and infrastructures, such as blockchain, the metaverse, and data centers. Arguably, it is crucial for tackling the twin key transformations of our society: digitization and climate change mitigation.
Prof. Dr. Mathias Risse, John F. Kennedy School of Government, Harvard University, Cambridge, MA, USA
Prof. Dr. Andra Siibak, University of Tartu, Tartu, Estonia
Present-day children’s futures are decided by algorithms that predict their probability of success at school, their suitability for a job, their likely recidivism, or their mental health problems. Advances in predictive analytics, artificial intelligence (AI) systems, and behavioral and biometric technologies are being aggressively used for monitoring, aggregating, and analyzing children’s data. Such dataveillance, happening in homes, schools, and peer networks alike, has a profound impact not only on children’s preferences, social relations, life chances, rights, and privacy but also on the "future of human agency - and ultimately, of society and culture" (Mascheroni & Siibak 2021: 169).
Building on the findings of my various empirical case studies, I will show how popular digital parenting practices and the growing datafication of the education sector create not only hypothetical data scares but also real data scars in the lives of the young.
Vincent C. Müller is AvH Professor for Philosophy and Ethics of AI and Director of the Centre for Philosophy and AI Research (PAIR) at FAU Erlangen-Nuremberg
It is now frequently observed that the discipline of AI ethics has no proper scope and no proper method. This has become an obstacle to the discipline's development towards maturity, which requires canonical problems, positions, and arguments that secure steps forward. We propose a minimal yet universal view of the field (Müller 2020). Given this proposal, we know the scope and the method, and we can appreciate the wide set of contributions.
Prof. Dr. Aimee van Wynsberghe, Rheinische Friedrich-Wilhelms-Universität Bonn, DE
Speaker: Prof. Dr. Elena Esposito, Universität Bielefeld, DE
Explore the transformative potential of the Population Dynamics Foundation Model (PDFM), a cutting-edge AI model designed to capture complex, multidimensional interactions among human behaviors, environmental factors, and local contexts. This workshop provides an in-depth introduction to PDFM Embeddings and their applications in geospatial analysis, public health, and socioeconomic modeling.
Participants will gain hands-on experience with PDFM Embeddings, performing advanced geospatial predictions and analyses while ensuring privacy through the use of aggregated data.
By the end of this workshop, participants will have a strong foundation in utilizing PDFM Embeddings to address real-world geospatial challenges.
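The workflow the workshop introduces — using precomputed, aggregated embeddings as features for a downstream geospatial prediction — can be sketched roughly as follows. This is a minimal illustration, not part of the workshop materials: the synthetic embeddings, the target indicator, and the ridge-regression model are all placeholder assumptions standing in for the real PDFM Embeddings and whichever downstream model a participant chooses.

```python
import numpy as np

# Synthetic stand-in for PDFM-style embeddings: one fixed-length
# vector per region (e.g. per postal code), plus a target indicator
# (e.g. a public-health rate) known only for some regions.
rng = np.random.default_rng(0)
n_regions, dim = 200, 32
embeddings = rng.normal(size=(n_regions, dim))

# Simulate a target that depends linearly on the embeddings plus noise.
true_w = rng.normal(size=dim)
target = embeddings @ true_w + 0.1 * rng.normal(size=n_regions)

# Train on 150 regions, predict the 50 held-out regions — the typical
# "fill in regions with missing data" use of such embeddings.
X_train, y_train = embeddings[:150], target[:150]
X_test, y_test = embeddings[150:], target[150:]

# Ridge regression in closed form: w = (X^T X + alpha*I)^{-1} X^T y
alpha = 1.0
w = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(dim),
                    X_train.T @ y_train)
pred = X_test @ w

# Report how well the held-out regions are recovered.
r = np.corrcoef(pred, y_test)[0, 1]
print(f"hold-out correlation: {r:.3f}")
```

Any regressor could take the place of the closed-form ridge step; the key design point is that the embeddings are aggregated at the region level, so no individual-level data ever enters the model.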
Institution
Universität Hamburg
Adeline Scharfenberg