With the launch of ChatGPT last year and the ensuing debate about the benefits and potential risks of generative AI, work on the European AI Act also shifted into a higher gear. The European Council and Parliament, working on their respective compromise texts, had to find ways to accommodate this new phenomenon. The attempts to adapt the AI Act went hand in hand with a lively public debate on what was so new and different about generative AI, whether it raised new, not yet anticipated risks, and how best to address a technology whose societal implications are not yet well understood. Most importantly: was the AI Act outdated even before it was adopted? In my presentation, I will discuss the different approaches that the Council and Parliament adopted to governing generative AI, the most salient points of discussion, and the different approaches proposed to solve some of the key ethical and societal concerns around the rise of generative AI.
Prof. Dr. Natali Helberger (Universiteit van Amsterdam, NL)
Natali Helberger is Distinguished University Professor of Law and Digital Technology, with a special focus on AI, at the University of Amsterdam and a member of the Institute for Information Law (IViR). Her research on AI and automated decision systems focuses on its impact on society and governance. Helberger co-founded the Research Priority Area Information, Communication, and the Data Society, which has played a leading role in shaping the international discussion on digital communication and platform governance. She is a founding member of the Human(e) AI research program and leads the Digital Transformation Initiative at the Faculty of Law. Since 2021, Helberger has also been director of the AI, Media & Democracy Lab, and since 2022, scientific director of the Algosoc (Public Values in the Algorithmic Society) Gravitation Consortium. A major focus of the Algosoc program is to mentor and train the next generation of interdisciplinary researchers. She is a member of several national and international research groups and committees, including the Council of Europe's Expert Group on AI and Freedom of Expression.
Institutions
The bAIome Center for Biomedical AI (UKE) and the Bernhard Nocht Institute for Tropical Medicine (BNITM) will host the seminar series “AI in Biology and Medicine”. The series aims to reach a broad audience and promote cross-institutional collaboration. Our expert speakers will give an overview of, and insight into, particular AI and data-science methods being developed in key areas of biology and medicine. Drinks and snacks will follow each seminar to facilitate exchange.
Fatemeh Hadaeghi, Institute of Computational Neuroscience, UKE
For further details and hybrid links, please go to the webpage AI in Biology & Medicine
The presentation series “Train your engineering network” (TYEN) on diverse topics in machine learning is aimed at everyone interested at TUHH, among the MLE partners, and in the Hamburg region in general. It seeks to promote the exchange of information and knowledge, as well as networking, in a relaxed atmosphere. In this way, the machine learning activities within MLE, at TUHH, and in the wider region become more visible, cooperation is encouraged, and interested students gain insight into the field.
Organisers:
The series will be continued in the summer semester 2025. If you are interested in presenting, please contact the organizers.
The lectures will be in English.
Prof. Dr. Ibo van de Poel, Delft University of Technology, NL
Value alignment is important to ensure that AI systems remain aligned with human intentions, preferences, and values. It has been suggested that it can best be achieved by building AI systems that can track preferences or values in real-time. In my talk, I argue against this idea of real-time value alignment. First, I show that the value alignment problem is not unique to AI, but applies to any technology, thus opening up alternative strategies for attaining value alignment. Next, I argue that due to uncertainty about appropriate alignment goals, real-time value alignment may lead to harmful optimization and therefore will likely do more harm than good. Instead, it is better to base value alignment on a fallibilist epistemology, which assumes that complete certainty about the proper target of value alignment is and will remain impossible. Three alternative principles for AI value alignment are proposed: 1) adopt a fallibilist epistemology regarding the target of value alignment; 2) focus on preventing serious misalignments rather than aiming for perfect alignment; 3) retain AI systems under human control even if it comes at the cost of full value alignment.
Institutions
Prof. Dr. Kate Vredenburgh, London School of Economics, GB
Institutions
Taming the Machines — Horizons of Artificial Intelligence. The Ethics in Information Technology Public Lecture Series
This summer’s “Taming the Machines” lecture series sheds light on the ethical, political, legal, and societal dimensions of Artificial Intelligence (AI).
Prof. Dr. Louise Amoore, Durham University, Durham, UK
Institutions
Prof. Dr. Philipp Hacker, European University Viadrina, Frankfurt (Oder), DE
Current AI regulations in the EU and globally focus on trustworthiness and accountability, as seen in the AI Act and the AI Liability instruments. Yet they overlook a critical aspect: environmental sustainability. This talk addresses this gap by examining the ICT sector's significant environmental impact. AI technologies, particularly generative models like GPT-4, contribute substantially to global greenhouse gas emissions and water consumption.
The talk assesses how existing and proposed regulations, including EU environmental laws and the GDPR, can be adapted to prioritize sustainability. It advocates for a comprehensive approach to sustainable AI regulation, beyond the mere transparency mechanisms for disclosing AI systems' environmental footprint proposed in the EU AI Act. The regulatory toolkit must include co-regulation, sustainability-by-design principles, data usage restrictions, and consumption limits, potentially integrating AI into the EU Emissions Trading Scheme. This multidimensional strategy offers a blueprint that can be adapted to other high-emission technologies and infrastructures, such as blockchain, the metaverse, or data centers. Arguably, it is crucial for tackling the twin key transformations of our society: digitization and climate change mitigation.
Institutions
Prof. Dr. Mathias Risse, John F. Kennedy School of Government, Harvard University, Cambridge, MA, USA
Institutions
Prof. Dr. Andra Siibak, University of Tartu, Tartu, Estonia
Present-day children’s futures are decided by algorithms predicting their probability of success at school, their suitability for a job position, their likely recidivism, or their mental health problems. Advances in predictive analytics, artificial intelligence (AI) systems, and behavioral and biometric technologies have begun to be used aggressively for monitoring, aggregating, and analyzing children’s data. Such dataveillance, happening in homes, schools, and peer networks alike, has a profound impact not only on children’s preferences, social relations, life chances, rights, and privacy, but also on the "future of human agency - and ultimately, of society and culture" (Mascheroni & Siibak 2021: 169).
Building upon the findings of my different empirical case studies, I will show how popular digital parenting practices and the growing datafication of the education sector could create not only hypothetical data scares but also lead to real data scars in the lives of the young.
Institutions
Vincent C. Müller is AvH Professor for Philosophy and Ethics of AI and Director of the Centre for Philosophy and AI Research (PAIR) at FAU Erlangen-Nuremberg
It is now frequently observed that the discipline of AI ethics has neither a proper scope nor a proper method. This has become an obstacle to the discipline's development towards maturity, e.g. towards canonical problems, positions, arguments … secure steps forward. We propose a minimal yet universal view of the field (again Müller 2020). Given this proposal, we will know the scope and the method, and we can appreciate the wide set of contributions.
Institutions
Prof. Dr. Aimee van Wynsberghe, Rheinische Friedrich-Wilhelms-Universität Bonn, DE
Institutions
Speaker: Prof. Dr. Elena Esposito, Universität Bielefeld, DE
Institutions
Explore the transformative potential of the Population Dynamics Foundation Model (PDFM), a cutting-edge AI model designed to capture complex, multidimensional interactions among human behaviors, environmental factors, and local contexts. This workshop provides an in-depth introduction to PDFM Embeddings and their applications in geospatial analysis, public health, and socioeconomic modeling.
Participants will gain hands-on experience with PDFM Embeddings to perform advanced geospatial predictions and analyses while ensuring privacy through the use of aggregated data. Key components of the workshop include:
By the end of this workshop, participants will have a strong foundation in utilizing PDFM Embeddings to address real-world geospatial challenges.
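The downstream recipe described above (frozen per-region embeddings plus a lightweight supervised model) can be sketched as follows. This is an illustrative assumption about the workflow, not workshop material: the embedding matrix and labels are synthetic stand-ins for the released PDFM data, and all dimensions and names are hypothetical.

```python
import numpy as np

# Hypothetical stand-in for PDFM embeddings: in practice these would be
# loaded from the released embeddings file (one fixed-length vector per region).
rng = np.random.default_rng(42)
n_regions, dim = 200, 16
embeddings = rng.normal(size=(n_regions, dim))

# Hypothetical ground-truth labels for the regions (e.g. a health indicator),
# simulated here as a noisy linear function of the embeddings.
true_w = rng.normal(size=dim)
labels = embeddings @ true_w + 0.1 * rng.normal(size=n_regions)

# Split regions into train and held-out sets.
train, test = np.arange(150), np.arange(150, n_regions)

# Ridge regression on top of the frozen embeddings: the typical pattern is a
# simple linear or tree-based model over the embedding vectors.
lam = 1.0
A = embeddings[train].T @ embeddings[train] + lam * np.eye(dim)
w = np.linalg.solve(A, embeddings[train].T @ labels[train])

# Predict for held-out regions and report R^2.
pred = embeddings[test] @ w
r2 = 1 - np.sum((labels[test] - pred) ** 2) / np.sum(
    (labels[test] - labels[test].mean()) ** 2
)
print(f"held-out R^2: {r2:.3f}")
```

The point of the design is that the heavy lifting (encoding population dynamics into the embedding) is done once upstream; downstream tasks only need a small model and aggregated, privacy-preserving inputs.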
Institution
Guest lecture by Lin Jia, Senior Data Scientist at Booking.com. She will join our seminar on causal machine learning on 8 July 2024 and present past and current projects on causal inference and causal machine learning at Booking.com.
Booking.com is one of the world's leading digital travel platforms and has a strong team of data scientists who are experts in applying and developing methods for causal analysis and machine learning in industry.
About the speaker: Lin Jia is a senior data scientist at Booking.com. She specializes in using observational causal approaches to evaluate the impact of product changes and leads the initiative for robust and transparent causal analyses at Booking.com. She will share her experience from various causal inference projects at Booking.com.
The talk is open to all interested researchers and students who would like to learn about industry activities in causal data science.
Institutions
Gerhard Wellein: Application Knowledge Required: Performance Modeling for Fun and Profit & Axel Klawonn: What can machine learning be used for in domain decomposition methods?
Gerhard Wellein is a Professor for High Performance Computing at the Department for Computer Science of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) and holds a PhD in theoretical physics from the University of Bayreuth. He is a member of the board of directors of the German NHR-Alliance which coordinates the national HPC Tier-2 infrastructures at German universities. As a member of the scientific steering committees of the Leibniz Supercomputing Centre (LRZ) and the Gauss-Centre for Supercomputing (GCS) he is organizing and surveying the compute time application process for national HPC resources. Gerhard Wellein has more than twenty years of experience in teaching HPC techniques to students and scientists from computational science and engineering, is an external trainer in the Partnership for Advanced Computing in Europe (PRACE) and received the “2011 Informatics Europe Curriculum Best Practices Award” (together with Jan Treibig and Georg Hager) for outstanding teaching contributions. His research interests focus on performance modelling and performance engineering, architecture-specific code optimization, novel parallelization approaches and hardware-efficient building blocks for sparse linear algebra and stencil solvers.
Prof. Dr. Axel Klawonn heads the research group on numerical mathematics and scientific computing at the Universität zu Köln. The group works on the development of efficient numerical methods for the simulation of problems from computational science and engineering. This comprises the development of efficient algorithms, their theoretical analysis, and the implementation on large parallel computers with up to several hundreds of thousands of cores. A special focus in the applications is currently on problems from biomechanics/medicine, structural mechanics, and material science. The research is in the field of numerical methods for partial differential equations and high performance parallel scientific computing, including machine learning.
A multitude of ML tasks in particle physics, from unfolding detector effects to refining simulation and extrapolating background estimations, require mapping one arbitrary distribution to another. Several indirect methods have been developed to achieve this, such as classifier-based reweighting on a distribution level, or conditional generative models. However, training an ML model to perform a direct, deterministic mapping has long been a challenging prospect.
In this talk, I introduce the concept of Schrödinger Bridges, an ML architecture closely related to diffusion models, which enables a direct, deterministic mapping from one arbitrary distribution to another. I demonstrate two implementation approaches with differing strengths and present state-of-the-art results applying Schrödinger Bridges to unfolding and refinement tasks.
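A Schrödinger Bridge is beyond a short snippet, but the classifier-based reweighting baseline mentioned above can be sketched in a few lines. This is an illustrative toy, not the speaker's implementation: the 1-D Gaussian "source" and "target" samples and the simple logistic model are assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D "source" (e.g. simulation) and "target" (e.g. data) samples.
source = rng.normal(0.0, 1.0, size=(5000, 1))
target = rng.normal(0.5, 1.2, size=(5000, 1))

# Train a logistic-regression classifier to separate source (label 0)
# from target (label 1); quadratic features suffice for two Gaussians.
X = np.vstack([source, target])
y = np.concatenate([np.zeros(len(source)), np.ones(len(target))])
X_aug = np.hstack([X, X**2, np.ones_like(X)])

w = np.zeros(X_aug.shape[1])
for _ in range(2000):  # plain gradient descent on the logistic loss
    p = 1.0 / (1.0 + np.exp(-X_aug @ w))
    w -= 0.1 * X_aug.T @ (p - y) / len(y)

# The likelihood ratio p/(1-p) on source events reweights them towards
# the target distribution, without mapping individual events.
S_aug = np.hstack([source, source**2, np.ones_like(source)])
p_src = 1.0 / (1.0 + np.exp(-S_aug @ w))
weights = p_src / (1.0 - p_src)

# The weighted source mean should move from ~0.0 towards the target mean (~0.5).
print(np.average(source[:, 0], weights=weights))
```

The contrast motivating the talk: reweighting only adjusts event weights at the distribution level, whereas a Schrödinger Bridge learns a direct per-event map between the two distributions.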
Institutions
Angela Relógio, Medical School Hamburg (MSH)
For further details and hybrid links, please go to the webpage AI in Biology & Medicine
Christopher Gundler, Institute for Applied Medical Informatics, UKE
For further details and hybrid links, please go to the webpage AI in Biology & Medicine
Universität Hamburg
Adeline Scharfenberg