 
                            Environmental degradation remains one of the most pressing global challenges, threatening ecosystems, economies, and human well-being worldwide. To address this crisis, the International Telecommunication Union (ITU), in collaboration with its partners, is proud to announce the 2025 AI for Climate Action Innovation Factory. This initiative aims to harness the power of Artificial Intelligence (AI) to develop innovative solutions that help mitigate environmental impacts and support global adaptation efforts.
The 2025 edition aims to further advance the use of AI in addressing pressing environmental and sustainability challenges, promoting scalable, impactful, and inclusive AI-driven projects that can contribute to meaningful solutions aligned with global priorities and the targets of the Paris Agreement. The finale will take place at COP30 in Brazil, aligning with the conference’s mission to accelerate global environmental action.
Key objectives:
Institutions
 
                            Welcome to the ITMC-Conference 2025
This year, the ITMC-Conference centers on the theme "Resilient IT: resilience as the key to a crisis-proof and future-ready company amid political developments, the skilled-labor shortage, the climate crisis, a lack of diversity, and economic challenges."
The ITMC-Conference is the ideal opportunity for prospective students interested in the Master's program in IT-Management und -Consulting (ITMC) at Universität Hamburg. On 2 June 2025, we invite you to the Lichthof of the Staats- und Universitätsbibliothek to get an authentic look at the program's content, projects, and environment.
Experience engaging keynotes and talks by experts from academia and industry, learn about current trends in IT management and consulting, and talk directly with students, alumni, and faculty. The conference is organized by students of the ITMC program and offers the perfect platform to ask questions, make contacts, and find out how the program prepares you for a career in IT.
Take this chance to get to know one of the most innovative IT degree programs and become part of a committed community. We look forward to seeing you!
 
                            As AI projects gain traction in the humanitarian sector, securing their funding and long-term sustainability remains a critical challenge. This session explores how AI initiatives can align with the SDGs and address pressing climate concerns, while also examining innovative funding models and cross-sector partnerships. From philanthropic investments to public-private collaborations, join us to uncover strategies for ensuring AI projects not only launch successfully but also endure to create lasting, scalable impact in humanitarian efforts. Participants will gain insights into best practices for funding AI projects and explore case studies showcasing successful funding models and partnerships.
Key Learning Objectives:
Target Audience: This event is designed for humanitarian workers at all levels, policymakers, academics, data specialists, communication specialists, and technology experts who are involved in crisis response and interested in the ethical use of AI.
Prerequisites: No prior knowledge is required, though a basic understanding of AI and humanitarian principles is recommended.
 
                            Prof. Dr. Ibo van de Poel, Delft University of Technology, NL
Value alignment is important to ensure that AI systems remain aligned with human intentions, preferences, and values. It has been suggested that it can best be achieved by building AI systems that can track preferences or values in real-time. In my talk, I argue against this idea of real-time value alignment. First, I show that the value alignment problem is not unique to AI, but applies to any technology, thus opening up alternative strategies for attaining value alignment. Next, I argue that due to uncertainty about appropriate alignment goals, real-time value alignment may lead to harmful optimization and therefore will likely do more harm than good. Instead, it is better to base value alignment on a fallibilist epistemology, which assumes that complete certainty about the proper target of value alignment is and will remain impossible. Three alternative principles for AI value alignment are proposed: 1) adopt a fallibilist epistemology regarding the target of value alignment; 2) focus on preventing serious misalignments rather than aiming for perfect alignment; 3) retain AI systems under human control even if it comes at the cost of full value alignment.
 
                            Prof. Dr. Kate Vredenburgh, London School of Economics, GB
 
                            About the lecture
tbd
About the speaker
Jocelyn Maclure is Full Professor of Philosophy and Jarislowsky Chair in Human Nature and Technology at McGill University. His current work addresses various topics in the philosophy of artificial intelligence and in social epistemology. In 2023, he was Mercator Visiting Professor for AI in the Human Context at the University of Bonn. His recent articles have appeared in journals such as Minds & Machines, AI & Ethics, AI & Society, and Digital Society. He was president of the Quebec Ethics in Science and Technology Commission, an advisory body of the Quebec Government, from 2017 to 2024. Before turning his attention to the philosophy of AI, he published extensively in moral and political philosophy, including, with Charles Taylor, Secularism and Freedom of Conscience (Harvard University Press, 2011). He was elected to the Royal Society of Canada in 2023.
The talk explores whether Artificial Intelligence (AI) can truly create art, or whether there is an essential "human factor" in art production. Against the background of AI's growing capabilities, traditional concepts in art theory such as authorship are reconsidered. It is argued that authorship is a necessary condition for art, while aesthetic responsibility is at least a necessary condition for authorship of artworks. Although AI can function as an aesthetic agent, it cannot bear aesthetic responsibility. Therefore, it can be the author of artworks neither on its own nor in cooperation with humans. However, AI is able to produce objects that are, in their manifest properties, indistinguishable from works of art; I will speak of "fake art." It will be shown to what extent the massive occurrence of AI-generated fake art has a detrimental effect on art practice.
 
 
                            Prof. Dr. Philipp Hacker, European University Viadrina, Frankfurt (Oder), DE
Current AI regulation in the EU and globally focuses on trustworthiness and accountability, as seen in the AI Act and AI liability instruments. Yet it overlooks a critical aspect: environmental sustainability. This talk addresses this gap by examining the ICT sector's significant environmental impact. AI technologies, particularly generative models like GPT-4, contribute substantially to global greenhouse gas emissions and water consumption.
The talk assesses how existing and proposed regulations, including EU environmental laws and the GDPR, can be adapted to prioritize sustainability. It advocates a comprehensive approach to sustainable AI regulation that goes beyond mere transparency mechanisms for disclosing AI systems' environmental footprint, as proposed in the EU AI Act. The regulatory toolkit must include co-regulation, sustainability-by-design principles, data usage restrictions, and consumption limits, potentially integrating AI into the EU Emissions Trading Scheme. This multidimensional strategy offers a blueprint that can be adapted to other high-emission technologies and infrastructures, such as blockchain, the metaverse, or data centers. Arguably, it is crucial for tackling the twin transformations of our society: digitization and climate change mitigation.
 
                            Taming the Machines — Horizons of Artificial Intelligence. The Ethics in Information Technology Public Lecture Series
This summer's "Taming the Machines" lecture series sheds light on the ethical, political, legal, and societal dimensions of Artificial Intelligence (AI).
 
                            Prof. Dr. José van Dijck (Utrecht University, NL)
The growing dominance of two global platform ecosystems has left European countries reliant on American and Chinese digital infrastructures. This dependency is not just affecting markets and labor relations; it is also transforming social practices and affecting democracies. While two large ecosystems fight for information control in the global online world, the European perspective on digital infrastructures is focused on regulation rather than on building alternatives. With emerging technologies such as generative AI (ChatGPT, Bard) and geopolitical changes, the infrastructural perspective becomes more pressing. How can Europe achieve sovereignty in the digital world?
This lecture takes up two questions. First, what public values are fundamental to Europe’s platform societies? Values such as privacy, security, transparency, equality, public trust, and (institutional, professional) autonomy are important principles upon which the design of platform architectures should be based. Second, what are the responsibilities of companies, governments, and citizens in building an alternative, sustainable platform ecosystem based on those public values?
 
                            Prof. Dr. Sven Ove Hansson (Uppsala University, SE)
tbd
                            Speaker: Prof. Dr. Elena Esposito, Universität Bielefeld, DE
 
                            Prof. Dr. Darian Meacham (Maastricht University, NL)
 
                            The global mean surface temperature record combining sea surface and near-surface air data is central to understanding climate variability and change. Understanding the past record also helps constrain uncertainty in future climate projections. In my talk, I will present a recent study (Sippel et al., 2024, Nature, doi:10.1038/s41586-024-08230-1) that refines our view of the historical record and explore its implications for near-future climate risk.
Past temperature record: The early temperature record (before ~1950) remains uncertain due to evolving measurement methods, limited documentation, and sparse coverage. Independent reconstructions show that early ocean temperatures were likely measured too cold by about 0.26 °C compared to land estimates, despite strong agreement between the two in other periods. This cold bias cannot be explained by natural variability; multiple lines of evidence (climate attribution, timescale analysis, coastal data, palaeoclimate records) support a substantial cold bias in early ocean records. While overall warming since the mid-19th century is unchanged, correcting the bias reduces early-20th-century warming trends, lowers global decadal variability, and brings models and observations into closer alignment.
Constraining climate risk: I will close my talk by discussing how these findings sharpen near-future temperature projections and our understanding of climate risk; and furthermore how new AI methods may provide an even clearer picture of past climate and near-future climate risk.
Universität Hamburg
Adeline Scharfenberg