Welcome to the ITMC-Conference 2025
This year, the ITMC-Conference focuses on the theme "Resilient IT: Resilience as the Key to a Crisis-Proof and Future-Ready Company amid Political Developments, Skills Shortages, the Climate Crisis, a Lack of Diversity, and Economic Challenges".
The ITMC-Conference is the ideal opportunity for prospective students interested in the master's programme IT-Management und -Consulting (ITMC) at Universität Hamburg. On 2 June 2025 we invite you to the Lichthof of the Staats- und Universitätsbibliothek to gain an authentic insight into the programme's content, projects, and environment.
Experience engaging keynotes and talks by experts from academia and industry, learn about current trends in IT management and consulting, and talk directly with students, alumni, and lecturers. The conference is organised by students of the ITMC programme and offers you the perfect platform to ask questions, make contacts, and find out how the programme prepares you for a career in the IT world.
Take the chance to get to know one of the most innovative IT degree programmes and to become part of a committed community. We look forward to seeing you!
As AI projects gain traction in the humanitarian sector, securing their funding and long-term sustainability remains a critical challenge. This session explores how AI initiatives can align with the UN Sustainable Development Goals (SDGs) and address pressing climate concerns, while also examining innovative funding models and cross-sector partnerships. From philanthropic investments to public-private collaborations, join us to uncover strategies for ensuring that AI projects not only launch successfully but also endure to create lasting, scalable impact in humanitarian efforts. Participants will gain insights into best practices for funding AI projects and explore case studies showcasing successful funding models and partnerships.
Key Learning Objectives:
Target Audience: This event is designed for humanitarian workers at all levels, policymakers, academics, data specialists, communication specialists, and technology experts who are involved in crisis response and interested in the ethical use of AI.
Prerequisites: None required; a basic understanding of AI and humanitarian principles is recommended.
Prof. Dr. Ibo van de Poel, Delft University of Technology, NL
Value alignment is important to ensure that AI systems remain aligned with human intentions, preferences, and values. It has been suggested that it can best be achieved by building AI systems that can track preferences or values in real-time. In my talk, I argue against this idea of real-time value alignment. First, I show that the value alignment problem is not unique to AI, but applies to any technology, thus opening up alternative strategies for attaining value alignment. Next, I argue that due to uncertainty about appropriate alignment goals, real-time value alignment may lead to harmful optimization and therefore will likely do more harm than good. Instead, it is better to base value alignment on a fallibilist epistemology, which assumes that complete certainty about the proper target of value alignment is and will remain impossible. Three alternative principles for AI value alignment are proposed: 1) adopt a fallibilist epistemology regarding the target of value alignment; 2) focus on preventing serious misalignments rather than aiming for perfect alignment; 3) retain AI systems under human control even if it comes at the cost of full value alignment.
Prof. Dr. Kate Vredenburgh, London School of Economics, GB
Taming the Machines — Horizons of Artificial Intelligence. The Ethics in Information Technology Public Lecture Series
This summer's "Taming the Machines" lecture series sheds light on the ethical, political, legal, and societal dimensions of Artificial Intelligence (AI).
Prof. Dr. Philipp Hacker, European University Viadrina, Frankfurt (Oder), DE
Current AI regulation in the EU and globally focuses on trustworthiness and accountability, as seen in the AI Act and the AI Liability instruments. Yet it overlooks a critical aspect: environmental sustainability. This talk addresses that gap by examining the ICT sector's significant environmental impact. AI technologies, particularly generative models like GPT-4, contribute substantially to global greenhouse gas emissions and water consumption.
The talk assesses how existing and proposed regulations, including EU environmental laws and the GDPR, can be adapted to prioritize sustainability. It advocates a comprehensive approach to sustainable AI regulation that goes beyond the mere transparency mechanisms for disclosing AI systems' environmental footprint proposed in the EU AI Act. The regulatory toolkit must include co-regulation, sustainability-by-design principles, data usage restrictions, and consumption limits, potentially integrating AI into the EU Emissions Trading Scheme. This multidimensional strategy offers a blueprint that can be adapted to other high-emission technologies and infrastructures, such as blockchain, the metaverse, or data centers. Arguably, it is crucial for tackling the twin transformations of our society: digitization and climate change mitigation.
Prof. Dr. Elena Esposito, Universität Bielefeld, DE
Universität Hamburg
Adeline Scharfenberg