Weather forecasting is critical for a range of human activities, including transportation, agriculture, and industry, as well as the safety of the general public. Over the last two years, machine learning models have shown the potential to transform the complex weather prediction pipeline, but current approaches still rely on numerical weather prediction (NWP) systems, limiting forecast speed and accuracy. This talk will give some background on these developments and then introduce a new generation of machine learning models that can replace the entire operational NWP pipeline. Aardvark Weather, an end-to-end data-driven weather prediction system, ingests raw observations and outputs global gridded forecasts and local station forecasts. Further, it can be optimised end-to-end to maximise performance on quantities of interest. The talk will show that the system outperforms an operational NWP baseline for multiple variables and lead times, for both gridded and station forecasts. Finally, the talk will close by discussing how these ideas might develop over the next few years, including their application to multiple parts of the Earth system on multiple time-scales, and their potential impact on climate modelling.
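To make the idea of optimising a whole pipeline for a quantity of interest concrete, here is a minimal sketch in PyTorch. Everything in it (the tiny stand-in model, the weighted grid cells, the synthetic data) is invented for illustration and does not reflect the actual Aardvark architecture or training setup.

```python
import torch
import torch.nn as nn

# Tiny stand-in for an end-to-end model mapping raw observations to a
# gridded forecast; the real Aardvark architecture is not assumed here.
model = nn.Sequential(nn.Linear(64, 256), nn.GELU(), nn.Linear(256, 128))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

obs = torch.randn(32, 64)      # synthetic raw observations
target = torch.randn(32, 128)  # synthetic gridded truth

# Weight the loss towards a quantity of interest (say, cells near stations),
# so the entire pipeline is optimised for that quantity end-to-end.
weights = torch.ones(128)
weights[:16] = 10.0  # hypothetical indices of the cells we care most about

for step in range(200):
    opt.zero_grad()
    loss = (weights * (model(obs) - target) ** 2).mean()
    loss.backward()
    opt.step()
```

Because gradients flow from the weighted loss all the way back to the observation encoder, every stage of the pipeline adapts to the chosen objective, which is the key difference from tuning NWP components in isolation.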
Institutions
Recent advances in machine learning have revolutionized dynamical modeling, yet AI weather and climate models often suffer from instability and unphysical drift when integrated over long timescales. This talk unifies three complementary works addressing this challenge. First, we present a theoretical eigenanalysis of neural autoregressive models that establishes a semi-empirical framework linking inference-time stability to the spectrum of the model’s Jacobian. This analysis reveals how integration-constrained architectures suppress unstable eigenmodes and enable predictable error growth. Building on this foundation, we identify spectral bias—a universal tendency of deep networks to under-represent high-wavenumber dynamics—as the root cause of instability in AI weather models. We demonstrate how higher-order integration schemes and spectral regularization, implemented in the FouRKS framework, mitigate this bias and produce century-scale stable emulations of turbulent flows. Finally, we translate these theoretical insights into practice with LUCIE-3D, a data-driven climate emulator trained on reanalysis data that captures forced responses to CO₂, reproduces stratospheric cooling and surface warming, and remains computationally efficient. Together, these results chart a rigorous pathway from mathematical theory to physically consistent AI climate models capable of stable, interpretable, and trustworthy long-term Earth-system emulation.
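As a hedged illustration of the eigenanalysis idea, the sketch below estimates the dominant eigenvalue magnitude of a one-step emulator's Jacobian by power iteration with Jacobian-vector products. The toy network and the convergence assumptions (a simple, real dominant eigenvalue) are ours, not the talk's actual procedure; a spectral radius above one signals perturbation growth under autoregressive rollout.

```python
import torch
import torch.nn as nn

# Toy one-step emulator; the models in the talk are far larger, but the
# diagnostic is the same.
step = nn.Sequential(nn.Linear(32, 64), nn.Tanh(), nn.Linear(64, 32))
x0 = torch.randn(32)  # reference state to linearise around

# Power iteration on J = d step(x)/dx via Jacobian-vector products. If the
# dominant eigenvalue magnitude exceeds 1, small perturbations grow and
# long autoregressive rollouts are prone to instability.
v = torch.randn(32)
v = v / v.norm()
for _ in range(100):
    _, jv = torch.autograd.functional.jvp(step, x0, v)
    v = jv / jv.norm()
_, jv = torch.autograd.functional.jvp(step, x0, v)
print(f"estimated dominant |eigenvalue|: {jv.norm().item():.3f}")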
Institutions
Stable mobile communication requires understanding radio propagation in specific areas, especially when using high-frequency bands like millimeter waves, which are strongly affected by environmental factors such as buildings. Direct measurement of propagation characteristics across areas and frequencies is impractical due to the cost and effort involved. To address this, AI/ML-based methods can estimate area-wide propagation using limited measurement data and environmental information such as building layouts. Effective application of this approach involves not only building AI/ML models but also selecting the most relevant data to improve estimation accuracy. This challenge invites participants to explore AI/ML models and data selection methods using the provided propagation loss data and 3D maps.
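As a rough sketch of the kind of baseline a participant might build, the snippet below fits a gradient-boosted regressor to synthetic propagation-loss data. The features (link distance, carrier frequency, building counts and heights along the path) stand in for quantities one could derive from the challenge's measurements and 3D maps; none of it uses the actual challenge data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical features derived from measurements and a 3D map: link
# distance [m], carrier frequency [GHz], buildings blocking the direct
# path, and mean building height along the path [m].
X = np.column_stack([
    rng.uniform(10, 500, n),
    rng.uniform(2, 60, n),
    rng.integers(0, 8, n),
    rng.uniform(0, 30, n),
])
# Synthetic loss [dB]: a free-space-like trend plus blockage penalties.
y = (32.45 + 20 * np.log10(X[:, 0]) + 20 * np.log10(X[:, 1])
     + 3.0 * X[:, 2] + 0.2 * X[:, 3] + rng.normal(0, 2, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print(f"MAE: {mean_absolute_error(y_te, model.predict(X_te)):.2f} dB")
```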
In this webinar, the top three teams from the challenge will present their proposed approaches. A variety of strategies were proposed to solve the challenge problems, and we believe attendees will gain new insights into the application of AI/ML for radio wave propagation estimation. Additionally, KDDI Research has allocated a prize of 3,000 CHF for the challenge. Along with an explanation of the evaluation results, the prize amounts will be announced during the session. Please note that the technical evaluation results for the submitted teams are available under the “Results” tab at the following URL: https://challenge.aiforgood.itu.int/match/matchitem/112.
Institutions
Despite the huge success of foundation models across fields, they still suffer from hallucinations and can produce physically inconsistent outputs. To leverage foundation models for climate science, it is critical to integrate first principles and physical laws into the learning and reasoning of these models. In this talk, I will discuss our ongoing efforts to ground foundation models, including diffusion models and large language models, for climate science. In particular, I will discuss dynamics-informed diffusion models for emulating complex fluids and an adaptive framework that lets LLM agents use scientific tools. I will demonstrate our methods by building an autonomous LLM agent as a climate co-scientist.
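One way to picture the grounding idea is a guidance term in reverse diffusion that penalises violations of a physical constraint at each sampling step. The sketch below is purely illustrative: the denoiser is a stub and the "conservation law" is a made-up residual, not the dynamics-informed models from the talk.

```python
import torch

def denoiser(x, t):
    # Stub standing in for a trained denoising network.
    return 0.95 * x

def physics_residual(x):
    # Made-up conservation constraint: the field should integrate to zero.
    return x.sum() ** 2

x = torch.randn(64)
for t in reversed(range(50)):
    with torch.no_grad():
        x = denoiser(x, t) + 0.1 * torch.randn_like(x)  # reverse-diffusion step
    x = x.detach().requires_grad_(True)
    grad, = torch.autograd.grad(physics_residual(x), x)
    x = (x - 0.01 * grad).detach()  # guidance: nudge towards the constraint
print(f"final residual: {physics_residual(x).item():.4f}")
```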
Learning Objectives:
By the end of this session, participants will be able to:
Institutions
The Hamburg Node of the Digital Earths Global Hackathon is part of the World Climate Research Programme (WCRP) Global km-Scale Hackathon, an initiative designed to advance the analysis and development of high-resolution Earth-system models.
The Hamburg event, taking place from May 12 to 16, 2025, will gather participants to collaborate on hacking, bug-fixing, and learning in a dynamic, hands-on environment. This hackathon is part of the larger WCRP effort to push the boundaries of climate system modeling and digital innovation globally.
For more details about the global hackathon and its objectives, please visit the official WCRP event page.
Registration closes on April 21, 2025, and a registration fee of €150 applies. Quicklink to external registration website
Program
Rough agenda (a detailed program will be shared closer to the event)
Welcome to the 2025 edition of the WarmWorld-ESiWACE3 Summer School, in the old town of Lauenburg near Hamburg!
The Summer School will give an insight into ICON, one of the state-of-the-art weather and climate models. Students will learn basic meteorology concepts and will be invited to tackle code challenges using intermediate and advanced approaches from software engineering, high-performance computing, and data analysis, all under the guidance of experienced lecturers from these fields.
Important dates
Academic Programme:
Invited professors and computational scientists from partner institutions contribute to a 10-day programme of 60+ hours of lectures and hands-on exercises, covering two main themes, climate modelling and modern scientific computing, which span a variety of topics: here
Institutions
Satellite remote sensing enables a wide range of downstream applications, including habitat mapping, carbon accounting, and strategies for conservation and sustainable land use. However, satellite time series are voluminous and often corrupted, making them challenging to use: the scientific community’s ability to extract actionable insights is often constrained by the scarcity of labelled training datasets and the computational burden of processing temporal data.
The presentation will introduce TESSERA (Time-series Embeddings of Surface Spectra for Earth Representation and Analysis), an open foundation model that preserves spectral-temporal signals in 128-dimensional latent representations at 10-meter resolution globally. The model uses self-supervised learning to summarise petabytes of Earth observation data. TESSERA is shown to be label-efficient and closely matches or outperforms state-of-the-art alternatives. By preserving temporal phenological signals that are typically lost in conventional approaches, TESSERA enables new insights into ecosystem dynamics, agricultural food systems, and environmental change detection. Moreover, the open-source implementation supports reproducibility and extensibility, while the privacy-preserving design allows researchers to maintain data sovereignty. To our knowledge, TESSERA is unprecedented in its ease of use, scale, and accuracy: no other foundation model provides analysis-ready outputs, is open, and delivers global, annual coverage at 10 m resolution using only spectral-temporal features at the pixel level.
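A hedged sketch of the intended downstream workflow: treat the per-pixel embeddings as fixed features and train a small classifier on a handful of labels. The arrays below are synthetic stand-ins; TESSERA's actual export format and access path are not assumed here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for TESSERA-style 128-d per-pixel embeddings and a
# habitat label; real embeddings would be loaded from the model's exports.
embeddings = rng.normal(size=(5000, 128))
labels = (embeddings[:, :4].sum(axis=1) > 0).astype(int)

# Label efficiency: fit a downstream classifier on only 200 labelled pixels.
train_idx = rng.choice(len(labels), size=200, replace=False)
clf = LogisticRegression(max_iter=1000)
clf.fit(embeddings[train_idx], labels[train_idx])
print(f"accuracy on all pixels: {accuracy_score(labels, clf.predict(embeddings)):.3f}")
```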
This session is part of a two-session series, providing the theoretical introduction to TESSERA. The second session, a hands-on workshop, will be held on February 2nd, 2026.
Institutions
Welcome to the ITMC-Conference 2025
This year, the ITMC-Conference centres on the theme "Resilient IT: resilience as the key to a crisis-proof and future-ready enterprise amid the competing pressures of political developments, skills shortages, the climate crisis, a lack of diversity, and economic challenges".
The ITMC-Conference is the ideal opportunity for prospective students interested in the master's programme in IT-Management and -Consulting (ITMC) at Universität Hamburg. On June 2, 2025, we invite you to the Lichthof of the Staats- und Universitätsbibliothek to gain an authentic insight into the programme's content, projects, and environment.
Experience exciting keynotes and talks by experts from academia and practice, learn about current trends in IT management and consulting, and exchange ideas directly with students, alumni, and lecturers. The conference is organised by students of the ITMC programme and offers you the perfect platform to ask questions, make contacts, and find out how the programme prepares you for a career in IT.
Take this chance to get to know one of the most innovative IT degree programmes and become part of a dedicated community; we look forward to seeing you!
Institutions
Climate variability and weather extremes pose profound challenges for prediction, preparedness, and resilience. Traditional approaches often rely on predefined indices or supervised learning methods, which can overlook unexpected patterns or reinforce biases inherent in labeled datasets. This keynote explores how unsupervised learning techniques can uncover hidden patterns in high-dimensional climate data. I will highlight recent innovations that adapt established methods to reveal properties not captured by conventional architectures, offering new perspectives on modes of variability and extreme events. For instance, a knowledge-guided autoencoder can disentangle distinct Pacific climate modes with differing spectral signatures, while a custom hyperparameter search can optimize self-organizing maps to produce smooth, interpretable pathways among weather regimes. Together, these advances help uncover processes and mechanisms that may underlie established climate and weather phenomena. Ultimately, unsupervised learning provides a powerful lens for scientific discovery, with implications for understanding, prediction, and decision-making in a changing climate.
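For flavour, here is a minimal version of such a hyperparameter search for self-organizing maps, scoring each candidate by quantization error plus topographic error so that the selected map is both accurate and topologically smooth. It uses the third-party minisom package and synthetic data; the talk's actual search space and criteria may differ.

```python
import numpy as np
from itertools import product
from minisom import MiniSom  # third-party package: pip install minisom

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 20))  # stand-in for daily circulation fields

# Score each (sigma, learning_rate) candidate by quantization error plus
# topographic error, favouring maps that are both accurate and smooth.
best = None
for sigma, lr in product([0.5, 1.0, 1.5], [0.1, 0.5, 1.0]):
    som = MiniSom(4, 4, data.shape[1], sigma=sigma, learning_rate=lr,
                  random_seed=0)
    som.train_random(data, 2000)
    score = som.quantization_error(data) + som.topographic_error(data)
    if best is None or score < best[0]:
        best = (score, sigma, lr)
print(f"best (score, sigma, learning_rate): {best}")
```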
Institutions
Despite the global expansion of mobile and internet networks, critical connectivity gaps persist – particularly in areas most vulnerable to natural hazards and other emergencies. Reaching these populations with timely early warnings remains a pressing challenge for achieving the UN Early Warnings for All (EW4All) initiative.
Although approximately 97.9% of the world’s population is covered by mobile network technology, gaps remain, leaving millions without reliable access to life-saving communications.
To address this challenge, the Telecommunication Development Bureau (BDT) of the International Telecommunication Union (ITU), in partnership with Microsoft AI for Good Lab, Planet Labs and the Institute of Health Metrics and Evaluation (IHME) at the University of Washington, has developed the Early Warning Connectivity Map (EWCM). This geospatial tool integrates connectivity and coverage datasets with high-resolution population density and hazard exposure datasets to produce granular, subnational maps that identify connectivity ‘coldspots’.
The EWCM helps ICT regulators and national stakeholders to visualize where connectivity investments or alternative communication solutions – such as radio or satellite – are needed to ensure access to early warnings. This session will showcase the EWCM using a case study from Liberia, illustrating its value for strengthening early warning systems nationally. Learn more
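The coldspot logic can be pictured as a simple raster overlay: flag grid cells with many people, high hazard exposure, and no coverage. The sketch below uses synthetic grids; the real EWCM integrates licensed datasets at much finer granularity.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (100, 100)  # toy national grid

# Synthetic placeholder layers; the real EWCM uses curated population,
# coverage, and hazard datasets at much finer resolution.
population = rng.poisson(50, shape)  # people per cell
covered = rng.random(shape) > 0.1    # True where mobile coverage exists
hazard = rng.random(shape)           # normalised hazard exposure

# Coldspots: populated, hazard-exposed cells with no coverage.
coldspot = ~covered & (population > 60) & (hazard > 0.8)
print(f"coldspot cells: {coldspot.sum()}, "
      f"people affected: {population[coldspot].sum()}")
```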
Institutions
As AI projects gain traction in the humanitarian sector, securing their funding and long-term sustainability remains a critical challenge. This session explores how AI initiatives can align with the SDGs and address pressing climate concerns, while also examining innovative funding models and cross-sector partnerships. From philanthropic investments to public-private collaborations, join us to uncover strategies for ensuring AI projects not only launch successfully but also endure to create lasting, scalable impact in humanitarian efforts. Participants will gain insights into best practices for funding AI projects and explore case studies showcasing successful funding models and partnerships.
Key Learning Objectives:
Target Audience: This event is designed for humanitarian workers at all levels, policymakers, academics, data specialists, communication specialists, and technology experts who are involved in crisis response and interested in the ethical use of AI.
Prerequisites: No prerequisite knowledge is required, though a basic understanding of AI and humanitarian principles is recommended.
Institution
Earth system science increasingly relies on machine learning to analyze complex, multivariate, and spatiotemporal data. However, the validity of these models critically depends on the assumption that training and deployment data share similar statistical properties – a condition often violated in real-world environmental applications. This presentation addresses the risks associated with non-stationary training data distributions, arising from climate change, evolving land use, or sensor shifts over time. We show how such distribution shifts can lead to degraded model performance, biased predictions, and misleading scientific conclusions. Through different examples, we illustrate the mechanisms and consequences of non-stationarity. We then discuss methodological solutions, including domain adaptation, continual learning, and uncertainty quantification techniques, that help mitigate these effects and improve model robustness. By combining insights from machine learning and earth system science, this talk aims to foster awareness of distributional risks and promote the development of adaptive, interpretable, and trustworthy models for understanding and predicting Earth’s dynamic systems.
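A minimal example of screening for such shifts: a per-feature two-sample Kolmogorov-Smirnov test between training and deployment data. This is only a first-pass diagnostic (it ignores multivariate structure), and the data here are synthetic.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(5000, 5))   # training-period features
deploy = rng.normal(0.0, 1.0, size=(5000, 5))  # deployment-period features
deploy[:, 2] += 0.5  # e.g. a warming trend shifts one feature's mean

# Flag features whose marginal distribution differs between periods.
for j in range(train.shape[1]):
    stat, p = ks_2samp(train[:, j], deploy[:, j])
    flag = "SHIFT" if p < 0.01 else "ok"
    print(f"feature {j}: KS={stat:.3f}, p={p:.1e} [{flag}]")
```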
Institutions
The global mean surface temperature record, which combines sea-surface temperatures with near-surface air temperatures, is central to understanding climate variability and change. Understanding the past record also helps constrain uncertainty in future climate projections. In my talk, I will present a recent study (Sippel et al., 2024, Nature, doi:10.1038/s41586-024-08230-1) that refines our view of the historical record, and explore its implications for near-future climate risk.
Past temperature record: The early temperature record (before ~1950) remains uncertain due to evolving measurement methods, limited documentation, and sparse coverage. Independent reconstructions indicate that early ocean temperatures were likely measured about 0.26 °C too cold relative to land-based estimates, despite strong agreement between the two in other periods. This discrepancy cannot be explained by natural variability; multiple lines of evidence (climate attribution, timescale analysis, coastal data, palaeoclimate records) support a substantial cold bias in early ocean records. While overall warming since the mid-19th century is unchanged, correcting the bias reduces early-20th-century warming trends, lowers global decadal variability, and brings models and observations into closer alignment.
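A back-of-the-envelope illustration of why such a correction lowers early-20th-century trends, assuming (purely for illustration) a cold bias that decays linearly to zero by 1950; the study's actual bias structure and reconstruction are far more careful.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 1981)
truth = 0.005 * (years - 1900)  # slow underlying warming, degC anomaly

# Hypothetical cold bias in ocean records: -0.26 degC around 1900, decaying
# linearly to zero by 1950 (an illustrative assumption, not the study's).
bias = np.where(years < 1950, -0.26 * (1950 - years) / 50, 0.0)
observed = truth + bias + rng.normal(0, 0.05, years.size)
adjusted = observed - bias  # remove the inferred cold bias

mask = years <= 1945
for name, series in [("raw", observed), ("adjusted", adjusted)]:
    trend = np.polyfit(years[mask], series[mask], 1)[0] * 10
    print(f"{name}: {trend:+.3f} degC/decade over 1900-1945")
```

Because a decaying cold bias masquerades as extra warming, removing it roughly halves the apparent early-century trend in this toy setup.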
Constraining climate risk: I will close my talk by discussing how these findings sharpen near-future temperature projections and our understanding of climate risk, and how new AI methods may provide an even clearer picture of both past climate and near-future climate risk.
Institutions
Anselm Fehnker, Senior AI Project Manager at the Artificial Intelligence Center Hamburg (ARIC) e.V.
Louisa Rockstedt, Project Manager at the Artificial Intelligence Center Hamburg (ARIC) e.V.
Artificial intelligence is considered a key technology of the future. But what can it contribute when it comes to one of our most important and most threatened resources, water? And what do we need to watch out for in order to combine technological innovation with ecological responsibility?
In this talk, we take a look at the intersection of AI, water, and the climate crisis. Drawing on concrete examples from research and practice, we explore several perspectives: How can AI help detect floods, droughts, and water pollution at an early stage? Can smart technologies contribute to more efficient water use and conservation? And what risks does AI itself pose to the environment and to water?
We look forward to reflecting on and discussing with you, in the beautiful setting of the Speicherstadt, both the potential and the challenges of AI in the context of the ecological transformation.
Institutions
Universität Hamburg
Adeline Scharfenberg