AI models

Events

[Image: AI for Good]
Monday, November 24th, 2025 | 17:00 - 18:00

Do AI models know what they know? How explainable AI can help us trust what machines learn

Online

AI models can make powerful predictions, from diagnosing diseases to forecasting climate extremes, but understanding why they make those predictions remains one of the biggest challenges in deploying them responsibly.
In this talk, the speaker explores what it really means for an AI system to “know” something. Using a synthetic benchmark dataset inspired by climate prediction tasks, in which the true drivers of the target variable are known, the talk assesses how well explainable AI (XAI) tools recover these drivers under varying levels of noise and data availability – conditions that mirror many real-world scientific and societal applications. Two clear insights emerge: first, explanations become reliable only when models truly learn signal rather than noise; and second, agreement among different explanation methods or models can serve as a practical indicator of whether an AI system has learned something meaningful or is still operating in a state of “epistemic ignorance”.
These findings offer guidance for building AI systems that are not only powerful but also aware of their own limits, a key step toward AI that supports science and society with confidence and transparency.
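To make the setup described above concrete, here is a minimal sketch, not the speaker's actual benchmark: a synthetic dataset whose true drivers are known by construction, a model trained under different noise levels, and a feature-attribution method checked for whether it recovers those drivers. All dataset sizes, model choices, and parameters below are illustrative assumptions.

# Hedged sketch: evaluating whether an explanation method recovers known drivers
# under increasing noise. Uses only scikit-learn's permutation importance as the
# stand-in "XAI" method; the real study may use different models and methods.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_benchmark(n_samples, n_features=10, n_drivers=3, noise=0.1):
    """Target depends only on the first `n_drivers` features; the rest are distractors."""
    X = rng.normal(size=(n_samples, n_features))
    signal = X[:, :n_drivers] @ np.arange(1, n_drivers + 1)  # known linear drivers
    y = signal + noise * signal.std() * rng.normal(size=n_samples)
    return X, y, set(range(n_drivers))

for noise in (0.1, 1.0, 5.0):
    X, y, true_drivers = make_benchmark(n_samples=2000, noise=noise)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    # "Explanation": permutation importance computed on held-out data.
    imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
    top = set(np.argsort(imp.importances_mean)[-len(true_drivers):])

    recovered = len(top & true_drivers) / len(true_drivers)
    print(f"noise={noise:>4}: R^2={model.score(X_te, y_te):.2f}, "
          f"drivers recovered={recovered:.0%}")

As the noise level grows, both predictive skill and the fraction of recovered drivers typically drop together, which is the pattern the abstract points to: explanations are only as trustworthy as the signal the model has actually learned.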

Session Objectives
By the end of this session, participants will be able to:

  • Explain the concept of explainable artificial intelligence (XAI) and why explanation reliability depends on whether a model learns signal rather than noise.
  • Describe how controlled synthetic benchmarks with known ground-truth drivers can be used to objectively evaluate the faithfulness of XAI methods.
  • Analyze how variations in data noise and sample size affect both model performance and the reliability of explanations across different XAI methods.
  • Evaluate when and why cross-method or cross-model agreement can serve as a practical proxy for explanation trustworthiness in the absence of ground truth (see the sketch after this list).
  • Apply these insights to critically assess the use of XAI in scientific, medical, or policy-relevant contexts, identifying situations where explanations are likely meaningful versus misleading.
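The fourth objective can be illustrated with another hedged sketch: compare the feature rankings produced by two independent attribution methods and treat strong rank agreement as evidence that the explanations reflect learned signal rather than noise. The methods chosen here (impurity-based importances versus permutation importance) and the data are illustrative assumptions, not the speaker's setup.

# Hedged sketch: cross-method agreement as a proxy for explanation trustworthiness.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1500, 12))
y = 2 * X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=1500)  # two true drivers plus noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
model = RandomForestRegressor(n_estimators=300, random_state=1).fit(X_tr, y_tr)

# Method 1: impurity-based importances, computed during training.
imp_impurity = model.feature_importances_

# Method 2: permutation importance, computed on held-out data.
imp_perm = permutation_importance(
    model, X_te, y_te, n_repeats=20, random_state=1
).importances_mean

# High rank correlation between the two rankings is read as a sign the model
# learned something meaningful; low agreement would flag "epistemic ignorance".
rho, _ = spearmanr(imp_impurity, imp_perm)
print(f"rank agreement between methods: rho = {rho:.2f}")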

Institutions

  • AI for Good

Universität Hamburg
Adeline Scharfenberg
This e-mail address is protected from spam bots; JavaScript must be enabled to display it.
