AI models can make powerful predictions, from diagnosing diseases to forecasting climate extremes, but understanding why they make those predictions remains one of the biggest challenges in deploying them responsibly.
In this talk, the speaker explores what it really means for an AI system to “know” something. Using a synthetic benchmark dataset inspired by climate prediction tasks, in which the true drivers of the target variable are known, the talk assesses how well explainable AI (XAI) tools recover those drivers under varying levels of noise and data availability, conditions that mirror many real-world scientific and societal applications. Two clear insights emerge: first, explanations become reliable only when models truly learn signal rather than noise; and second, agreement among different explanation methods or models can serve as a practical indicator of whether an AI system has learned something meaningful or is still operating in a state of “epistemic ignorance”.
These findings offer guidance for building AI systems that are not only powerful but also aware of their own limits, a key step toward AI that supports science and society with confidence and transparency.
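The second insight, cross-method agreement as a sanity check, is easy to illustrate. The sketch below is a minimal, hypothetical illustration rather than the speaker's actual benchmark: it uses scikit-learn's make_regression as a stand-in for the climate-inspired synthetic dataset, and compares permutation importance from a random forest with the absolute coefficients of a ridge regression as two explanation routes. Under these assumptions, both the agreement between the explanations and the recovery of the known drivers typically degrade as the noise level grows.

```python
# Minimal sketch (assumed setup, not the speaker's benchmark): check whether
# two explanation methods agree, and whether they recover the known drivers,
# as noise increases.
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def attribution_agreement(noise_level, n_samples=500, n_features=20,
                          n_informative=5, seed=0):
    # Synthetic task: with shuffle=False, only the first `n_informative`
    # columns drive the target, so the "true drivers" are known.
    X, y = make_regression(n_samples=n_samples, n_features=n_features,
                           n_informative=n_informative, noise=noise_level,
                           shuffle=False, random_state=seed)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=seed)

    # Two different models, each paired with its own explanation method.
    forest = RandomForestRegressor(n_estimators=200, random_state=seed).fit(X_tr, y_tr)
    linear = Ridge().fit(X_tr, y_tr)

    # Explanation 1: permutation importance of the forest on held-out data.
    perm = permutation_importance(forest, X_te, y_te, n_repeats=10, random_state=seed)
    attr_forest = perm.importances_mean
    # Explanation 2: absolute coefficients of the linear model.
    attr_linear = np.abs(linear.coef_)

    # Agreement between the two explanations (rank correlation), and how well
    # the forest's explanation recovers the known drivers (top-k hit rate).
    agreement, _ = spearmanr(attr_forest, attr_linear)
    true_drivers = set(range(n_informative))
    top_k = set(np.argsort(attr_forest)[-n_informative:])
    recovery = len(true_drivers & top_k) / n_informative
    return agreement, recovery

for noise in (1.0, 50.0, 500.0):
    agreement, recovery = attribution_agreement(noise)
    print(f"noise={noise:6.1f}  method agreement={agreement:+.2f}  driver recovery={recovery:.2f}")
```

When the two explanations disagree strongly, or the known drivers are not recovered, the model is likely fitting noise, which is the practical "epistemic ignorance" signal the talk describes.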
Session Objectives
By the end of this session, participants will be able to:
Institutions