Recent advances in machine learning have revolutionized dynamical modeling, yet AI weather and climate models often suffer from instability and unphysical drift when integrated over long timescales. This talk unifies three complementary works addressing this challenge. First, we present a theoretical eigenanalysis of neural autoregressive models that establishes a semi-empirical framework linking inference-time stability to the spectrum of the model's Jacobian. This analysis reveals how integration-constrained architectures suppress unstable eigenmodes and enable predictable error growth. Building on this foundation, we identify spectral bias, a universal tendency of deep networks to under-represent high-wavenumber dynamics, as the root cause of instability in AI weather models. We demonstrate how higher-order integration schemes and spectral regularization, implemented in the FouRKS framework, mitigate this bias and produce century-scale stable emulations of turbulent flows. Finally, we translate these theoretical insights into practice with LUCIE-3D, a data-driven climate emulator trained on reanalysis data. LUCIE-3D captures forced responses to CO₂, reproduces stratospheric cooling and surface warming, and remains computationally efficient. Together, these results chart a rigorous pathway from mathematical theory to physically consistent AI climate models capable of stable, interpretable, and trustworthy long-term Earth-system emulation.
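
As a concrete illustration of the eigenanalysis sketched above (a toy example in JAX, not the FouRKS or LUCIE-3D code; the network tendency_net and all sizes and constants are hypothetical), the snippet below wraps a learned tendency in a classical RK4 step, one example of an integration-constrained architecture, and checks the spectral radius of the resulting one-step Jacobian. Since linearized error obeys e_{t+1} ≈ J e_t, eigenvalues of J inside the unit circle are the heuristic condition for non-exploding autoregressive rollouts.

    # Minimal sketch (assumed toy setup, not the authors' implementation):
    # diagnose rollout stability via the spectrum of the one-step Jacobian.
    import jax
    import jax.numpy as jnp

    def tendency_net(params, x):
        """Toy learned tendency g(x) = dx/dt: a one-hidden-layer MLP."""
        w1, b1, w2, b2 = params
        h = jnp.tanh(x @ w1 + b1)
        return h @ w2 + b2

    def rk4_step(params, x, dt=0.1):
        """Integration-constrained step: advance x with classical RK4 on g,
        rather than letting the network map x_t -> x_{t+1} directly."""
        g = lambda s: tendency_net(params, s)
        k1 = g(x)
        k2 = g(x + 0.5 * dt * k1)
        k3 = g(x + 0.5 * dt * k2)
        k4 = g(x + dt * k3)
        return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

    def spectral_radius(step_fn, x):
        """Largest |eigenvalue| of the one-step Jacobian J = d(step)/dx.
        Linearized error grows as e_{t+1} ~ J e_t, so rho(J) <= 1 is the
        heuristic criterion for non-exploding error growth."""
        J = jax.jacobian(step_fn)(x)
        return jnp.max(jnp.abs(jnp.linalg.eigvals(J)))

    key = jax.random.PRNGKey(0)
    d, hdim = 8, 16
    k1, k2, k3 = jax.random.split(key, 3)
    params = (
        0.3 * jax.random.normal(k1, (d, hdim)),
        jnp.zeros(hdim),
        0.3 * jax.random.normal(k2, (hdim, d)),
        jnp.zeros(d),
    )
    x0 = jax.random.normal(k3, (d,))
    rho = float(spectral_radius(lambda x: rk4_step(params, x), x0))
    print(f"spectral radius of one-step Jacobian: {rho:.3f}",
          "(stable)" if rho <= 1.0 else "(unstable eigenmodes present)")

In this toy setting, tightening the spectral radius below one is only a linearized, state-dependent diagnostic; the talk's framework develops the semi-empirical version of this argument for full neural emulators.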