Dr. Lucy Osler (University of Exeter, UK)
AI in its many forms is often presented as a driver of “progress”: improving lives, accelerating solutions, and expanding human possibilities. This talk offers a critical framework for assessing such claims. Drawing on a pragmatist understanding of progress, it proposes that genuine progress consists in removing entrenched obstacles to human flourishing – especially where deprivation, exclusion, and domination persist.
Against this standard, I examine how and why AI’s most celebrated promises often misfire. First, the political economy of AI entails massive opportunity costs: while severe deprivation remains cheaply preventable, extraordinary resources are channelled into ever more powerful AI systems. Second, “sustainable AI” narratives often function as a reputational alibi rather than meeting defensible threshold standards of sustainability. Third, some of the most ambitious AI imaginaries carry troubling assumptions about authority and hierarchy – about who decides and who counts.
The critical conclusion is not anti-technology, but firmly pro-justice. It is imperative to resist hype, to ask critical questions, and to accept responsibility for just regulation and reform as a shared political task. Genuine progress, moreover, must begin by taking seriously those at the margins.
Speaker: Prof. Dr. Elena Esposito, Universität Bielefeld, DE
Prof. Dr. Tilo Wesche, Carl von Ossietzky Universität Oldenburg, DE
Artificial intelligence is rapidly becoming a structuring force in contemporary life. From scientific research and public administration to everyday communication and self-understanding, AI systems shape how we act, decide, and relate to one another. Yet their rapid diffusion raises urgent philosophical and political questions: What kind of progress does AI promise and for whom? How do algorithmic systems transform responsibility, agency, and justice? Who is likely to suffer from the watchful eye of AI systems? Can democratic societies meaningfully govern technologies that increasingly govern them?
This semester of Taming the Machines explores these questions from interdisciplinary perspectives in philosophy, political theory, and science and technology studies. We invite you to reflect with us on AI as a site of power and normativity, and to examine its role in economic and political ordering, surveillance and security, knowledge production, and the formation of subjectivity. We also consider more intimate dimensions, such as how interactions with these systems might reshape self-knowledge, dialogue, creativity, and even solitude.
Prof. Dr. Azadeh Akbari, Goethe-Universität Frankfurt, DE
This paper develops the concept of uneven datafication, drawing on literature on coloniality, uneven development, and dependency theory. Uneven datafication refers to uneven development in the contemporary political economy of data, showing how global cycles of differentiation and totalisation perpetuate inequality to sustain capitalist structures. Datafication is neither homogeneous nor universal, but marked by colonial continuities, spatial differentiation, and temporal unevenness. Uneven datafication operates through three interrelated dynamics. First, territorialisation, deterritorialisation, and reterritorialisation produce uneven geographies of digital colonial capitalism, from datafied bodies to platform infrastructures and space-based data centres. Second, dispossession enacts spatial, temporal, and dehumanising violence, ranking populations as more or less valuable and enforcing biopower ‘within’ and necropower ‘beyond’. Third, unequal exchange sustains asymmetrical valuation and circulation of data and data labour, enabling Big Tech and core economies to extract surplus value from peripheral regions.
Uneven datafication thus sustains colonial capitalist accumulation through differentiated dispossession and dependency across populations, spaces, and classes.
Prof. Dr. Darian Meacham (Maastricht University, NL)
The global mean surface temperature record combining sea surface and near-surface air data is central to understanding climate variability and change. Understanding the past record also helps constrain uncertainty in future climate projections. In my talk, I will present a recent study (Sippel et al., 2024, Nature, doi:10.1038/s41586-024-08230-1) that refines our view of the historical record and explore its implications for near-future climate risk.
Past temperature record: The early temperature record (before ~1950) remains uncertain due to evolving measurement methods, limited documentation, and sparse spatial coverage. Independent reconstructions indicate that early ocean temperatures were likely measured too cold by about 0.26 °C relative to land-based estimates, despite strong agreement between the two in other periods. This discrepancy cannot be explained by natural variability; multiple lines of evidence (climate attribution, timescale analysis, coastal data, palaeoclimate records) support a substantial cold bias in early ocean records. While overall warming since the mid-19th century is unchanged, correcting the bias reduces early-20th-century warming trends, lowers estimated global decadal variability, and brings models and observations into closer alignment.
Constraining climate risk: I will close by discussing how these findings sharpen near-future temperature projections and our understanding of climate risk, and how new AI methods may provide an even clearer picture of past climate and near-future climate risk.
Institutions
Universität Hamburg
Adeline Scharfenberg