Noreen Herzfeld: ‘In AI We Trust: Power, Illusion and Control of Predictive Algorithms’ by Helga Nowotny

In a recent talk at the Markkula Center for Applied Ethics, Shannon Vallor suggested that, while we think of AI as the epitome of the technological future, current AI programs are actually reflections of the past. In In AI We Trust, Helga Nowotny makes a similar case. Her primary concern is that, as we become increasingly dependent on predictive algorithms and their illusion of control, we will slide into a future largely determined by our past, closing off alternative possibilities that might serve us better.

Nowotny divides her argument into four basic points. First, she notes the paradox that, while ‘predictions are obviously about the future, . . . they act directly on how we behave in the present’ (page 5). Based on the past, predictive algorithms influence our actions in the present, and those actions, in turn, determine our future. Thus, while our digital technologies expand our spatial reach, allowing us to communicate almost instantaneously across the globe, they compress our sense of time. We can now look deeply into the past, as our telescopes view deep space, our DNA analyses track the history of our species, and our AI programs allow us to crunch all the resultant data. We predict the future as we track weather systems, incipient traffic jams, and trading on financial markets. This two-faced vision given to us by our technologies dissolves the linearity of time, fusing past, present, and future. Nowotny’s concern is that such a dissolution ‘risks creating a closed and deterministic world run by efficient prediction machines whose inner workings remain obscure and whose impact on us goes unquestioned’ (page 51). Too much analysis of the past might close our minds to the openness and unpredictability of the future.

Nowotny next turns to the virtual world of cyberspace, examining how, through smart devices, RFID chips, avatars, gaming, virtual reality, and social media, we have created a digital ‘mirror world.’ It is this mirror world that helps us determine our actions in the ‘real world,’ extending our agency and our reach. But as we increasingly influence, and are influenced by, what happens online, we also open ourselves to new levels of surveillance, loss of privacy, and a blurred sense of identity, both individually and collectively. As story-telling creatures, might we lose control of the future if we allow AI to tell our story? Nowotny examines the narrative, originating in the Enlightenment, of continual progress, noting how we have moved from a focus on providing the means for survival to one that dreams of having ‘perfect body and sharper mind, in the aspiration for a longer and healthier life . . . that will bring us closer to immortality’ (page 101). She considers this current iteration of the narrative of progress ‘broken because it cannot change tack and adopt a holistic approach in facing the challenges ahead . . . when seemingly insuperable problems block its way’ (page 106).

Nowotny suggests as a solution a new ethos for emerging AI research and development, one that embraces ambiguity and embodies the wisdom of the humanities and our cultural heritage. While this is a reasonable, indeed vital, call, she is vague about how one might bring it about. How can AI programs embrace ambiguity without losing utility? And whose cultural heritage should these programs embrace? It is already a criticism of AI that it is heavily biased toward Western culture, as most current programs are trained predominantly on English-language text. Nowotny rightly notes that current attempts at instilling ethical principles in AI programs reduce ethics to a checklist. Historical experience tells us this is a futile enterprise. However, she does not present much of an alternative.

Writing during the Covid pandemic, Nowotny ends the book with a chapter discussing how the attendant social distancing measures affected patterns of work and socialization. After her general call for a new approach to AI ethics, this final chapter reads more like a coda or appendix. Only in the last few pages does she return to her primary thesis: that, insofar as we trust AI algorithms and allow them to determine our behavior, we self-domesticate our species and risk losing our critical judgement and agency.

Nowotny also begins by introducing the concept of the Anthropocene, which left this reader hoping to find some consideration of the massive ecological effects of AI’s energy and water needs, a consideration nowhere to be found. She leaves it to the reader to surmise that she is using ‘Anthropocene’ in a social rather than an ecological sense, focusing on how our interaction with the digital changes us rather than our planet. But since she never really discusses AI’s intersection with, and effects on, climate change, it might have been better either to clarify this usage or to omit the term altogether.

Despite these drawbacks, In AI We Trust is an engaging read, ranging widely over historical background, social milieus, and possible technological futures. While not an easy read, it offers provocative questions and ponderable insights throughout. I can imagine this book sparking thoughtful debate among graduate students in a variety of social science and humanities disciplines, as well as among interested lay readers.

‘In AI We Trust: Power, Illusion and Control of Predictive Algorithms’ by Helga Nowotny was published in 2024 by Polity (ISBN 978-1-5095-6546-7). 200pp.

Noreen Herzfeld is the Director of Benedictine Spirituality and the Environment at St. John’s School of Theology and Seminary, Collegeville, MN. She is also Senior Research Associate at the Institute for Philosophical and Religious Studies, ZRS Koper, Slovenia.