Virginia Dignum on The AI Paradox

Artificial intelligence will shape our future in unforeseen ways, and it is easy to fall into the trap of thinking that it could someday dictate the terms of our very existence. But the fact is, the more that AI can do, the more it underscores the irreplaceable qualities of human creativity, empathy, and moral reasoning. This is one of the eight paradoxes of AI that Virginia Dignum explores in this revelatory book. Drawing on her decades of experience in AI research and governance, Dignum cuts through the hype and sensationalism that often surround AI and reveals why the most profound questions it raises are not about technology but ourselves.


What motivated you to write The AI Paradox now?

Virginia Dignum: I have been working in artificial intelligence since the late 1980s, long before AI was something people used and talked about in daily life or worried about in headlines. Over the years, I have seen repeated cycles of excitement, disappointment, hype, and fear. My main conclusion is that while the technology has changed dramatically, many of the underlying questions have not.

Today, AI is no longer confined to research labs or specialized industries; it shapes public services, workplaces, political decision-making, and global power relations. Yet the way we talk about it often swings between exaggerated promises and apocalyptic warnings. Both extremes obscure something essential: AI is not an autonomous force acting upon us, but a set of systems designed, deployed, and governed by people.

I wrote The AI Paradox to help reframe the conversation. Rather than asking what AI will inevitably do to us, I wanted to explore the tensions it exposes: between efficiency and control, innovation and justice, intelligence and responsibility. Drawing on my decades of experience in research, industry, and policy, the book invites readers to engage critically with these tensions and to recognize that the future of AI remains a matter of collective choice, not technological destiny.

Your book is structured around paradoxes rather than predictions or roadmaps. Why are paradoxes a good way to think about AI?

VD: Predictions about AI tend to age badly. They either exaggerate what is imminent or underestimate the social consequences of what is already here. Paradoxes, by contrast, capture something more durable: the tensions and contradictions that persist regardless of technical progress.

Paradoxes have always fascinated me because they expose the limits of how we normally think. They force us to question assumptions that seem obvious and reveal that reality is often more complex than our first intuitions suggest. Some paradoxes involve genuine contradictions, while others only appear contradictory and, when examined more closely, uncover a deeper truth. In both cases, they are powerful tools for reflection.

AI lends itself particularly well to this way of thinking. Many of the claims we hear about AI are framed as simple oppositions: promise versus threat, automation versus control, efficiency versus ethics. But these are rarely either-or choices. Instead, they coexist in tension. The more AI can do, the more we depend on human judgment; the more we rely on data-driven systems, the harder it becomes to agree on what intelligence or fairness actually mean, and so on.

By organizing the book around paradoxes, I am not trying to resolve these tensions once and for all. Rather, I want to make them visible. Paradoxes help us resist easy answers and technological determinism. They remind us that certainty is often misleading, and that understanding AI requires engaging with contradiction, ambiguity, and responsibility. In that sense, paradoxes are not obstacles to clarity. They are a way of achieving it.

You argue that the more capable AI becomes, the more it highlights that humans cannot be replaced. What do you see as the most misunderstood human capability in today’s AI debates?

VD: What is most misunderstood is not creativity or empathy individually, but the way human intelligence seamlessly integrates social understanding, moral judgment, and responsibility. Human intelligence is not just about solving problems efficiently; it is about understanding what matters, to whom, and why.

AI systems can recognize patterns, optimize outcomes, and even mimic aspects of human expression. But they do not understand meaning in this sense, nor do they bear responsibility for consequences. Humans do. This difference becomes more visible, not less, as AI grows more capable.

Ironically, the more we rely on AI, the more important uniquely human capacities become: contextual judgment, ethical reasoning, and the ability to take responsibility when things go wrong. These capacities are not enhancements we can add to AI; they are the very foundation of intelligence that remains uniquely human.

You devote an entire chapter to the difficulty of agreeing on what AI actually is. Why does this lack of agreement matter so much for governance and regulation?

VD: Because what we think AI is determines what we think can, and should, be done about it.

When AI is vaguely defined, it becomes an “empty signifier”: a term that sounds authoritative but means different things to different people. This ambiguity is not innocent. It allows powerful actors to shift narratives, avoid accountability, and present certain technological trajectories as inevitable.

Effective governance requires clarity. This does not mean a single rigid definition, but a shared understanding of the different ways AI functions: as technology, as decision-making infrastructure, and as part of a broader socio-technical ecosystem. Without this, regulation risks being either toothless or misguided.

Disagreement about AI is inevitable. But confusion about what we are governing is not.

Many people hope AI will make society more fair by removing human bias, yet you argue that “less bias is not always more justice.” Can you explain this tension?

VD: Bias is inevitable. It is not simply a technical flaw that can be engineered away with better data or smarter algorithms. All systems, human or artificial, reflect choices about what to measure, what to optimize, and what to ignore. Those choices are normative, not neutral.

Justice is therefore not a statistical property. It cannot be reduced to balancing error rates or equalizing outcomes across groups. An AI system may be designed to minimize measurable bias and still produce unjust results, because it relies on proxies that fail to capture lived realities, historical disadvantages, or structural inequalities. Treating everyone “the same” often entrenches injustice when people start from profoundly unequal positions.

Crucially, deciding what counts as bias, and when it is acceptable or unacceptable, already involves moral judgment. Justice requires interpretation, contextual understanding, and the capacity to question whether the rules themselves are fair. These are not properties of data or models; they are human responsibilities. Reducing bias matters, but justice cannot be automated. Confusing the two obscures inequality rather than addressing it.

A recurring theme in the book is power: who controls AI, who benefits from it, and who bears the risks. What worries you most about the current concentration of AI power?

VD: What worries me most is not the technology itself, but the asymmetry of influence surrounding it. A small number of actors, primarily large private companies, have disproportionate control over how AI is developed, deployed, and framed in public discourse.

This concentration of power shapes not only markets, but also narratives: what problems AI is meant to solve, what risks are emphasized or downplayed, and whose voices are heard in governance debates. When decision-making power is centralized, democratic oversight becomes harder, and meaningful public participation is sidelined.

AI amplifies existing power structures unless we actively intervene. That is not a technological problem; it is a political one.

You are skeptical that superintelligence is the real problem we should be worried about. If not superintelligent machines, what is the central risk AI poses to society today?

VD: The central risk is not that machines will outthink us, but that humans will abdicate responsibility.

By framing AI as an autonomous decision-maker, especially in high-stakes domains, we risk normalizing the idea that accountability can be delegated to systems that cannot understand values, consequences, or justice. The danger lies in granting authority without responsibility.

Our most pressing challenges, such as climate change, inequality, and democratic erosion, are not problems of insufficient intelligence or too little data. They are problems of coordination, governance, and collective will. No amount of superintelligence will solve them if we avoid making difficult political and moral choices.

The paradox is that the more we chase technological solutions, the more urgently we need human judgment, cooperation, and courage.


Virginia Dignum is an internationally recognized expert in AI ethics and policy who has led initiatives for the European Commission, the United Nations, the World Economic Forum, UNESCO, and UNICEF, among others. She is professor of responsible artificial intelligence at Umeå University in Sweden and the author of Responsible Artificial Intelligence.