by Yori Kamphuis and Stefan Leijnen*
“Whoever becomes the leader in AI [artificial intelligence] will become the ruler of the world,” Vladimir Putin once famously said.
In the current geopolitical theater, a global race towards leveraging artificial intelligence (AI) should come as no surprise. The United States has made substantial investments in AI to extend its role as a global superpower, while other economies also want a shot at becoming a top contender or, failing that, not falling too far behind.
China announced in 2017 that it wants to lead the world in AI by 2030, strategically allocating funds guided by a national strategy for AI. China is already closing the gap in scientific AI publications, and has been filing more AI patent applications than any other country since 2013. The US and China both outpace the EU, which follows at a distance in investments and output, with Israel, India, Russia and other economic regions lagging even further behind.
Let’s explore this arms race analogy and ask what it means to be ahead in this race for AI dominance.
In the Cold War, the race for nuclear arms could lead either to a state of stability in the face of mutually assured destruction, or to mutual destruction itself. However, in this present-day technological arms race, there is no clear racetrack or finish line. Whether you’re ahead or behind depends on the direction you want to be heading, or the destination you have in mind. With respect to where you’re going to end up in an open-ended future, the direction you’re facing is more important than how fast you’re going.
The three kinds of AI
AI dominance can take on many forms. We tend to think of AI as a technology, but it is first and foremost an ambition to create systems that display intelligent behavior. We can roughly identify three technological manifestations of this ambition. First, programmed AI that humans design in detail with a particular function in mind, like (most) manufacturing robots, virtual travel agents and Excel sheet functions. Second, statistical AI that learns to design itself given a particular predefined function or goal. Like humans, these systems are not designed in detail, and also like humans, they can make decisions without necessarily having the capability to explain why they made them. The third manifestation is AI-for-itself: a system that acts autonomously, responsibly and in a trustworthy manner, and that may or may not be conscious. We don’t know, because such a system does not yet exist.
The past decade has seen the unfolding of a global AI arms race fueled by statistical AI. It is relevant to note here that the word statistics stems from state: the science dealing with data about the condition of a state or community. The modern rise of AI is linked to this original meaning, which helps explain why it so often raises profound ethical questions about the relation between individuals and institutions. Census data was historically used by the state to create public policies by monitoring a population that would be impossible to track on the individual level, but which can be modeled with a sufficient level of detail through empirical sampling. Uncoincidentally, this approach is also followed by market-driven corporations, institutions and other organizations that use statistical AI to model and monitor individuals online and offline.
This brings us back to the geopolitical stage, where China’s state-driven approach to leveraging emerging technologies is often contrasted with a market-driven approach to technology development in the United States. In this frame, the EU and other economic regions are left to decide how to align themselves on this state-market axis. However, this frame is misleading, as it overvalues the role of economic investment policies and undervalues the critical role of data ownership for statistical AI.
A more productive frame would therefore contrast the state- and market-driven approaches with a citizen-driven approach to AI, where the rights of the individual are central to how and why AI is used. The EU has embraced this ideal, first with the GDPR regulation and now again with preliminary steps towards guidelines for Trustworthy AI. In doing so, the EU creates a clear distinction between the rights of the individual and the ambitions of the organization, protecting its citizens against involuntary modeling and monitoring.
Continuing on this journey, a next step for the EU should be to develop a grand narrative where ethical considerations such as privacy, transparency and accountability are foundational for sustainable, healthy and productive relationships between individuals and organizations.
There are no winners in an arms race, only those who outgrow it. The race for AI dominance spills over into a more profound question of identity, asking in what kind of society we choose to live.
The answer to this question should in itself provide the necessary justification for substantial investments in the citizen-driven approach to AI, pushing the gas pedal in the right direction. The EU can be a global leader in AI if it decides to use values as a steering wheel, not as a brake. Then it is only a matter of time before others will join the race on the right track.
*AI Researcher, yorikamphuis.nl and Professor of AI, Utrecht University of Applied Sciences
**first published in: www.weforum.org