The AI Arms Race Nobody Voted For

In May 2023, a photo circulated on Chinese social media showing a new fighter jet nobody had seen before. Within hours, Western defense analysts were scrambling. The aircraft — later identified as the J-35A — was not supposed to exist yet. China had closed a gap the Pentagon had estimated would take another decade.

Nobody voted on this. No parliament debated it. No public was consulted. The decision to accelerate China’s fifth-generation fighter program was made inside a system that does not require public approval — and the response from Washington, from Brussels, from Tokyo, was made the same way.

This is what the AI arms race looks like. Not a dramatic moment. Not a declaration. A photograph on social media, a scramble in a think tank, and a budget line that quietly doubles.

The Race That Started Before Anyone Named It

The competition between the United States and China over artificial intelligence did not begin with a speech or a strategy document. It began with a realization — arriving separately in Washington and Beijing around 2017 — that AI was not simply a commercial technology. It was a source of military, economic, and political power unlike anything since the nuclear age.

In September 2017, Vladimir Putin told Russian students that whoever leads in AI “will be the ruler of the world.” The line was widely quoted. Less noticed was that by the time he said it, both Washington and Beijing had already reached the same conclusion and were acting on it.

China’s State Council released its Next Generation Artificial Intelligence Development Plan in July 2017 — a detailed roadmap for becoming the world’s leading AI power by 2030. The United States had no equivalent national strategy until 2019. That two-year gap, in a field moving as fast as AI, was significant.

What AI Actually Does in a Military Context

The phrase “AI arms race” conjures images of killer robots and autonomous terminators. The reality is simultaneously more mundane and more dangerous.

The most immediate military applications of AI are not weapons. They are systems that process information faster than humans can — surveillance networks that identify faces in crowds, logistics systems that predict equipment failures before they happen, intelligence platforms that correlate data from thousands of sources to identify patterns no human analyst could detect.

These capabilities matter enormously in conflict. The side that sees the battlefield more clearly, processes intelligence faster, and anticipates enemy movements more accurately wins — not necessarily because its weapons are better, but because its decisions are better informed and more quickly executed.

The second tier of applications — autonomous weapons, AI-guided munitions, drone swarms — is moving from experimental to operational faster than most public reporting acknowledges. Israel’s use of AI-assisted targeting systems in Gaza generated significant controversy in 2023 and 2024, with reports suggesting that AI tools were being used to generate target lists at a speed and volume that human oversight could not meaningfully review.

This is the frontier where the AI arms race is most consequential — and most dangerous.

The Chip War Underneath the AI War

In October 2022, the Biden administration issued export controls restricting the sale of advanced semiconductors to China. The decision was unprecedented in scope — effectively attempting to freeze China’s access to the most advanced chips required for AI development.

The move was a recognition that the AI arms race was ultimately a hardware race. The most powerful AI systems require specialized chips — designed primarily by Nvidia and manufactured largely by TSMC in Taiwan — that can process the enormous computational loads involved in training large AI models. Without access to these chips, China's ability to develop frontier AI systems would be severely constrained.

China’s response was immediate and strategic. Beijing accelerated investment in domestic semiconductor production, funneling hundreds of billions of yuan into companies attempting to develop chips that could substitute for American and Taiwanese products. The effort has not yet succeeded in closing the gap at the frontier. But the gap is narrowing, and the pace of Chinese semiconductor development has surprised Western analysts who predicted it would take much longer.

Meanwhile, the export controls produced unintended consequences. Allied countries — the Netherlands, Japan, South Korea — were pressured to align with American restrictions, straining relationships with China that were economically important to them. And companies in the gray zones of the restrictions found creative ways to route chips through third countries, limiting the controls’ effectiveness.

The chip war is not over. It has barely started.

The Autonomy Threshold

Somewhere in the development curve of military AI lies a threshold that nobody has formally defined but everyone in the field is aware of: the point at which autonomous systems can make lethal decisions faster than any human can meaningfully authorize them.

Current doctrine in most Western militaries requires a human “in the loop” — a person who authorizes each use of lethal force. In practice, as systems become faster and more autonomous, the human in the loop becomes a rubber stamp rather than a meaningful check. A drone swarm operating at machine speed, engaging targets identified by AI, cannot wait for a human to review each engagement without losing the tactical advantage that speed provides.

The pressure to remove or weaken human oversight comes not from malice but from competitive logic. If your adversary’s autonomous systems can engage faster than yours, you lose. The race to the autonomy threshold is driven by the same logic as every arms race in history: the fear that restraint will be unilateral.

International attempts to negotiate limits on autonomous weapons have stalled at the UN for over a decade. The states most capable of developing these systems — the United States, China, Russia, Israel — have the least incentive to agree to restrictions that would constrain their own advantage.

The Surveillance Export Problem

The AI arms race has a dimension that extends far beyond great power competition. The surveillance technologies developed for military and intelligence purposes are being exported — primarily by China, but also by Western companies — to governments around the world that use them to monitor, control, and suppress their own populations.

Huawei’s “safe city” systems, deployed in dozens of countries across Africa, Asia, and Latin America, provide governments with facial recognition networks, communication monitoring capabilities, and data integration platforms originally developed for Chinese domestic surveillance. The technology is sold as crime prevention infrastructure. It functions as political control infrastructure.

This is the AI arms race’s least visible but most pervasive consequence — the global spread of surveillance capability to governments that would not have been able to develop it independently, accelerating a worldwide trend toward authoritarian control that the technology makes newly possible.

Europe’s Position: Dependent and Aware

Europe occupies an uncomfortable position in the AI arms race. It has significant AI research capability — DeepMind in London, major university programs across the continent — but lacks the scale of investment, the data infrastructure, and the defense-industrial integration that characterizes American and Chinese AI development.

European AI regulation, embodied in the EU AI Act, has been praised as a serious attempt to govern the technology. It has also been criticized as a competitive handicap — a set of constraints that European companies must comply with while their American and Chinese competitors operate under fewer restrictions.

The deeper problem is dependency. Europe’s most advanced AI systems run on American cloud infrastructure, American chips, and American-developed foundational models. In a scenario where the AI arms race produces genuine strategic decoupling between the United States and China, Europe has not yet determined which side of that decoupling it sits on — or whether it can build sufficient independence to avoid the choice.

The Question Nobody Is Asking

The AI arms race is proceeding on the assumption that winning it is both possible and desirable — that the state which achieves AI dominance will translate that dominance into lasting strategic advantage.

This assumption deserves scrutiny.

Nuclear weapons produced dominance for approximately four years, from 1945 to 1949, before the Soviet Union tested its first bomb. The technology that was supposed to deliver permanent American strategic superiority instead produced a decades-long standoff in which both sides lived under the permanent threat of annihilation.

AI may follow a similar trajectory. The lead times between frontier capability and adversary replication are shrinking. The diffusion of AI technology to non-state actors — terrorist organizations, criminal networks, proxy militias — is already underway and cannot be stopped by export controls targeting state competitors.

The race is real. The stakes are real. But the assumption that winning it will produce security rather than a more complex and dangerous form of instability has not been tested — and by the time it is, the race will already have been run.

