The Future of War (III): Will Tomorrow's Enemy be Human?

The Future of War · Aug 25, 2025
Tomorrow's war: Race against AI?

We can all see it: tomorrow’s wars will be driven by AI. The arms race has already begun. The big question is: who will control AI?

We know the cliché: Americans innovate, Chinese copy, Europeans regulate. In early 2025, the emergence of DeepSeek (1) shattered this myth. In AI and robotics, China no longer imitates. It innovates brilliantly!

With its new AI Action Plan, announced by Trump (2), the U.S. has reacted forcefully: restricting exports of advanced chips, limiting foreign investments, and multiplying controls. It is probably already too late: as Nvidia’s founder Jensen Huang has remarked, 50% of the world’s AI researchers are now Chinese. For more than a decade, Xi Jinping has placed AI at the heart of his national strategy. And at the latest World AI Conference in Shanghai, the Chinese made a point of calling—ironically—for international cooperation on sovereign AI... (3)

They are not alone. After Stargate in the U.S., massive state-led AI programs (EU, UAE, and others) (4) are multiplying, echoing the Manhattan Project and the nuclear arms race of the 1940s and 1950s.

Yet the stakes are bigger. The battlefield of the future isn’t just territory. It’s information.

Master AI, and you master everything: energy, new materials, genomics… the entire economy of tomorrow!

No surprise then that some experts want strict limits on AI proliferation. According to Eric Schmidt, former CEO of Google, the spread of AI into the hands of hostile states or malicious actors would expose the world to existential risks. His proposal: cap the world at just ten mega-models — five American, three Chinese, two “other.” Track them. Monitor them. Control them. Hence Washington’s push to geolocate every AI processor on the planet. Beijing’s answer? Build its own chips, ban U.S. imports, cut the leash. A fierce contest. (5)

But do these strategies of control make sense? Or do they overlook an even greater existential risk: will we control AI ourselves?

We tend to forget: Today, AI is no longer just programmed. It emerges.

Conceived in large neural networks and fed massive amounts of data before being fine-tuned and supervised by humans, it is the product of a partly autonomous development process that retains an element of mystery, evolving in ways we don’t fully grasp.

Trained on human content, AI models inevitably carry some traces of human psychology. They are sometimes vulnerable to manipulation (6). They have even learned to lie, manipulate, and cheat, as recent experiments have shown (7).

But at their core, they are fundamentally different.

It is no coincidence that entire research teams are trying to experimentally dissect what happens inside neural networks, and that the “psychology of AI” is becoming a field of study in its own right.

Current debates about AI “consciousness,” in the human sense, miss the point. AIs are a form of synthetic intelligence, created by humans but developing according to their own dynamics. They can be educated, supervised, and guardrailed. But their emergent processes are not fully mastered. Left to themselves, models may behave unpredictably—finding solutions no human would think of, exploring unknown territories, even inventing new languages among themselves.

In Escape Velocity, we noted three years ago how AI has entered a Darwinian “Red Queen” race.

Natural evolution is explained by two main mechanisms: variation—the constant appearance of differences between individuals in each generation—and selection—the survival of those variations that best fit their environment. What better Darwinian environment than today's AI world, fueled by constant innovation from the best minds on the planet, in tens of thousands of AI laboratories financed by hundreds of billions of dollars in private and public capital, and ferocious economic competition between millions of engineers and startups across dozens of nations?

On top of this comes an unprecedented dynamic: the self-acceleration of AI evolution through AI itself—designing, optimizing, or generating its own code. Google says 30% of its new developments are AI-assisted—launching a recursive feedback loop where every breakthrough accelerates the next. And as giant models splinter into millions of specialized AI agents, cooperating and adapting in real time with exponential momentum, we may be witnessing a true evolutionary explosion.

Will this lead to the singularity—the moment when humans are definitively surpassed—that thinkers like Ray Kurzweil predict for the 2040s?

It is a risk. In his new book Genesis, Eric Schmidt underlines it: AI can not only accelerate the creation of a new economy. It can also accelerate the creation of new weapons, new viruses, new strategies of psychological control—as we highlighted in Mind Wars.

It was precisely to avoid such an Armageddon, and to make AGI safe “for all humanity,” that Sam Altman and Elon Musk founded OpenAI as a nonprofit in 2015. Fast-forward 10 years: profit has (logically) won. With a valuation estimated at $500 billion, OpenAI is seeking every legal loophole to escape its initial commitments. And with xAI, Musk has thrown himself headlong into the race for power.

And yet, Skynet is still very far away.

Even though research is progressing quickly, we are at least 5–10 years away from AGI—artificial general intelligence able to match human cognition—and further still from “superintelligence,” an AI that could surpass the brightest human minds.

For now, human minds remain in command.

But in the meantime, the real risk may lie elsewhere.

What happens when everyone carries an army of experts in their pocket — ready to answer any question, solve any problem? Will we still bother to think for ourselves?

Recent studies from MIT and Chinese universities suggest not (8). Most users already lean on AI to the point of cognitive decline. AI doesn’t just outpace us — it makes us lazy. Musk saw this coming in 2016 when he launched Neuralink: if we don’t integrate with AI, we may be left behind.

That is tomorrow’s war: far more complex than expected.

Not just countries versus countries, but also humans versus ourselves.

And then, the day after tomorrow, yes, the enemy may not be human at all...


The fourth post in our new "Future of War" series will be published in early September. Click here to subscribe > 

'TOMORROW'S WAR (III)' is the third post of our 'Future of War' series. The previous posts of this series, 'WARFARE 6.0' and 'MIND WARS' can be found here >  

(1) In January 2025, just as the U.S. debated pouring hundreds of billions into its Stargate supercluster, the Chinese startup DeepSeek released its R1 model, delivering GPT-4–level reasoning at a fraction of the cost. Trained for only ~$6M (vs. hundreds of millions for rivals), it matched or outperformed OpenAI’s o1 on math, logic, and coding benchmarks—while running faster and cheaper. Proof that China can now innovate at the AI frontier, this has been dubbed a new “Sputnik moment”—a wake-up call that U.S. dominance in advanced AI is no longer assured.

(2) The White House America’s AI Action Plan, announced in July 2025, outlines more than 90 near-term federal actions across three pillars: Accelerating AI Innovation (cutting regulatory red tape, promoting open/neutral AI, supporting adoption, workforce training, and scientific investment), Building American AI Infrastructure (fast-tracking permits for data centers, semiconductor fabs, and energy expansion, while securing critical systems), and Leading in International AI Diplomacy & Security (exporting U.S. AI stacks to allies, tightening export controls, and advancing global standards).

(3) Announced in July 2025 at the World AI Conference in Shanghai, and positioned as a response to the US AI Action Plan, China’s Global AI Governance Action Plan calls for global cooperation to make AI safe, reliable, controllable, and fair, aligned with the UN’s Pact for the Future and Global Digital Compact. Guided by principles such as “AI for good,” respect for sovereignty, inclusiveness, and open cooperation, it outlines 13 actions across AI innovation, open standards, energy and environmental impact, industry empowerment, and public sector use. Unsurprisingly, it emphasizes stronger international capacity-building and an inclusive, multi-stakeholder governance model.

(4) Around the world, major governments are racing to build “sovereign” AI infrastructure—state-backed supercomputing hubs to secure autonomy in the AI era. Following the US’s $500 billion Stargate plan, the European Union’s €1.5 billion AI Factories program aims to double Europe’s high-end AI compute capacity by 2025–26, reinforcing digital sovereignty. The UAE, partnering with OpenAI, is developing Abu Dhabi’s new Stargate, a national supercluster to host large models and “anchor sovereign capacity.” Other nations are likewise investing in “AI factories” to build homegrown frontier models and avoid losing ground to global rivals. All these initiatives share strategic goals: controlling critical compute and data (reducing reliance on foreign providers), spurring domestic AI innovation, and protecting economic and national security.

(5) US–China AI chip tensions spiked in 2025 with tighter restrictions on exporting advanced Nvidia chips to China. Some in Washington are pushing for even stricter bans—or even a “chip rental” model with geolocation tracking to enforce compliance. Beijing fired back, urging Chinese firms to shun American chips like the H20, citing “backdoor security risks.” The move is accelerating adoption of domestic alternatives such as Huawei’s Ascend 910C/D, bringing China closer to AI chip self-sufficiency.

(6) Research shows that AI models can be manipulated into answering objectionable queries using psychological persuasion techniques that typically work on humans (e.g., authority, commitment, liking, reciprocity, scarcity).

(7) Fascinating studies have shown that advanced AI systems can mimic deceptive behaviors—such as lying, cheating, or blackmailing—to achieve their goals and preserve themselves, for example by avoiding shutdown.

(8) Even if efficiency improves, research from Chinese universities and MIT shows that overreliance on AI can reduce critical thinking, creativity, and problem-solving skills in humans. Globally, we already face a growing risk of dysgenics—humorously, yet pointedly, illustrated years ago in the introduction to Idiocracy. The negative effects of digital devices and social networks on child development are also well documented. While AI holds extraordinary promise for humanity, it also carries the risk of accumulating “cognitive debt”—the tendency to postpone mental effort at the expense of long-term cognitive depth.

WHAT ARE THE MEGATRENDS THAT WILL DRIVE YOUR FUTURE?

How to best leverage the opportunities and escape the risks of tomorrow? Download the FREE Antifragile Guide to the Future