Cognitive Warfare: will our minds be the 21st-century battlefield?
Jul 21, 2025
We know the adage: in war, the first casualty is truth.
But what if the manipulation of truth were actually the ultimate weapon?
Twenty-five centuries ago, Sun Tzu already emphasized this in The Art of War: psychology is the key to gaining the upper hand. History bears it out, rich with countless PsyOps designed to manipulate enemy populations... or friendly ones! (1)
In an era when kinetic warfare is back, alongside killer drones and hypersonic missiles, might psychological operations lose their importance?
What if, on the contrary, psychological warfare has never been so essential, now that new technologies allow for hyper-targeted influence—not on the masses, but on each individual?
We all remember the Cambridge Analytica scandal that broke in 2018.
At its core: alleged electoral manipulation during Trump's first winning campaign and the Brexit referendum by a political consulting firm (in fact a subsidiary of a defense contractor specializing in PsyOps) that used big data and personalized psychographic profiles to run political influence campaigns on Facebook.
In retrospect, the whole affair was grossly exaggerated by the media. The technology at that time wasn't really mature, and subsequent investigations showed that the operations' influence could only have been minimal, if not nonexistent (2).
But what if we were just now reaching this technological maturity?
Weak signals are piling up:
- With generative AI, manipulation tools are now multimodal. Text, audio, video—they can fake or distort any type of content with uncanny realism (3).
- These tools are smart: they can hold conversations, reason independently, and mimic human behavior so well they've started passing Turing tests (4).
- They're autonomous: with agent-based technologies, they can now operate at scale—even in swarms—targeting thousands or millions simultaneously.
This opens revolutionary perspectives: What if the real battlefield of tomorrow lies inside our minds?
- In the developed world, never have individuals been so connected, transparent, and thus vulnerable to influence operations.
- Never has technology been so capable of hyper-personalized adaptation to each person's needs and deep psychology.
- Never have states—or even political movements—had such means to manipulate our cognitive environment, even creating alternate realities.
More than a decade ago, it became possible to predict an individual's behavior from their digital footprints.
Today, it's not far-fetched to imagine that Google, Meta, or Baidu may sometimes know more about us than we consciously know about ourselves.
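To make this concrete, here is a minimal, purely illustrative sketch of that kind of psychographic modeling: a logistic regression that tries to predict a personality-like trait from a matrix of page "likes". Everything in it is synthetic and hypothetical (the users, the pages, the trait, the numbers); it only shows why a handful of digital traces can be statistically revealing.

```python
# Toy sketch of psychographic prediction from digital footprints.
# All data below is synthetic; no real users, pages, or traits.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 5_000, 300

# Sparse binary matrix: which of 300 hypothetical pages each user "liked".
likes = rng.binomial(1, 0.05, size=(n_users, n_pages))

# Invented ground truth: a binary trait (say, an extraversion proxy)
# driven by a small subset of "signal" pages.
signal_pages = rng.choice(n_pages, size=10, replace=False)
logit = likes[:, signal_pages].sum(axis=1) - 0.5
trait = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X_train, X_test, y_train, y_test = train_test_split(
    likes, trait, test_size=0.2, random_state=0
)

# Fit a plain logistic regression and measure how well likes alone
# predict the trait on held-out users.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AUC for predicting the trait from likes alone: {auc:.2f}")
```

The point is not the toy model itself but the mechanism: once enough footprints correlate with a trait, prediction, and thus targeting, becomes a routine machine-learning exercise.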
The potential impact is enormous.
Much has been said about the recent Israeli PsyOp during the 12-day war: personalized ultimatum calls to dozens of Iranian officers, threatening them and their families with assassination unless they publicly disavowed the regime and fled immediately.
A brilliant operation. But imagine it automated by AI, run tens or even hundreds of thousands of times in parallel, and hyper-personalized to the individual psychology and family environment of each target.
Today, this is already technically possible.
And this is likely the kind of deep military revolution we should be expecting—quietly brewing beneath the noise about drones and space wars.
Everywhere, PsyOps are at the heart of thinking about tomorrow's battlefield and sixth-generation warfare, to the point that the cognitive domain is now considered the sixth battlefield, after land, sea, air, cyber, and space.
Everywhere, military investments are multiplying to influence minds:
- Divide and weaken adversary populations before war.
- Demotivate and disorganize the enemy during conflicts.
- Deconstruct narratives and rewrite the history of the vanquished after the war.
And in psychological warfare, the offense will always have the advantage.
What future does this promise us?
A future where we are no longer sure of reality!
Already, as Elon Musk put it, "If you talk to someone who only gets their information from legacy media, they're living in an alternate reality."
Imagine tomorrow's world with neurotech, where AI could take direct control of some of our senses: an infinite world of cognitive bubbles that could create as many mental prisons as there are individuals, and completely govern our lives.
AI will be at the heart of this war.
It's only just beginning. And it could make 1984 look like a quaint fairy tale...
In today's context of rising tensions, calls for fighting spirit—or, conversely, resistance—are multiplying everywhere.
To survive the future—and for those who still want to remain masters of their own fate—it may be more critical than ever to guard the last frontier: Our own minds.
'MIND WARS' is the second post of our 'The Future of War' series. The previous post of this series, 'WARFARE 6.0', can be found here >
The third post in the series will be published mid-August. Click here to subscribe >
(1) We know how PsyOps have always been a major part of war, used to deceive and weaken enemies, whether in one-off operations like the Allied "Operation Fortitude" that misled Nazi Germany about the D-Day landing site, or long-term efforts like the KGB's programs to undermine the West during the Cold War, revealed in the 1980s by defector Yuri Bezmenov. We often forget that PsyOps also target allied populations: to justify entering wars (via false-flag attacks, like the staged "Polish" raid on the Gleiwitz radio station that Germany used to justify invading Poland), to fabricate war crimes (like the Hill & Knowlton-orchestrated fake news about a baby-incubator massacre, commissioned by Kuwait to secure U.S. support after Iraq's invasion), or to manufacture imaginary threats (like the iconic propaganda about Iraq's supposed "weapons of mass destruction" used to justify the Second Gulf War).
(2) In 2016, Cambridge Analytica, a newly founded subsidiary of a 'cyberwar private contractor' tied to the British and American military-industrial complex (Strategic Communication Laboratories), promised to revolutionize political communication by leveraging the big-data expertise of its main shareholder, billionaire financier Robert Mercer, who was closely linked to the alt-right. After modest beginnings in the 2016 Republican primary, using psychographic ad targeting on social media built on data obtained from Facebook by a Cambridge University researcher, Cambridge Analytica capitalized on Trump's victory to claim a breakthrough in influence communication. In 2018, it faced a massive, orchestrated media campaign on both sides of the Atlantic (The Guardian and The New York Times), fueled by 'revelations' from a former employee, presented as a whistleblower but in fact a disgruntled competitor, who accused it of rigging the election. Exploited both by data-privacy advocates and by Democratic and British 'Remain' political networks seeking to challenge the validity of Trump's election and of Brexit, what was just a minor feud between two political communication firms spiraled into a global scandal, ultimately costing Facebook a $5 billion FTC fine. As later analyses showed, the technology used was far from mature and had no significant electoral impact at the time. But it revealed to the public practices that have since gone underground, and that are now far more effective.
(3) While easily detectable a few years ago, deepfakes (AI-generated synthetic media that convincingly mimic real people's voices or videos) have achieved a high degree of realism in recent months. Virtual TV presenters are now routinely used on sales channels, particularly in Asia. A growing number of companies encourage executives to create their own virtual replicas to amplify their online presence. Unsurprisingly, deepfakes are increasingly exploited by cybercriminals for financial fraud, identity theft, extortion, and social engineering, leveraging their hyper-realistic nature to bypass authentication systems or deceive individuals, with real-world scams costing companies tens of millions of dollars. Analysts such as Gartner predict that by 2028, combating disinformation may consume 50% of marketing and cybersecurity budgets. Needless to say, deepfakes are now very commonly used in PsyOps by state actors, political groups, or malicious entities to spread disinformation, undermine trust, or sway public opinion.
(4) The Turing Test, proposed by Alan Turing in 1950, evaluates whether an AI can mimic human conversation convincingly enough to be mistaken for a human. While several AIs have demonstrated human-like performance for over a decade, GPT-4.5 and LLaMA 3.1 passed a formal three-party Turing Test in a March 2025 study, being identified as human in 73% and 56% of interactions, respectively. Additionally, Grok 4 Heavy recently scored 44.4% on Humanity's Last Exam (HLE) with tool use, leading some analysts to claim that AI now 'surpasses PhD-level expertise' across multiple disciplines, suggesting we are closer to achieving Artificial General Intelligence (AGI) than ever before.