The race to regulate AI in warfare
In today’s Finshots, we explain why AI in warfare doesn’t have guardrails the way nuclear and biological weapons do.
But here’s a quick sidenote before we begin. This weekend, we’re hosting a free 2-day Insurance Masterclass that helps you build real financial security by understanding health and life insurance the right way.
📅 Tomorrow (Saturday), 7th March at 11:00 AM: Life Insurance
How to protect your family, choose the right cover amount, and understand what truly matters during a claim.
📅 Sunday, 8th March at 11:00 AM: Health Insurance
How hospitals process claims, common deductions, the mistakes buyers usually make, and how to choose a policy that won’t disappoint you when you need it most.
👉🏽 Click here to register while seats last.
Now onto today’s story.
The Story
Last week, Anthropic stepped back from a $200 million deal with the United States Department of Defense over a long-standing dilemma — how far AI should be allowed to go in warfare. The company insisted on strict limits preventing its AI, Claude, from being used for mass surveillance or fully autonomous lethal weapons.
Their argument?
Such systems should never make kill decisions without meaningful human oversight.
When the Pentagon pushed for broader access that would allow the technology to be used for any lawful military purpose, Anthropic refused and exited the negotiations. Rival lab OpenAI later moved ahead with a separate deal with the Pentagon. The contrast highlighted a growing divide in Silicon Valley over whether AI companies should impose hard ethical red lines on military use of their technology.
At first, this might look like just another corporate disagreement between tech companies and the military. But it touches on a much bigger question: who decides the rules when a powerful new technology enters the battlefield?
You see, for most of modern history, whenever humanity created a particularly dangerous weapon, the world eventually responded by writing rules around it.
The suffering of wounded soldiers on European battlefields in the 19th century eventually led to the first Geneva Convention protecting both injured soldiers and medical personnel. The horrors of trench warfare and chemical attacks during World War I pushed countries to adopt agreements restricting chemical weapons. And after the widespread civilian suffering and prisoner abuse during World War II, nations came together to create the modern Geneva Conventions, a set of international rules that define how wars should be fought, including protections for civilians, prisoners of war and the wounded.
It was a pattern as clear as day: whenever warfare evolved, so did the rules around it.
These rules didn’t stop wars. They simply established guardrails for how they could be fought — preventing countries from using any and all means against one another.
But today, a new military technology is emerging that doesn’t fit neatly into any existing rulebook: artificial intelligence.
Imagine a battlefield command centre flooded with information — satellite images streaming in, drone footage updating every second, radio intercepts piling up faster than analysts can process them.
Now imagine an AI system sitting in the middle of it all.
Within seconds, it scans the imagery, flags suspicious movements, highlights possible targets and suggests the next course of action. Tasks that would normally take human analysts hours or even days can suddenly happen almost instantly.
Militaries around the world are already experimenting with systems like these.
And that’s where the real debate begins.
Because some of the technologies being developed today go a step further. Instead of merely assisting commanders, they could eventually identify and attack targets independently. Researchers refer to these systems as Lethal Autonomous Weapon Systems, or LAWS — weapons that could potentially select and strike targets without direct human control.
Over a year ago, the United Nations flagged the dangers of LAWS in modern warfare. Its concern was that AI systems could eventually be used to identify and attack targets without direct human intervention. In other words, the system itself would decide who is a threat and who isn’t.
In policy circles, these technologies are sometimes referred to as “killer robots”. The term sounds dramatic, but it simply refers to weapons that could use artificial intelligence to select and strike targets on their own.
That’s where things start getting complicated. Because the laws of war were written with the basic assumption that humans make battlefield decisions.
We’ll explain this with a scenario. If a soldier violates the rules of war, responsibility is usually clear. The individual who carried out the attack can be investigated. Commanders who gave unlawful orders can be held accountable.
But autonomous weapons complicate that idea of responsibility.
If an AI system misidentifies a target or causes civilian casualties, who exactly is to blame? The military commander who deployed the system? The engineers who wrote the algorithm? Or the machine itself?
It’s a question the current rulebook doesn’t clearly answer.
That’s why governments have been debating whether the world needs new rules for autonomous weapons. Between now and September, over 120 countries will discuss new rules and regulations for LAWS.
Even the chair of the negotiations has warned that time is running out. If governments wait too long, he said, technological developments could overtake the rules meant to control them. Unlike nuclear or chemical weapons, AI doesn’t rely on rare materials or specialised factories. Much of it is software. And software tends to spread quickly.
But agreeing on those rules is easier said than done. Here’s why.
Right now, there are three broad ideas on the table.
The first is an outright ban on autonomous weapons. This would mean machines wouldn’t be allowed to operate independently at all, especially when it comes to life-or-death decisions in war. It would follow the same logic the world eventually applied to chemical and biological weapons.
And then there’s the second possibility: do nothing new at all. Some governments argue that existing laws of war already cover this technology. The same rules that apply to any weapon today, including the protections laid out in the Geneva Conventions, should apply to AI systems as well.
The third option is a middle ground. Instead of banning these systems outright, countries could require what experts call “meaningful human control”. In practice, that would mean AI systems could analyse battlefield data, suggest targets or recommend actions. But the final decision to launch an attack would always rest with a human commander.
Which brings us back to Anthropic.
The company was founded by former OpenAI researchers who believed powerful AI systems needed clear guardrails. That philosophy also shaped its decision to step away from the Pentagon deal.
Anthropic wasn’t opposed to working with the military. But it was unwilling to remove restrictions that could allow its AI to be used for mass surveillance or fully autonomous weapons.
In many ways, that’s the same question the rest of the world is now grappling with: where exactly should the guardrails for AI in warfare be drawn?
For centuries, the pattern has been clear. New weapons emerge, wars reveal their dangers, and only then does the world write the rules.
Artificial intelligence may mark the first time humanity is trying to write those rules before disaster forces its hand. Unlike traditional weapons, which are controlled by whoever wields them, AI systems still carry safeguards designed by their creators. At least for now, those guardrails remain in the hands of the companies building the technology.
Until then…
If this story made you feel like Skynet shouldn’t become a reality, share it with your friends, colleagues or even strangers on WhatsApp, LinkedIn and X.