
The Iran war has highlighted how AI is becoming part of modern conflict, both on the battlefield and in military planning. The latest U.S.-Israeli strikes on Iran also showed that cyber tools now move alongside air power, adding a digital layer to war that can shape communication, targeting and response.
The debate over AI in war has sharpened as governments move faster than international rules. Academics and legal experts in Geneva are discussing lethal autonomous weapons and military AI procurement rules as part of a broader effort to set legal and ethical limits.
University of Pennsylvania political scientist Michael Horowitz said military technology is advancing much faster than global talks. Newcastle University researcher Craig Jones warned that the lack of agreed rules suggests AI warfare could spread quickly.
AI expands its role on the battlefield
The U.S. military already uses AI built on large language models for logistics, intelligence analysis and battlefield support, Horowitz said.
One system, Maven Smart System, uses AI for image processing and tactical support and can help speed attacks by suggesting and ranking targets. Reports have linked Maven to earlier wars and to the strikes on Iran, though many details are not public.
Some officials and analysts say AI could improve precision and lower civilian harm. But conflicts in Ukraine and Gaza, where AI has supported target identification and drone navigation, have still produced heavy civilian casualties. Jones said there is no evidence that AI reduces civilian deaths or wrongful strikes, and that the opposite may be true.
Policy disputes and the rise of AI in the Iran War
The sharpest concern is over fully autonomous weapons. Militaries may value drones that can identify and strike targets on their own. But current AI systems are not reliable enough to meet international legal standards without human oversight, Horowitz said.
That concern also appeared in a dispute between the Pentagon and Anthropic. Since 2024, Anthropic’s Claude model had supported Maven under a $200 million contract. In January, the Defense Department said government AI contracts must allow use for any lawful purpose.
Anthropic refused to remove safeguards, saying its model should not be used for mass domestic surveillance or fully autonomous weapons. On Feb. 27, President Donald Trump ordered the government to stop using Anthropic technology.
The government later signed a deal with OpenAI, which said its tools also would not be used for surveillance or fully autonomous weapons. As of March 7, Anthropic CEO Dario Amodei was reportedly back in talks with the department.
Cyber operations add a new layer to the Iran war
Cyber operations also shaped the Iran war. Reports said digital attacks accompanied the Feb. 28 operation. A hack of the BadeSaba religious app sent anti-government messages to users, and Iranian news sites were also compromised.
U.S. officials later said cyber and space operations disrupted Iranian communications and sensor networks.
Analysts say Iran is likely to respond in cyberspace. Iran has long relied on cyber operations and proxy actors, and experts warn that sectors such as finance, water and transportation could face greater risk as the conflict grows.


