From Gaza to Ukraine, A.I. Is Transforming Modern Warfare

From Gaza to Ukraine, countries are increasingly relying on AI to process intelligence, guide drones, and accelerate combat operations, raising concerns about civilian harm and accountability.

In a significant report released in June 2025, UN Secretary-General António Guterres warned that AI is becoming a “domain-defining force” in the military sphere, fundamentally altering how armed forces gather information, conduct operations, and make life-and-death decisions.

The report, titled “Artificial intelligence in the military domain and its implications for international peace and security”, outlines how AI is already embedded across multiple layers of military operations.

According to the report, AI tools are being used for target analysis, generating strike recommendations at unprecedented speed. While this allows commanders to process vast amounts of data, the UN cautions that it raises serious concerns about proportionality and meaningful human oversight.

Artificial intelligence is no longer a futuristic military concept; it is already shaping how wars are fought, targets are chosen, and decisions are made. AI is also being used in autonomous navigation, with uncrewed systems capable of guiding final attack phases even under electronic jamming. The report notes that while this improves accuracy, it shifts critical judgement away from human operators.
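
How that handoff might look in software can be sketched in a few lines: the drone follows operator commands while its datalink is alive, and falls back to an onboard tracker when jamming cuts the link. Every class, value, and threshold below is a hypothetical illustration, not a description of any fielded system.

```python
# Hypothetical sketch of autonomous terminal guidance under jamming.
# The drone obeys operator commands while the link is up, then hands
# control to an onboard tracker when the link goes silent.
from dataclasses import dataclass

@dataclass
class Command:
    heading: float  # operator-requested heading, in degrees

class DatalinkStub:
    """Stand-in for a radio link that jamming may sever."""
    def receive(self) -> Command | None:
        return None  # None models a jammed or lost link

class OnboardTrackerStub:
    """Stand-in for an on-drone vision model locked onto a target."""
    def estimate_heading(self) -> float:
        return 90.0  # pretend the tracker sees the target due east

def next_heading(link: DatalinkStub, tracker: OnboardTrackerStub) -> tuple[float, str]:
    cmd = link.receive()
    if cmd is not None:
        return cmd.heading, "operator"  # a human is still in the loop
    # Link lost: judgement shifts from the operator to the onboard model,
    # which is exactly the oversight gap the UN report flags.
    return tracker.estimate_heading(), "autonomous"

heading, authority = next_heading(DatalinkStub(), OnboardTrackerStub())
print(f"steering {heading:.0f} deg under {authority} control")
```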

Several governments have announced AI-driven defensive systems, including air defences that can autonomously detect, track, and intercept incoming threats. Meanwhile, AI-assisted ground robots are already deployed for reconnaissance, logistics, and combat-related tasks.

Drones and the New Face of Combat

Uncrewed aerial vehicles, from military-grade drones to modified commercial models, have become central to modern warfare. On today’s battlefields, drones are used to scan terrain, identify targets with precision, and conduct strikes.

AI allows these systems to operate with reduced human input, including in swarm formations, where multiple drones coordinate autonomously. 
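
The force-multiplying logic of a swarm can be illustrated with a toy allocator: one operator issues a single set of waypoints, and a coordination layer, here a deliberately simple greedy rule, assigns each drone the nearest unclaimed one. The names and the allocation rule are assumptions for illustration, not how any real swarm is built.

```python
# Toy swarm coordination: one operator action tasks many drones at once.
# A greedy allocator stands in for the real coordination layer.
import math

def assign_waypoints(drones: dict[str, tuple[float, float]],
                     waypoints: list[tuple[float, float]]) -> dict[str, tuple[float, float]]:
    """Give each drone the nearest waypoint no other drone has claimed."""
    remaining = list(waypoints)
    assignment: dict[str, tuple[float, float]] = {}
    for name, position in drones.items():
        if not remaining:
            break
        nearest = min(remaining, key=lambda wp: math.dist(position, wp))
        assignment[name] = nearest
        remaining.remove(nearest)
    return assignment

# A single command distributes three drones across three points.
print(assign_waypoints(
    {"drone-1": (0, 0), "drone-2": (5, 5), "drone-3": (10, 0)},
    [(1, 1), (9, 1), (6, 6)],
))
```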

In defensive settings, AI-based systems are also being deployed for border surveillance, combining camera feeds, radar, and sensor data. As the UN report notes, modern warfare increasingly demands dominance not just on land, sea, or air, but in data and algorithms.
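
The fusion step such systems rely on can be shown with a minimal sketch: independent detections from a camera, a radar, and a ground sensor are merged into one track confidence. The noisy-OR combination rule and the example values are assumptions for illustration, not a description of any deployed system.

```python
# Minimal multi-sensor fusion sketch: combine per-sensor detection
# confidences into a single score using a noisy-OR rule.
def fuse_detections(confidences: dict[str, float]) -> float:
    """Probability that at least one sensor's detection is genuine."""
    p_all_false = 1.0
    for sensor, p in confidences.items():
        p_all_false *= 1.0 - p  # chance this sensor's hit is spurious
    return 1.0 - p_all_false

readings = {"camera": 0.6, "radar": 0.7, "seismic": 0.3}
print(f"fused confidence: {fuse_detections(readings):.2f}")  # 0.92
```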

Gaza: AI-Assisted Targeting

A new kind of war has been unfolding in Gaza since 2023, one driven not only by missiles and drones, but by computer algorithms. A 2025 report by the Foundation for Political, Economic and Social Research (SETA), titled “Deadly Algorithms: Destructive Role of Artificial Intelligence in Gaza War,” highlights a troubling shift, revealing how Israel’s expanding use of artificial intelligence in military operations is transforming the conduct of warfare.

The report said Israel has turned Gaza and parts of the West Bank into a “digital prison” through constant monitoring, using AI-based facial recognition and surveillance tools such as Blue Wolf, Red Wolf, and Wolf Pack. These systems collect biometric and behavioral data on Palestinians in real time, tracking social media activity, phone records, and daily routines. AI then draws on this data to help choose targets, turning daily life into a source of military intelligence.

According to multiple investigations, the Israel Defense Forces (IDF) have relied on AI-assisted systems to analyse surveillance data and generate large volumes of potential human and structural targets.

Israeli intelligence gave a striking early illustration of this shift in 2020, when it assassinated Iranian nuclear scientist Mohsen Fakhrizadeh in a highly sophisticated operation. A remote-controlled, AI-assisted machine gun, operated from more than 1,000 miles away, opened fire on Fakhrizadeh as he travelled in a convoy toward his home.

One early system, Habsora (Hebrew for “The Gospel”), was reportedly used during hostilities in Gaza as early as 2021. More recently, reports by +972 Magazine and the online journal Eurasia Review detailed the use of an AI program known as Lavender, developed by Israel’s elite Unit 8200.

According to +972 Magazine, Lavender designated as many as 37,000 Palestinians as suspected militants. A related system, called “Where’s Daddy?”, tracked individuals’ movements. Intelligence officers told the publication that human review was minimal.

“I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added value as a human, apart from being a stamp of approval,” one intelligence officer told +972.

The report said the system was known to have an error rate of around 10 percent, meaning some individuals selected may have had no militant links.
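
Combining the published figures gives a stark back-of-envelope calculation. The sketch below simply multiplies the numbers reported above; treating one officer’s 20-second review as typical across all designations is an assumption, not something the reports establish.

```python
# Back-of-envelope arithmetic from the figures reported by +972 Magazine.
designations = 37_000   # people designated by Lavender
error_rate = 0.10       # reported error rate
review_seconds = 20     # one officer's stated review time per target

print(f"implied misidentifications: {designations * error_rate:,.0f}")  # 3,700
print(f"implied total review time: {designations * review_seconds / 3600:,.0f} hours")  # 206
```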

The IDF has defended its use of AI. “The Israeli military uses AI to augment the decision-making processes of human operators. This use is in accordance with international humanitarian law,” said Magda Pacholska of the TMC Asser Institute, quoted by Eurasia Review.

The reports also explain how the IDF used Habsora, or “The Gospel,” to target buildings and structures suspected of harboring militants. According to reports, the system draws on millions of items of data, producing target lists more than 50 times faster than a team of human intelligence officers could. It was used to strike 100 targets a day in the first two months of the Gaza fighting, roughly five times more than in a similar conflict there a decade ago.

Ukraine: A Live Testing Ground for AI Warfare

Technology has also been critical to Ukraine’s resistance since Russia’s full-scale invasion in February 2022. Facing a vast disparity in manpower and resources, Kyiv turned to AI-driven systems to compensate.

Ukraine has since emerged as a testing ground for advanced military AI, particularly in drone warfare. One such system is Swarmer, which allows a single operator to control multiple drones simultaneously.

“AI-powered drones can do in seconds what would take a human several hours, simply because we are slow to process a large volume of information,” said Serhii Kuprienko, founder and CEO of Swarmer, in an interview with Politico.

“The swarm is effective because one experienced drone pilot can work effectively with dozens of drones at the same time,” he added.

The role of uncrewed systems was formally recognised in February 2024, when President Volodymyr Zelenskyy launched Ukraine’s Unmanned Systems Forces, a dedicated military branch for aerial, ground, and maritime drones.

Ukraine’s Deputy Prime Minister and technology chief Mykhailo Fedorov earlier said AI enables drones to strike even without communication links.

“AI can help lock in on targets and then automatically — without communication, or in conditions of suppression by the enemy’s electronic warfare systems — make it possible for the drone to hit the target,” he said in 2024.

Russia, meanwhile, has used AI-enabled systems to jam Ukrainian drones, disrupt satellite navigation, and deploy kamikaze drones capable of autonomously tracking targets. Moscow has also developed tactical systems like Svod, which fuse satellite and battlefield data.

Global AI Militarisation

Other major powers are also accelerating the integration of AI into their armed forces.

China has openly embraced a strategy it calls “Intelligentized Warfare”, testing drones capable of completing attack missions even after losing contact with operators. Taiwan’s leadership has cited lessons from Ukraine while pledging to expand its own drone capabilities.

The United States has expanded its use of AI under Project Maven. According to a February 2024 Bloomberg report, US forces used AI tools to help identify targets for strikes in Iraq, Syria, and Yemen following the October 2023 Hamas attacks.

During their 2025 crisis, India and Pakistan also relied heavily on drones in cross-border operations, including loitering munitions and kamikaze drones. The conflict marked a turning point in South Asia’s information battles, becoming the first major confrontation in the region in which AI-generated content significantly shaped public perception.

One of the most prominent examples was a deepfake video of Pakistan’s Prime Minister Shehbaz Sharif that circulated widely online, falsely portraying him as conceding defeat and expressing frustration over the absence of support from China and the UAE. In reality, the original footage showed Sharif praising the Pakistan Air Force for its response to India’s Operation Sindoor.

UN Warns of Civilian Risk

Humanitarian organisations and UN agencies have repeatedly warned that AI could increase civilian harm and blur responsibility.

“Humanity’s fate cannot be left to an algorithm,” Guterres told the UN Security Council in September 2025. “Humans must always retain authority over life-and-death decisions.”

In 2025, the United Nations identified three broad categories of risk associated with the growing military use of artificial intelligence.

1. Technological risks: The United Nations Institute for Disarmament Research (UNIDIR) warned that “if an AI system has not encountered a certain scenario in training data, it may respond unpredictably in the real world,” adding that biased algorithms could misidentify civilians as combatants (a toy illustration of this failure mode follows the list below).

2. Security risks: AI has the potential to accelerate the pace of conflict. The UN Office for Disarmament Affairs (UNODA) cautioned against the emergence of “flash wars,” in which algorithm-driven escalation intensifies a crisis faster than humans can intervene.

3. Accountability risks: While international law holds states and individuals responsible for military actions, the Secretary-General’s June 2025 report noted that AI may “obfuscate the linearity of this process,” complicating the attribution of responsibility.
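
UNIDIR’s first warning, referenced in the list above, can be made concrete with a toy classifier: trained on a narrow distribution, it still returns a confident label for an input far outside anything it has seen, unless a simple distance check flags the case for human review. The model, features, and threshold are assumptions for illustration only.

```python
# Toy illustration of the out-of-distribution risk UNIDIR describes.
import numpy as np

rng = np.random.default_rng(0)
# Training data: one tight feature cluster per class (0 and 1).
train_x = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(2, 0.5, 100)])
train_y = np.array([0] * 100 + [1] * 100)
class_means = [train_x[train_y == c].mean() for c in (0, 1)]

def classify(x: float) -> tuple[int, float]:
    """Nearest-mean label, plus distance as an out-of-distribution signal."""
    dists = [abs(x - m) for m in class_means]
    label = int(np.argmin(dists))
    return label, dists[label]

for x in (-2.1, 7.5):  # familiar input vs. one far outside training data
    label, dist = classify(x)
    note = "OK" if dist < 1.5 else "OUT OF DISTRIBUTION: defer to a human"
    print(f"input {x:+.1f} -> class {label}, distance {dist:.1f} ({note})")
```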

Alongside these warnings, global efforts to regulate military AI have intensified. Key initiatives include the Responsible AI in the Military Domain (REAIM) Summit, first held in 2023, which brought together more than 60 countries to discuss the responsible development and use of AI in military contexts. The 2024 REAIM summit, held in Seoul, South Korea, produced a “Blueprint for Action” emphasising human-centric AI development and human control over decisions related to nuclear weapons.

UN Secretary-General António Guterres has also renewed his call for a global ban on lethal autonomous weapon systems, machines capable of taking human lives without human oversight, describing them as “politically unacceptable” and “morally repugnant.” He has urged member states to establish clear regulations and prohibitions on such systems by 2026.

As experts warn that future conflicts will be shaped as much by data dominance and cognitive warfare as by conventional firepower, governments worldwide must find ways to regulate a technology that is already outpacing international law.
