The latest round of hostilities between Israel and Iran is not being fought with missiles alone, but with algorithms, deepfakes, and AI-generated chaos. Unlike past conflicts, where physical damage could be measured in craters and body counts, the June 2025 escalation unfolds in a fog of machine-manipulated perception, where neither side fully controls the narrative and reality is buried under layers of computational propaganda.
This is Fifth-Generation Warfare (5GW), where artificial intelligence doesn’t just assist warfighting, it becomes the battlefield itself.
How This War Is Different: AI vs. AI in the Shadows
The Israel-Iran conflict of 2025 is not merely a continuation of their decades-long shadow war—it is a fundamental evolution in how wars will be fought, perceived, and manipulated from this point onward. The four-day India-Pakistan war of May 2025 was merely a prelude to this phenomenon.
Unlike past conflicts, where information moved at the speed of television broadcasts and physical evidence could be independently verified, today’s battlespace is dominated by AI-generated fog, where truth is not just contested but computationally manufactured.
From State Narratives to Algorithmic Floods
In the past, governments controlled war narratives through state TV and a censored press. During the 2006 Lebanon War, Hezbollah’s Al-Manar and Israel’s Army Radio were the dominant voices. Today, AI-generated content drowns out traditional media within minutes. For example:
- After Israel’s June 2025 strike on Isfahan, pro-Iranian bot networks flooded X (Twitter) with AI-generated images of untouched military sites, contradicting satellite evidence.
- Israel retaliated with AI-dubbed videos of Farsi-speaking IDF spokespersons, a tactic refined from Ukraine’s use of AI to mimic Russian generals.
This isn’t just propaganda—it’s information saturation, designed to overwhelm fact-checkers and seed doubt.
Deepfakes Outrun the Truth
In 2014, debunking a fake photo of a Gaza strike took hours. In 2025, AI-generated videos outpace verification:
- A deepfake of Israeli PM Netanyahu “admitting defeat” spread to 2.3 million views on Telegram before being exposed.
- Iran’s “Fars Generator” AI studio produced dozens of variations of fake IDF surrender announcements, forcing Israeli officials to issue real-time denials—a distraction from actual military operations.
The result? The public no longer waits for facts—it consumes the most emotionally resonant narrative first.
Cyber Warfare: From Hackers to AI Swarms
Gone are the days of painstaking, hand-engineered operations like Stuxnet. Now:
- Iran’s CyberAv3ngers group uses AI to automate phishing attacks, scaling operations from hundreds to millions of attempts per day (Check Point Research).
- Israel’s alleged “Sentinel” AI malware autonomously adapts to Iranian firewall changes, making cyber defenses obsolete within hours.
This is no longer espionage—it’s machine-vs.-machine combat, where the human operators are just supervisors.
Casualty Reporting in the Age of AI Fabrication
In 2008, Gaza death tolls were disputed but verifiable via UN observers. In 2025:
- Israel’s AI “Hasbara bots” downplay Palestinian casualties by auto-generating “context” tweets about Hamas human shields.
- Iran’s state media uses AI to erase domestic dissent, scrubbing protest footage from social media with deepfake “normalcy” clips.
The real toll is unknowable—not because data is hidden, but because it is actively polluted by synthetic content.
Table 1: The Shift from Traditional to AI-Enhanced Conflict
| Aspect | Past Wars (Pre-2020s) | 2025 Israel-Iran War |
| --- | --- | --- |
| Information Control | State-run TV, limited social media influence. | AI-generated content floods all platforms within minutes. |
| Disinformation Speed | Hours or days to debunk fake claims. | Deepfakes spread faster than fact-checkers can react. |
| Cyber Warfare | Hackers manually breach systems. | AI automates phishing, false flags, and bot armies. |
| Casualty Reporting | Relied on ground witnesses. | Both sides use AI to fabricate or suppress damage reports. |
Why This Matters Beyond the Middle East
Ukraine 2022 was a preview; Israel-Iran 2025 is the blueprint. The US and China are studying this conflict to refine their own AI warfare doctrines. Democracies are at a clear disadvantage: open societies struggle to counter AI disinformation without resorting to censorship.
The next war may not have any “truth” at all. If AI can simulate entire battles, how will civilians know what’s real? This isn’t just a war between Israel and Iran—it’s a battle for the future of reality itself. And right now, the machines are winning.
The AI Tools Powering This War
The Israel-Iran conflict has become a testing ground for next-generation information warfare, where AI is not just a tool but a battlefield in itself. Both nations have developed sophisticated systems to manipulate perception, automate attacks, and control narratives—each exploiting technological asymmetries to gain an edge.
Israel’s AI Playbook: Precision Propaganda and Cyber Dominance
Israel’s use of real-time AI translation tools to deliver propaganda in Farsi and Arabic (and even Urdu) marks a seismic shift from traditional psychological operations. AI-dubbed videos of IDF commanders speaking flawless Arabic, work that previously required human translators, now deploy within minutes of an event, ensuring message control before Iran can counter. This erodes Iran’s narrative dominance in the Arab world, where Israel was historically seen as an outsider.
Algorithmic Amplification: Manufacturing Consensus
Israel doesn’t just spread its message—it engineers social media ecosystems to favour its narrative. “Sachlav”, the IDF’s PSYOP AI, identifies high-traffic hashtags and floods them with pro-Israel content, drowning out dissenting voices. For example, during the June 2025 strikes, pro-Israel posts on X (Twitter) saw a 300% engagement spike in Persian, Arabic and even Urdu language spaces—an anomaly suggesting bot-assisted trending.
AI Cyber Strikes: The Rise of Autonomous Malware
Israel’s alleged “Sentinel” AI malware represents a leap from Stuxnet’s hand-crafted code to self-learning cyber weapons. The tool reportedly studies network behaviours, adapts to patches, and even generates fake personas to trick Iranian sysadmins into granting access. A 2024 attack on Iranian fuel stations reportedly used polymorphic code that changed its signature mid-operation, a hallmark of machine-driven hacking. This matters because Israel’s approach is quality over quantity: fewer, smarter attacks that maximize disruption while minimizing exposure.
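None of the tools named here are publicly documented, but the core polymorphic trick is well understood: re-encode the same payload under a different key on each generation, so its byte-level signature changes while its behaviour does not. A minimal, harmless sketch (the payload is an inert placeholder string, not real malware logic):

```python
import hashlib

def mutate(payload: bytes, key: int) -> bytes:
    """Re-encode the same payload under a different single-byte XOR key,
    yielding a new byte signature while the decoded content is unchanged."""
    assert 1 <= key <= 255
    return bytes([key]) + bytes(b ^ key for b in payload)

def decode(variant: bytes) -> bytes:
    """Recover the original payload from any mutated variant."""
    key, body = variant[0], variant[1:]
    return bytes(b ^ key for b in body)

payload = b"BENIGN-DEMO-PAYLOAD"             # inert stand-in for real logic
v1 = mutate(payload, 0x5A)                   # "generation 1"
v2 = mutate(payload, 0xC3)                   # "generation 2"

sig1 = hashlib.sha256(v1).hexdigest()
sig2 = hashlib.sha256(v2).hexdigest()
print(sig1 != sig2)                          # signature differs per variant
print(decode(v1) == decode(v2) == payload)   # behaviour is identical
```

This is exactly why signature-based antivirus fails against polymorphic code: a hash or byte pattern matched on one variant never appears in the next.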
Iran’s AI Strategy: Chaos, Deepfakes, and Digital Guerrilla Warfare
Deepfake Warfare: Undermining Trust in Reality
Iran’s “Fars Generator” studio produces hyper-realistic deepfakes at industrial scale. A fabricated video of an IDF officer “confessing” to war crimes was shared 2.1 million times before being debunked, long enough to sway global opinion. The psychological effect is the “liar’s dividend”: even after exposure, doubt seeps into every real video.
AI-Generated Hacktivists: The Phantom Army
Iran creates entire synthetic hacktivist groups (e.g., “CyberAv3ngers”) to muddy attribution. These personas claim attacks they didn’t conduct, confusing defenders. In June 2025, a fake “Israeli hacker” leak was traced to Iranian IPs via its AI-generated English text. The advantage is plausible deniability: even when caught, Iran shrugs: “These are independent patriots.”
Bot Swarms: The Digital Human Wave
Iran’s “Spider’s Web” networks deploy AI-powered bots that mimic real users. During a blast near the US embassy in Iraq, 38,000 bot accounts inflated the event’s perceived impact. These bots now evade detection by imitating real-user typing patterns, a trick learned from Russian troll farms. This matters because Iran’s strength is volume and persistence: flooding the zone with so much noise that truth becomes irrelevant.
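The typing-pattern evasion is worth unpacking. Naive bot detectors flag machine-perfect, uniform keystroke timing; evasive bots instead draw delays from human-like skewed distributions with occasional pauses. A toy sketch of the difference (the distribution parameters are illustrative assumptions, not measurements from any real botnet):

```python
import random
import statistics

def human_like_delays(text: str, seed: int = 0) -> list:
    """Inter-keystroke delays (seconds) drawn from a log-normal
    distribution, with occasional longer 'thinking' pauses: the kind
    of variability that evasive bots simulate to pass naive detectors."""
    rng = random.Random(seed)
    delays = []
    for _ in text:
        d = rng.lognormvariate(-2.0, 0.5)        # ~0.14 s median keystroke gap
        if rng.random() < 0.05:                  # occasional longer pause
            d += rng.uniform(0.5, 2.0)
        delays.append(d)
    return delays

def naive_bot_delays(text: str) -> list:
    """The classic giveaway: perfectly uniform machine timing."""
    return [0.05] * len(text)

msg = "breaking news from baghdad"
human = human_like_delays(msg, seed=42)
bot = naive_bot_delays(msg)

# Timing variance separates the two populations:
print(statistics.pstdev(bot))        # 0.0 for the naive bot
print(statistics.pstdev(human) > 0.01)
```

A detector looking only at timing variance catches the naive bot instantly and the simulated human never, which is precisely the arms race the article describes.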
Table 2: The AI tools employed by Israel and Iran in the June 2025 conflict
| Country | AI/Disinformation Tactics | Confirmed Tools Used |
| --- | --- | --- |
| Israel | AI-translated propaganda in Farsi/Arabic; algorithmic amplification of IDF success stories; AI-assisted cyber strikes (e.g., automated malware). | “Sachlav” (IDF PSYOP AI), “Iron Dome” missile-targeting AI, “Sentinel” disinfo tracker. |
| Iran | Deepfake videos of Israeli officials; AI-generated “hacktivist” personas; bot swarms on Telegram/X. | “Fars Generator” (deepfake studio), “Spider’s Web” bot networks, AI voice cloning tools. |
The Hidden War: How AI Tools Are Tested Against Each Other
Israel’s “Iron Dome AI” and Iran’s stealth drone swarms are also testing each other. Israel’s missile-targeting AI now predicts Iranian drone flight paths using machine learning, boosting interception rates to 96% (up from 85% in 2021). Iran retaliates with AI-driven evasion algorithms, forcing Israel to constantly update its models: a real-time digital arms race.
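The actual targeting models are classified, but the underlying idea of path prediction can be illustrated with the simplest possible version: fit a line to a drone's observed positions and extrapolate the next one. A toy one-dimensional sketch (real systems use far richer state estimators over 3D tracks):

```python
def predict_next(track: list) -> float:
    """Least-squares linear fit over (time, position) points;
    extrapolates the position at the next time step."""
    n = len(track)
    ts = range(n)
    mean_t = sum(ts) / n
    mean_x = sum(track) / n
    cov = sum((t - mean_t) * (x - mean_x) for t, x in zip(ts, track))
    var = sum((t - mean_t) ** 2 for t in ts)
    slope = cov / var
    # extrapolate to t = n (one step past the last observation)
    return mean_x + slope * (n - mean_t)

# drone advancing ~2 units per time step, with slight jitter
observed = [0.0, 2.1, 3.9, 6.0, 8.1]
print(round(predict_next(observed), 2))   # → 10.05
```

The cat-and-mouse dynamic follows directly: an evasion algorithm only has to make the drone's future position deviate from whatever model class the defender fits, forcing the defender to retrain on richer manoeuvre data.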
The AI Fact-Checking War
Israel’s “Sentinel” disinfo tracker uses NLP to detect and counter Iranian fake news. Iran’s bots, in turn, actively poison its training data by flooding fact-checking tools with nonsense, causing AI classifiers to misfire. The outcome is a closed loop of deception, where each side’s AI trains on the other’s lies, making neutral analysis nearly impossible.
Set aside these particular conflicts, and taking sides, and view this from an academic, realistic perspective: AI warfare is here to stay. Democracies have become vulnerable, and Israel’s tech edge is offset by Iran’s willingness to weaponize chaos, a tactic harder to counter in open societies.

The “Splinternet” looms. As AI-generated content proliferates, nations may Balkanize the internet to control narratives, killing the idea of a global web.

The next 9/11 could be virtual. Imagine AI faking a “nuclear missile alert” in a US official’s voice. Would we even know it was fake before panicking? This isn’t just about Israel and Iran; it’s about who controls reality in the Algorithmic Age. And right now, both sides are winning just enough to keep the world guessing.
The Reality vs. The Reported: A Fact-Checked Breakdown
The table below sets the facts as reported against the facts as we checked them, and it reveals visible patterns with real-time implications. Analysing the major news channels and even official spokespersons shows a pattern of asymmetric truth-telling: Israel leans on verifiable military outcomes (satellite imagery, interception rates), while Iran employs strategic denial even in the face of evidence, likely to maintain domestic morale and its deterrence posture.
Both sides, however, avoid mentioning external actors (the US for Israel; Russia for Iran) to control their narratives, and cyber operations are consistently underreported, suggesting both nations prioritize plausible deniability in this domain.
The gap between confirmed damage and official claims highlights how modern conflicts are fought as much in the information space as on the ground. Neutral observers must rely on technical verification (satellites, cybersecurity firms) to pierce through politicized narratives.
Table 3: Physical Damage: Confirmed vs. Claimed

| Event | Confirmed Damage (Satellites/OSINT) | Israel’s Claim | Iran’s Claim | Omissions |
| --- | --- | --- | --- | --- |
| Israeli strike on Isfahan (June 2025) | Radar site destroyed (Planet Labs); no nuclear facility damage (IAEA). | “Precision strike on military target.” | “No damage, failed attack.” | Cyber pre-strike on Iranian sensors not acknowledged. |
| Iranian drone swarm attack | 93% intercepted (US CENTCOM); 1 Israeli airbase lightly damaged. | “Total victory, no threats remain.” | “Massive retaliation, heavy Israeli losses.” | US warship role in interceptions downplayed by Iran. |
| Hacking of Israeli hospitals | 2 hospitals disrupted (Check Point); no deaths, data wiped. | “Iranian terrorism against civilians.” | “Legitimate cyber operation.” | Possible Russian malware toolkit used (unconfirmed). |
Table 4: Digital Deception: Confirmed Deepfakes & AI Ops

| Fake Content | Platform | Who Deployed It? | Purpose | Fact-Check Status |
| --- | --- | --- | --- | --- |
| “Netanyahu admits defeat” video | Telegram/X | Iran (Fars Generator) | Demoralize Israelis. | Debunked (AI lip-sync artifacts). |
| “IDF soldiers surrendering” | TikTok | Iran (AI voice clone) | Fake military collapse. | Recycled 2023 footage. |
| “US base in Iraq burning” | | Iranian bots | Inflame anti-US sentiment. | AI-generated smoke overlay. |
The Great Power Testing Ground: How US and Russia-China Are Involved
US-supplied AI targeting systems (used in strikes) are being combat-tested in real time, while Google’s and Meta’s algorithms favour Israeli content (a Western bias in moderation). US Cyber Command is said to be assisting with the Iranian sensor disruptions (officially neither confirmed nor denied).
The Russia-China AI playbook involves backing Iran in this AI-driven 5GW conflict. Russian AI disinformation tools (originally built for Ukraine) are now allegedly repurposed for Iran, and Chinese facial-recognition tech supposedly helps Iran track dissidents amid wartime protests. Both block UN sanctions while supplying cyber tools to Tehran.
The Unreported War: What’s Hidden
The scale and speed of AI output is outstripping human intelligence; as a result, fact-checking has become almost impossible, because AI-generated fake content spreads 10x faster than human debunking (DFRLab). Telegram and TikTok moderation is effectively non-existent for Persian and Arabic AI fakes. And it should surprise no one that neither Israel nor Iran admits to hacking: plausible deniability is key.
Israel’s unspoken cyber strikes reportedly include a “Stuxnet 2.0”: rumours are rife on the dark web that new ICS malware is set to target Iranian nuclear facilities (no proof yet). Meanwhile, AI-assisted false flags framing Hezbollah for attacks are reportedly on the rise, used to justify further Israeli operations.
Iran’s digital repression is multipronged: AI-powered surveillance to crush domestic anti-war protests, and “patriotic trolling”, flooding Persian forums with pro-regime bots.
A War With No End—And No Truth
This is not just Israel vs. Iran. It is a laboratory for AI warfare, where the US tests AI-enhanced deterrence while Russia and China refine censorship-resistant disinformation. Civilians, meanwhile, are drowned in algorithmic propaganda. The real damage isn’t just in shattered buildings; it’s in a shattered consensus on reality itself.
Sources used to develop this analysis include:
- Satellite intel: Planet Labs, Sentinel Hub
- Cyber alerts: @vxunderground, Check Point Research
- Disinfo tracking: DFRLab, @ShadowChasing
- OSINT strike analysis: @CalibreObscura, @AuroraIntel