In the midst of the current tensions involving the United States, Israel, and Iran, we are witnessing a shift in how media is used. Historically, creating media, whether films, large campaigns, or coordinated messaging, required significant resources, time, and manpower. Even as social media made distribution easier, producing impactful content still demanded skill, coordination, and money. That barrier has now largely been dismantled.
What is unfolding now goes beyond a simple evolution of digital communication—it reflects a deeper change in how influence works. With AI tools, content can be generated rapidly, cheaply, and at scale, allowing a much wider range of groups to participate in what can be described as asymmetrical media engagement. In this environment, content informs, but it also provokes, frames narratives, and sometimes even taunts. The result is a media landscape that feels faster, less predictable, and more reactive, especially during moments of global tension.
Quick Summary
- AI has removed the barriers to producing media, making influence faster, cheaper, and widely accessible
- The current conflict involving the U.S., Israel, and Iran reflects an early stage of AI-driven media warfare
- “AI slop” refers to high-volume, low-depth content that shapes perception through repetition rather than substance
- The strategy of “flooding the zone” uses scale to overwhelm clarity, making truth harder to distinguish
- Real and AI-generated content now coexist, requiring individuals to actively evaluate authenticity
- Navigating this shift requires both media literacy and evolving regulations, as seen in countries like Finland and South Korea
AI Media War
If the Vietnam War (1955–1975) was the first to be widely broadcast on television, and the Iraq War (2003–2011) marked the rise of the internet as a battlefield for information, the current conflict may represent the early stages of something new—the first to unfold in the age of AI-generated media. This isn’t just about new tools—it’s changing how reality is experienced. What we see is no longer limited to what’s captured on the ground. It’s now shaped by content that can be generated, altered, and spread at scale, often within minutes—or without any real event at all.
This creates a media environment that feels unstable and, at times, disorienting. Real footage of serious events can be dismissed as fake simply because manipulated content exists alongside it. At the same time, entirely fabricated personas, like the viral Instagram account posing as a U.S. soldier named “Jessica Foster,” which gained over a million followers on the strength of AI-generated images, can attract attention before being questioned. With millions of AI-generated images now created daily, real and artificial content blend together, making authenticity something that must be actively evaluated rather than assumed.
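As a concrete illustration of what “actively evaluating” can mean in practice, the sketch below inspects an image file’s embedded metadata for provenance hints. This is only a first, weak signal: metadata is easily stripped or forged, and its absence proves nothing. The filename is hypothetical, and robust provenance checking relies on standards such as C2PA Content Credentials rather than ad-hoc inspection like this.

```python
# A minimal sketch of one "active evaluation" step: checking what metadata an
# image actually carries. Illustrative only; absence of metadata is not
# evidence of anything, and real provenance systems use signed credentials.
from PIL import Image

def metadata_hints(path: str) -> dict:
    """Collect whatever embedded metadata the file carries."""
    img = Image.open(path)
    hints = {}
    exif = img.getexif()
    # EXIF tag 0x0131 is "Software"; some generator tools write here.
    software = exif.get(0x0131)
    if software:
        hints["exif_software"] = software
    # PNG files may also carry text chunks (e.g., generation parameters).
    hints.update(getattr(img, "text", {}) or {})
    return hints

if __name__ == "__main__":
    for key, value in metadata_hints("example.png").items():  # hypothetical file
        print(f"{key}: {value}")
```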

How AI Is Reshaping Influence
Flooding the Zone: Volume Over Clarity
While political disinformation is not new, the speed and scale at which it can now be produced is unprecedented. The strategy often appears to follow a simple principle: overwhelm the environment with volume. This idea has been described as “flooding the zone,” where the goal is not necessarily to persuade through a single narrative, but to saturate the space with so much content that clarity becomes difficult to maintain.
In practice, a small piece of truth can be surrounded by layers of distortion, exaggeration, or fabrication. The sheer amount of content—both real and artificial—makes it increasingly difficult to discern fact from fiction. By the time content is verified or corrected, it has often already shaped perception.
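A back-of-the-envelope calculation makes the dynamic concrete. Assume, purely for illustration, a fixed supply of authentic reporting while synthetic output scales up around it; the share of authentic items a reader randomly encounters collapses even though nothing true was removed:

```python
# Toy arithmetic for "flooding the zone". All counts are invented for
# illustration: authentic coverage stays fixed while synthetic volume grows.
authentic_items = 100

for synthetic_items in (0, 100, 1_000, 10_000, 100_000):
    total = authentic_items + synthetic_items
    share = authentic_items / total
    print(f"{synthetic_items:>7,} synthetic -> {share:7.2%} of the feed is authentic")
```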
AI Slop: From Creation to Saturation
AI slop refers to the mass-produced, low-effort content generated rapidly by AI tools, often optimized for speed and engagement rather than depth or accuracy. This includes memes, short-form videos, images, and text designed to be consumed quickly and shared widely. The defining characteristic is not necessarily that the content is false, but that it is abundant, repetitive, and shallow in substance.
What matters is not any single piece, but the cumulative effect of volume. Traditional media required effort and editorial judgment, but AI allows for continuous generation with minimal friction. Narratives can now be reinforced through repetition alone. Over time, visibility begins to replace substance, and presence itself becomes the strategy.

Scale and Emotional Impact
The scale of AI-generated content is measurable and growing rapidly. By 2025, a significant portion of newly published web content showed signs of AI involvement, and millions of AI-generated images were being created daily. This level of output changes how influence works, shifting from carefully crafted messaging to constant exposure.
Memes and short-form content act as psychological shortcuts, compressing complex ideas into emotionally charged signals. They are easy to consume and quick to spread, often bypassing deeper analysis. As AI generates endless variations, the versions that provoke the strongest reactions tend to rise. This creates an environment where emotional impact drives visibility, making it harder to separate signal from noise.
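The selection dynamic described here can be sketched as a toy simulation: many variants of the same message compete in an engagement-ranked feed, more provocative variants are more likely to be clicked, and the feed keeps promoting whatever has performed best so far. All numbers and the ranking rule are invented for illustration; this is a sketch of the feedback loop, not a model of any real platform.

```python
# A toy model of engagement-driven selection. Each variant has a fixed
# "provocation" level; the feed surfaces the best-performing variant from a
# random candidate set, and clicks arrive in proportion to provocation.
import random

random.seed(0)

variants = [{"provocation": random.random(), "clicks": 0} for _ in range(1000)]

for _ in range(10_000):  # simulated impressions
    # The feed shows the currently best-performing variant among candidates...
    shown = max(random.sample(variants, 20), key=lambda v: v["clicks"])
    # ...and more provocative variants are more likely to be clicked.
    if random.random() < shown["provocation"]:
        shown["clicks"] += 1

top = max(variants, key=lambda v: v["clicks"])
avg = sum(v["provocation"] for v in variants) / len(variants)
print(f"average provocation: {avg:.2f}, winning variant: {top['provocation']:.2f}")
```

Even in this crude setup, the winning variant sits far above the average provocation level: visibility accrues to whatever provokes, not to whatever informs.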
Adapting to the Shift
Learning to Navigate: Finland and Beyond
Some countries have recognized the implications of this shift and have started addressing it at the educational level. Finland, for example, has included media literacy in its national curriculum for decades and has expanded it in recent years to cover AI and deepfake awareness. What stands out is how early this begins: Finnish children are introduced to media literacy concepts from a very young age, learning to question what they see, recognize bias, and understand that digital content can be constructed. The approach has measurable results. Finland has ranked #1 in the European Media Literacy Index every year since 2017, reflecting a sustained national effort to build resilience against misinformation.
This isn’t treated as a side subject, but as a core civic skill. Students are taught to evaluate sources, identify manipulated images, and increasingly, recognize AI-generated content. The focus is less on limiting exposure and more on developing judgment—helping individuals understand how media works, not just how to consume it. As AI-generated content becomes more sophisticated, this kind of education becomes even more relevant. In a world where content is abundant and easily manipulated, the ability to interpret media becomes just as important as access to it, shifting the focus from control to understanding.

Regulation and Responsibility: A Dual Approach
At the same time, individual awareness alone isn’t enough. Societies are beginning to explore regulatory approaches to the risks tied to AI-generated content. South Korea, for instance, has taken one of the most aggressive stances globally in response to a surge in deepfake-related crimes. In 2024, lawmakers passed legislation criminalizing not only the creation and distribution of sexually explicit deepfakes, but also their possession and viewing, with penalties of up to three years in prison or fines of up to 30 million won (about US$22,000).
These laws were introduced amid a sharp rise in cases. Police-reported deepfake crimes increased from 156 cases in 2021 to over 1,200 in 2024, highlighting how quickly the problem has scaled. In response, South Korea has also begun requiring labeling and watermarking of AI-generated content under broader AI regulations, aiming to make synthetic media more identifiable.
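To make the labeling idea concrete, the sketch below writes a machine-readable disclosure field into a PNG’s metadata at generation time. The field name and file are hypothetical, and a plain text chunk like this is trivially stripped, which is why serious proposals pair simple labels with more robust watermarking and signed provenance.

```python
# A minimal sketch of machine-readable labeling: embedding a disclosure field
# in a PNG text chunk. The field name is a made-up convention, not a standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (256, 256), "gray")  # stand-in for generated output

label = PngInfo()
label.add_text("ai_disclosure", "synthetic; generated 2025-01-01")  # hypothetical field

img.save("labeled.png", pnginfo=label)

# Any downstream tool can read the disclosure back out.
print(Image.open("labeled.png").text.get("ai_disclosure"))
```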
This points to a dual responsibility. Individuals need to question and interpret what they see, recognizing that digital content can be constructed and manipulated. At the same time, institutions and governments have a role in setting guardrails where harm is clear and measurable—especially as the technology continues to evolve faster than traditional systems of oversight.
Conclusion: The Tide Is Here
What we’re seeing now is the continuation of a shift that began quietly and is now fully in motion. In the midst of the current tensions involving the United States, Israel, and Iran, media itself has become part of the landscape—not just something that reports on events, but something that shapes how those events are experienced in real time. The barrier to creating influence has fallen, replaced by a system driven by speed, scale, and constant iteration.
AI-driven content isn’t a passing trend. It represents a structural change in how information is created, distributed, and understood. There isn’t a single solution to this—only a shared responsibility to pay attention, to question what we see, and to build systems that support clarity where possible. The tide is here, and it isn’t going away. The challenge now is not just navigating it, but understanding how it shapes the reality we believe to be true.

Sources and Further Reading
- AI-Generated Military Influencer Exposed After 1M Followers
- Media coverage of the Iraq War
- After decades of teaching media literacy, Finland equips students with skills to spot AI deepfakes
- Media Literacy and Education in Finland
- South Korea introduces tough AI safety laws amid deepfake and scam concerns
- South Korea to criminalise watching or possessing sexually explicit deepfakes
Looking Ahead: Building with Canada
As I prepare to relocate to Canada, I’m focused on continuing to build in a way that is both practical and meaningful. This project reflects how I approach my work—organizing clearly, building intentionally, and using AI to support the process rather than define it.
I’m interested in contributing to teams and systems where structure, collaboration, and real-world use matter—creating work that is not only functional, but genuinely useful to the people interacting with it.
