On October 7, 2023, within hours of the Hamas attack on Israel, two entirely different wars began — one on the ground in southern Israel, and one on your phone screen. And depending on which algorithm was feeding you content that day, you may have seen almost nothing of one of them.
This is not a conspiracy. It is a system — built by engineers in California, optimized for engagement, and utterly indifferent to the difference between a verified atrocity and a fabricated one. The algorithm does not have politics. It has metrics. And the metrics reward outrage, confirmation, and emotional intensity regardless of whether the content producing those reactions is true.
The result is that modern warfare is now fought on two fronts simultaneously — the physical battlefield and the information environment. And on the second front, the algorithm is the most powerful weapon in the arsenal.
The Engagement Trap
Every major social media platform — Facebook, X, TikTok, YouTube — uses recommendation systems designed to maximize the amount of time users spend on the platform. These systems are trained on behavioral data: what you click, how long you watch, what you share, what makes you stop scrolling.
The data is unambiguous about what drives engagement. Content that provokes strong emotional responses — anger, fear, moral outrage, tribal solidarity — outperforms content that informs, contextualizes, or complicates. A video of an airstrike performs better than an analysis of why the airstrike happened. An image of a child casualty generates more shares than a report on the ceasefire negotiations that might prevent the next one.
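To make the mechanism concrete, here is a deliberately simplified sketch of what an engagement-optimized ranking objective looks like. The signal names, weights, and numbers are invented for illustration and are not any platform's actual values. The structural point is that nothing in the objective asks whether a post is true.

```python
from dataclasses import dataclass

# Hypothetical engagement signals a feed ranker might predict for one post.
# All weights and numbers below are illustrative assumptions, not real platform values.
@dataclass
class PredictedEngagement:
    p_click: float        # predicted probability the user clicks
    watch_seconds: float  # predicted watch time
    p_share: float        # predicted probability the user shares
    p_comment: float      # predicted probability the user comments

WEIGHTS = {"p_click": 1.0, "watch_seconds": 0.05, "p_share": 4.0, "p_comment": 3.0}

def engagement_score(pred: PredictedEngagement) -> float:
    """Linear engagement objective: truth appears nowhere in it."""
    return (WEIGHTS["p_click"] * pred.p_click
            + WEIGHTS["watch_seconds"] * pred.watch_seconds
            + WEIGHTS["p_share"] * pred.p_share
            + WEIGHTS["p_comment"] * pred.p_comment)

# An emotionally charged clip versus a contextual analysis piece (made-up numbers).
airstrike_clip = PredictedEngagement(p_click=0.30, watch_seconds=45, p_share=0.12, p_comment=0.08)
analysis_piece = PredictedEngagement(p_click=0.08, watch_seconds=20, p_share=0.01, p_comment=0.01)

print(engagement_score(airstrike_clip))   # ~3.27, ranked to the top of the feed
print(engagement_score(analysis_piece))   # ~1.15, buried
```

Under any objective shaped like this, the provocative clip wins every time, and it wins whether or not it is accurate.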
Platforms did not design this to favor any particular side in any particular conflict. They designed it to make money. The political consequences — the distorted information environment that results — are externalities that platforms have consistently declined to internalize.
Gaza and the Algorithm at Scale
The conflict in Gaza that began in October 2023 became the most extensively documented war in history, unfolding in real time, and simultaneously one of the most algorithmically distorted.
Multiple investigations by journalists, researchers, and advocacy groups documented systematic disparities in how content moderation was applied across the conflict. Meta — the parent company of Facebook and Instagram — acknowledged errors in content moderation that disproportionately affected Arabic-language content related to Palestinian perspectives. The company attributed these errors to automated systems that flagged certain Arabic phrases as violating community standards when they did not.
On the other side, pro-Israeli content was amplified in some contexts and suppressed in others, depending on which platform, which country, and which moment in the conflict’s trajectory. The inconsistency was not the product of a coherent editorial policy. It was the product of automated systems making millions of decisions per second with imperfect training data and no genuine understanding of context.
The practical effect was that different users — in different countries, with different browsing histories — experienced entirely different versions of the same conflict. Not different interpretations of shared facts, but different facts, different images, different casualty figures, and different framings of who was responsible for what.
State Actors and the Information Weapon
The algorithm’s vulnerability to emotional content is not a secret. State actors and their proxies have built entire operations around exploiting it.
Russia’s Internet Research Agency, the troll farm indicted in the Mueller investigation, was not primarily in the business of producing sophisticated propaganda. It was in the business of producing emotionally charged content that the algorithm would amplify for free. Fake accounts, inflammatory posts, divisive memes: content designed not to persuade but to provoke, to divide, and to exhaust the information environment with noise until credible signal became impossible to find.
This model has been replicated by state actors across multiple conflicts. During the war in Sudan, social media operations backing the paramilitary Rapid Support Forces (RSF) and the Sudanese Armed Forces (SAF) both attempted to shape international perception of the conflict through coordinated inauthentic behavior: networks of accounts posting similar content simultaneously to create the appearance of organic sentiment.
The operations are not always sophisticated. They do not need to be. The algorithm does the amplification work. A coordinated network needs only to produce content that gets picked up by genuine users — who share it without verifying it — and the algorithm takes it from there.
The Verification Collapse
Traditional journalism operated on a verification model: report what you can confirm, attribute what you cannot, and distinguish clearly between the two. This model was imperfect in practice, but it created a standard against which performance could be measured and failures could be identified.
The algorithmic information environment has no equivalent standard. Content that is false spreads at the same speed as content that is true — faster, in fact, because false content is often more emotionally provocative and therefore more algorithmically favored.
A 2018 study from MIT’s Media Lab found that false news stories on Twitter reached 1,500 people roughly six times faster than true ones. The researchers found that false stories were more novel and provoked stronger emotional reactions: exactly the characteristics that recommendation algorithms are designed to reward.
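A toy branching model makes the compounding visible. The parameters below are assumptions chosen for illustration, not figures from the MIT study: the only difference between the two runs is how often exposed users reshare, a rough stand-in for emotional provocativeness.

```python
def reach_after_generations(initial_seeders: int, avg_followers: int,
                            share_probability: float, generations: int) -> int:
    """Toy branching model: each exposed user reshares with a fixed probability.
    Real cascades are far messier; this only shows how small edges compound."""
    reached = 0
    sharers = initial_seeders
    for _ in range(generations):
        exposed = sharers * avg_followers
        reached += exposed
        sharers = int(exposed * share_probability)
    return reached

# Same seed accounts, same audience size per account; only the reshare rate differs.
print(reach_after_generations(10, 200, share_probability=0.02, generations=4))  # 170,000 reached
print(reach_after_generations(10, 200, share_probability=0.06, generations=4))  # 3,770,000 reached
```

A threefold edge in the reshare rate yields a reach gap of more than twenty to one after four sharing generations. That is the kind of asymmetry the algorithm hands to whichever content provokes most.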
In a conflict environment, this dynamic is lethal — not metaphorically but literally. False information about the location of civilians has directed violence toward them. False casualty figures have inflamed populations and triggered retaliatory attacks. False flag operations — events staged to be blamed on the other side — depend on the information environment’s inability to verify claims before they have already done their political work.
Who Controls the Narrative Controls the War
Military strategists have a term for this: information operations. The goal is not to win the physical battle alone but to win the story of the battle — to control what the international community believes happened, who was responsible, and what the appropriate response should be.
Russia’s information operations around its invasion of Ukraine were among the most extensive in modern military history. The narrative that NATO expansion had forced Russia’s hand, that the Ukrainian government was controlled by Nazis, that civilian casualties were staged — these claims were seeded across social media platforms, amplified by sympathetic media ecosystems, and repeated by political figures in Western countries who had absorbed them from the information environment without examining their origin.
None of this prevented Ukraine from receiving significant Western support. But it created enough ambiguity, enough “both sides,” enough noise to limit the speed and scale of that support — which is precisely what information operations are designed to do.
The Platform Accountability Gap
Social media platforms are not neutral infrastructure. They are editorial systems — making millions of decisions per second about what content to amplify, suppress, label, or remove. The decisions are made by algorithms, but the algorithms reflect choices made by humans about what to optimize for.
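One way to see the editorial character of these systems is that the thresholds live in configuration written by people. The sketch below is hypothetical; the field names and numbers are assumptions, not any platform's actual policy. But every value in it is a human decision about what gets amplified, labeled, demoted, or removed.

```python
# Illustrative moderation policy expressed as configuration. Every threshold is
# an editorial choice made by a person, even though a classifier applies it at scale.
# All names and numbers are hypothetical, not any platform's real rules.
POLICY = {
    "remove_if_violation_score_above": 0.95,
    "label_if_misinfo_score_above": 0.80,
    "demote_if_borderline_score_above": 0.60,
}

def moderation_decision(violation_score: float, misinfo_score: float,
                        borderline_score: float) -> str:
    """Map classifier outputs for one post to an action using the configured thresholds."""
    if violation_score > POLICY["remove_if_violation_score_above"]:
        return "remove"
    if misinfo_score > POLICY["label_if_misinfo_score_above"]:
        return "label"
    if borderline_score > POLICY["demote_if_borderline_score_above"]:
        return "demote"
    return "amplify"

print(moderation_decision(violation_score=0.40, misinfo_score=0.85, borderline_score=0.70))  # "label"
```

Move any threshold and the information environment millions of people see moves with it. That is editorial power, whoever writes the number.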
The political and legal framework for holding platforms accountable for these choices remains severely underdeveloped. In the United States, Section 230 of the Communications Decency Act provides platforms with broad immunity from liability for content published by their users. In the European Union, the Digital Services Act has begun to impose transparency and accountability requirements — but enforcement is slow and penalties have not yet reached a scale that changes platform behavior.
Meanwhile, the platforms themselves have reduced investment in trust and safety teams, content moderation, and misinformation research following financial pressures and political criticism from both left and right. The information environment in 2025 is more polluted, not less, than it was five years ago.
What You Can Do With This
Understanding the system does not make you immune to it. The emotional responses that algorithms exploit are not weaknesses unique to the uninformed — they are features of human cognition that affect everyone regardless of media literacy.
But awareness creates friction. Before sharing content about an active conflict, the question worth asking is not “Does this feel true?” but “Where did this come from, and who benefits from me believing it?”
The algorithm is optimized to make that question feel unnecessary. The content arrives pre-loaded with emotional urgency that crowds out the pause required for verification. That urgency is not an accident. It is the mechanism.
The wars on your screen are real. The people dying in them are real. But the version of those wars that the algorithm serves you has been shaped by systems that have no interest in your understanding — only in your attention.
Those are not the same thing.

