
Disinformation: What boosts it – and how teams can respond

Steve Durbin
Published 24 February 2026
SC Media

With enterprise losses from disinformation projected to reach $30 billion by 2028, dismantling disinformation requires a measured, holistic strategy that addresses its three fundamentals: sources, spread, and response.

The origins of disinformation

Disinformation doesn’t come out of thin air: it has specific points of origin that we must identify so we can control its spread and impact. There are three primary instigators that carry out a disproportionate amount of narrative seeding: conspiracy theorists, political actors, and state-sponsored actors.

Conspiracy theorists are no longer fringe voices operating at the margins. Their ideas are now normalized through repetition and amplification in traditional media and across social media posts. Individuals and organizations find themselves responding to claims that feel “widely believed” long before they’re verified.

The disinformation matrix includes political actors who seek to influence public opinion for specific purposes, such as advocating policies, candidates, or ideologies, and state-sponsored actors advancing strategic geopolitical goals through deliberate narrative shaping.

Weaponization at scale

Disinformation’s velocity and scale make it an operational threat. Three drivers help scale these operations:

1. Bots that create an illusion of consensus.

AI enables threat actors to create plausible narratives and personas, the first step on the escalation ladder. They then manufacture momentum through bot networks, which account for about 51% of all web traffic. Bots make target narratives unavoidable and reinforce their “claims” through reposts, likes, threads, and supposedly independent accounts repeating the same talking points. The goal is not just to engineer sentiment, but to reengineer it.
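One crude but useful signal of this manufactured consensus is verbatim repetition: many distinct accounts posting the same text. A minimal sketch of that check, assuming posts arrive as hypothetical `(account, text)` pairs and using a threshold chosen purely for illustration:

```python
from collections import defaultdict

def find_coordinated_texts(posts, min_accounts=3):
    """Group posts by normalized text and flag any message repeated
    verbatim by at least `min_accounts` distinct accounts -- a rough
    indicator of a bot network amplifying the same talking point."""
    accounts_by_text = defaultdict(set)
    for account, text in posts:
        # Normalize case and whitespace so trivial edits don't hide repeats.
        normalized = " ".join(text.lower().split())
        accounts_by_text[normalized].add(account)
    return {
        text: sorted(accounts)
        for text, accounts in accounts_by_text.items()
        if len(accounts) >= min_accounts
    }
```

Real coordination detection would also compare posting times and near-duplicate phrasings, but even exact-match clustering surfaces the most blatant campaigns.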

2. Platform gaps and inherent nature.

Suppose a user publishes 700 posts a day on a platform. That volume alone should be an immediate red flag, irrespective of the content. But enforcement remains lax. Even when a takedown happens, teams can recreate accounts quickly and resume the operation in short order.

The platforms themselves add fuel to the fire. Recommendation systems routinely amplify sensationalist accounts, and monetization structures reward volume and time on the platform. Moderation cannot keep up. Together, these factors create ideal conditions for false narratives to outpace the truth.
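The volume check described above is simple enough to express directly. A minimal sketch, assuming a hypothetical per-day threshold (the 200 here is illustrative; a real limit would be tuned against platform baselines):

```python
from collections import Counter

DAILY_POST_LIMIT = 200  # hypothetical threshold, not a platform policy

def flag_high_volume_accounts(day_posts, limit=DAILY_POST_LIMIT):
    """Count one day's posts per account and return any account whose
    volume exceeds the limit -- e.g. 700 posts/day is a red flag
    regardless of what the posts say."""
    counts = Counter(account for account, _text in day_posts)
    return {account: n for account, n in counts.items() if n > limit}
```

Content-blind heuristics like this are cheap to run and hard to argue with, which is why the lax enforcement the article describes is notable.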

3. The human element.

Disinformation becomes a force multiplier when it appears legitimate. Humans generate legitimacy. Once a narrative crosses into influential communities, which include commentators, insiders, employees, customers, and micro-communities, it crosses the trust threshold. It’s no longer just online noise, but has the influence to shape mindsets and decisions. At this point, disinformation can lead to reputational damage, regulatory pressure, consumer panic, or internal churn.

An effective response

To respond properly to the disinformation challenge, we need decision-grade clarity on the most critical narratives, the speed at which they spread, and their capacity to influence decision-making.

The question we need to ask: Do we want to disrupt or control? If we de-platform an account spreading canards, it’s a disruptive measure, as the damage has already been done. Control, on the other hand, builds resilience, which contains the spread and enables a strategic response.

Here’s a practical checklist for leaders:

  • Identify narratives that can disrupt the organization.

The organization’s intelligence function must surface information that can be acted upon. This means identifying narratives that have the potential to create disruption, spread too quickly, or be accepted as truth by the target audience. Establish processes for platform enforcement, and design strategic counters backed by evidence, provenance, and credibility rather than forceful denials or cookie-cutter statements.

  • Stress-test the company against prevailing narratives.

Put leading stakeholders through mock scenarios that could include executive statements bolstered by deepfakes, the use of synthetic voices to establish believability, fabricated product reviews, or false geopolitical statements involving institutional heads. Clearly define a set of actions, rights, and validators before such incidents occur.

  • Invest in authenticity.

Deploy tools that support traceability, watermark official assets, and monitor for synthetic media. Also, work with industry groups and platform trust-and-safety teams to spot emerging narratives before they become mainstream. Understand the “why” behind the campaign.
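One lightweight form of traceability is cryptographic signing of official assets, so a circulating copy can be checked against what the organization actually published. A minimal sketch using Python’s standard-library HMAC support, assuming a hypothetical managed signing key (production systems would use proper key management and, for media, standards-based provenance rather than a bare HMAC):

```python
import hashlib
import hmac

SIGNING_KEY = b"replace-with-a-managed-secret"  # hypothetical key for illustration

def sign_asset(asset_bytes, key=SIGNING_KEY):
    """Compute an HMAC-SHA256 provenance tag over an official asset's bytes."""
    return hmac.new(key, asset_bytes, hashlib.sha256).hexdigest()

def verify_asset(asset_bytes, tag, key=SIGNING_KEY):
    """Check whether a circulating copy matches the published original."""
    return hmac.compare_digest(sign_asset(asset_bytes, key), tag)
```

A verification failure doesn’t prove malice, but it does tell responders that the asset in circulation is not the one the organization released.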

  • Measure impact.

Don’t measure disinformation by its noise level: Analyze whether it changes real behavior. Watch for signals like unusual search spikes, a sudden rise in support tickets, shifts in employee sentiment, or changes in sales velocity. Temper the team’s responses according to operational impact rather than social media noise.
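Several of these behavioral signals reduce to the same question: is today’s number unusual against its own history? A minimal sketch of a spike check, assuming daily counts (support tickets, branded searches, and so on) and a z-score threshold chosen for illustration:

```python
import statistics

def is_spike(history, today, z_threshold=3.0):
    """Flag today's count as anomalous if it sits more than
    z_threshold standard deviations above the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        # Flat history: any increase at all is unusual.
        return today > mean
    return (today - mean) / stdev > z_threshold
```

Wiring alerts to a threshold like this, rather than to mention counts on social media, keeps the team’s response proportionate to operational impact.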

Think of disinformation as a structural feature of our current information overload. Invest in tools and training to help identify the source of disinformation, slow the spread to limit its impact, and respond with verified facts and prepared processes. Build resilience over reaction so the company controls the terms of engagement.
