Misinformation has stages

Sara-Jayne Terp
Published in MisinfoCon
May 13, 2019

Now we just need to work out what the stages should be…

misinformation pyramid

The Credibility Coalition’s Misinfosec Working Group (“MisinfosecWG”) maps information security (infosec) principles onto misinformation. Our current work is to develop a tactics, techniques, and procedures (TTP)-based framework that gives misinformation researchers and responders a common language for discussing and disrupting misinformation incidents.

We researched several existing models from different fields, looking for one that was well supported, familiar to people, and suited to the variety of global misinformation incidents we were tracking. We settled on stage-based models, which divide an incident into a sequence of stages, e.g. “recon” or “exfiltration”, and started mapping known misinformation incidents to the ATT&CK framework, which the infosec community uses to share information about infosec incidents. Here’s the ATT&CK framework, aligned with its parent model, the Cyber Killchain:

Cyber Killchain stages (top), ATT&CK framework stages (bottom)

The ATT&CK framework adds more detail to the last three stages of the Cyber Killchain. These stages are known as “right-of-boom,” as opposed to the four “left-of-boom” Cyber Killchain stages, which happen before bad actors gain control of a network and start damaging it.

Concentrating on the ATT&CK model made sense when we started this work: it was detailed, well supported, and had useful concepts, like the ability to group related techniques together under each stage. The table below is the Version 1.0 strawman framework that we created: an initial hypothesis about the stages, with example techniques that a misinformation campaign might use.

Table 1: Early strawman version of the ATT&CK framework for misinformation [1]
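To make the “group related techniques under each stage” idea concrete, here’s a minimal sketch (a Python illustration of ours, with placeholder stage and technique names rather than the strawman’s actual contents) of how such a framework can be held as data and queried:

```python
# A rough sketch of holding a stage-based framework as data: each stage
# groups related techniques, mirroring how ATT&CK groups techniques under
# tactics. Stage and technique names are illustrative placeholders, not the
# strawman's official contents.
STRAWMAN_STAGES = {
    "Recon":     ["identify target audiences", "map existing narratives"],
    "Weaponize": ["create fake personas", "develop memes and fake articles"],
    "Deliver":   ["seed content on fringe platforms"],
    "Exploit":   ["amplify via bots and sockpuppets"],
    "Control":   ["coordinate messaging across accounts"],
    "Execute":   ["push narratives into mainstream media"],
}

def stage_of(technique):
    """Return the stage that groups a given technique, or None if unknown."""
    for stage, techniques in STRAWMAN_STAGES.items():
        if technique in techniques:
            return stage
    return None

print(stage_of("create fake personas"))  # -> Weaponize
```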

This framework isn’t perfect. It was never designed to be perfect. We recognized that we are dealing with many different types of incidents, each with potentially very different stages, routes through them, feedback loops, and dependencies (see the Mudge quote below), so we created this strawman to start a conversation about what more is needed. Behind that, we started working in two complementary directions: bottom-up from the incident data, and top-down from other frameworks used to plan activities similar to misinformation campaigns, like psyops and advertising.

ATT&CK may be missing a dimension…

The ATT&CK framework has missing dimensions, which is why we introduced the misinformation pyramid. A misinformation campaign is a longer-scale activity (usually months, sometimes years), composed of multiple connected incidents — one example is the IRA campaign that focussed on the 2016 US elections. The attackers designing and running a campaign see the entire campaign terrain: they know the who, what, when, why, how, the incidents in that campaign, the narratives (stories and memes) they’re deploying, and the artifacts (users, hashtags, messages, images, etc.) that support those narrative frames.

Defenders generally see just the artifacts, and are left guessing about the rest of the pyramid. Misinformation artifacts are right-of-boom: themes seemingly coming out of nowhere, the ‘users’ attached to conversations, etc. This is what misinformation researchers and responders have typically concentrated on. It’s also what the ATT&CK framework is good at, and why we’ve invested effort in it, cataloguing campaigns and incidents and breaking them down into techniques, actors, and action flows.

misinformation pyramid
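For readers who prefer code to diagrams, here’s one way the pyramid’s layers could be sketched as a data model. The class and field names are our illustration, not an agreed schema; the point is the hierarchy, and how little of it a defender actually gets to see:

```python
# A minimal sketch of the misinformation pyramid as nested data. Class and
# field names are assumptions for illustration; the point is the hierarchy:
# a campaign contains incidents, incidents carry narratives, and narratives
# are expressed through artifacts, which is usually all a defender sees.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Artifact:
    kind: str   # e.g. "user", "hashtag", "message", "image"
    value: str

@dataclass
class Narrative:
    frame: str  # the story or meme being pushed
    artifacts: List[Artifact] = field(default_factory=list)

@dataclass
class Incident:
    name: str
    narratives: List[Narrative] = field(default_factory=list)

@dataclass
class Campaign:
    name: str   # the who/what/when/why/how live at this level
    incidents: List[Incident] = field(default_factory=list)

def visible_to_defender(campaign):
    """Flatten a campaign to its artifact layer: roughly what defenders observe."""
    return [a for i in campaign.incidents
              for n in i.narratives
              for a in n.artifacts]
```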

But this right-of-boom view covers only part of each misinformation attack. There are “left-of-boom” stages too, and although they’re harder to identify, this campaign phase also leaves key artifacts. This is the other part of our work. We’re working from the attacker’s point of view, listing and comparing the stages we’d expect them to work through, based on what we know about marketing/advertising, psyops, and other analyses. We’ve compared a key set of stage-based models from these disciplines to the Cyber Killchain, as seen in the table below.

Table 2: Comparison between cyber killchain, marketing, psyops and other models

This is a big beast, so let’s look at its components.

First, the marketing funnels. These are about the journey of the end consumer of a marketing campaign — the person who watches an inline video, sees a marketing image online, and so on, and is ideally persuaded to change their view, or buy something related to a brand. This is a key consideration when listing stages: whose point of view is this? Do we understand an incident from the point of view of the people targeted by it (which is what marketing funnels do), the point of view of the people delivering it (most cyber frameworks), or the people defending against it? We suggest that the correct point of view for misinformation is that of the creator/attacker, because attackers go through a set of stages, all of which are essentially invisible to a defender, yet each of these stages can potentially be disrupted.

Marketing funnels from Moz and Singlegrain

Marketing funnels are also “right-of-boom”: they begin at the point in time where the audience is exposed to an idea or narrative and becomes aware of it. This is described as the “customer journey”: a changing mental state, from seeing something, to taking an interest in it, to building a relationship with a brand/idea/ideology, and subsequently advocating it to others.

This same dynamic plays out in online misinformation and radicalisation (e.g. QAnon effects), with different hierarchies of effects that might still contain the attraction, trust, and advocacy phases. Should we reflect these in our misinformation stage list? We can borrow from the marketing funnel and map these stages across to the Cyber Killchain (above). By adding stages for marketing planning and production (market research, campaign design, content production, etc.), and seeing how similar they are to an attacker’s game plan, we can begin planning how to disrupt and deny these left-of-boom activities.
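As a rough illustration, that borrowing might look something like the lookup table below. The alignments are indicative only, not the authoritative mapping from Table 2:

```python
# An illustrative (not authoritative) alignment of marketing-funnel stages
# onto Cyber Killchain-style stages, with the added planning and production
# stages sitting left-of-boom.
FUNNEL_TO_KILLCHAIN = {
    # left-of-boom: added marketing planning and production stages
    "market research":    "reconnaissance",
    "campaign design":    "weaponization",
    "content production": "weaponization",
    # right-of-boom: the customer journey
    "awareness":          "delivery",
    "interest":           "exploitation",
    "consideration":      "installation",
    "advocacy":           "actions on objectives",
}

for funnel_stage, killchain_stage in FUNNEL_TO_KILLCHAIN.items():
    print(f"{funnel_stage:18} ~ {killchain_stage}")
```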

Considering the advocacy phase in relation to other misinformation models, we see it fitting the ‘amplification’ and ‘useful idiot’ stages (as noted above in Table 2). This is new thinking, and modeling how an ‘infected’ node in the system isn’t just repeating a message, but might be (or become) a command node too, is something to consider.

Developing the misinformation framework also requires acknowledging and adopting ideas from psyops, whose point of view is clear: it’s all about the campaign producer, who controls every stage, with a step-by-step list of things to do from the start through to a completed operation, including hierarchy-aware things like getting sign-offs and permissions.

Left-of-boom, psyops maps closely to the marketing funnel, with the addition of a “planning” stage; right-of-boom, it glosses over the end-consumer-specific considerations in a process flow defined by “production, distribution, dissemination”. It does, however, add a potentially useful evaluation stage. One of the strengths of working at scale online is the ability to hypothesis-test (e.g. A/B test) and adapt quickly at all stages of a campaign. Additionally, when running a set of incidents, after-action reviews can be invaluable for learning and for adjusting higher-level tactics, such as the list of stages, the target platforms, or the most effective narrative styles and assets.
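As a toy example of what that hypothesis testing involves, here’s a minimal two-proportion z-test comparing the engagement rates of two message variants; the code and the numbers are ours, invented for illustration:

```python
# A minimal two-proportion z-test: did message variant B get a higher
# engagement rate than variant A? All numbers are invented for illustration.
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return (z statistic, two-sided p-value) for the difference in rates."""
    rate_a, rate_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (rate_b - rate_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Variant A: 120 engagements from 10,000 impressions; B: 160 from 10,000.
z, p = two_proportion_z(120, 10_000, 160, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # a small p suggests B really does perform better
```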

Psyops stages (https://2009-2017.state.gov/documents/organization/148419.pdf)

As we develop misinformation-specific stage-based models and see more of them (maybe it’s something to do with all the talks our misinfosec family have given?), things like Tactics, Techniques and Procedures (“TTPs”) and Information Sharing and Analysis Centers (“ISACs”) are appearing in misinformation presentations and articles. Two noteworthy models are the Department of Justice (DOJ) model and one recently outlined by Bruce Schneier. First the DOJ model, which is a thing of beauty:

page 26 of https://www.justice.gov/ag/page/file/1076696/download

This clearly presents what each stage looks like from both the attacker (‘adversary’) and defender points of view (the end consumer isn’t of much interest here). It’s a solid description of early IRA incidents, yet is arguably too passive for some of the later ones. This is where we start inserting our incident descriptions and mapping them to stages, and where we start asking how our adversaries are exercising things like command and control. When we say “passive”, we mean this model works for “create and amplify a narrative”, but struggles to fit something like “create a set of fake groups and make them fight each other”, which has a more active, more command-and-control-like presence. This is a great example of how we can create models that work well for some, but not all, of the misinformation incidents that we’ve seen, or expect to see.
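One way to picture that mapping exercise (a hypothetical sketch, not our actual tooling) is to tag each incident description with the stages it exercises, then check which stages a given model leaves uncovered:

```python
# A hypothetical sketch of tagging incident descriptions with the stages they
# exercise, then checking which stages a given model fails to cover.
INCIDENTS = {
    "create and amplify a narrative":
        {"content production", "distribution", "amplification"},
    "create fake groups and make them fight each other":
        {"asset creation", "distribution", "command and control"},
}

# Stages covered by a made-up "passive" model of the kind described above.
MODEL_STAGES = {"content production", "distribution", "amplification"}

for name, stages in INCIDENTS.items():
    uncovered = stages - MODEL_STAGES
    verdict = "fits the model" if not uncovered else f"not covered: {sorted(uncovered)}"
    print(f"{name}: {verdict}")
```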

We have some answers. More importantly, we have a starting point. We are now taking these stage-based models and extracting the best assets, methods, and practices (what looks most useful to us today), such as testing various points of view, creating feedback loops, monitoring activity, documenting advocacy, and so on. Our overarching goal is to create a comprehensive misinformation framework that covers as much of the incident space as possible, without becoming a big mess of edge cases. We use our incident analyses to cross-check and refine this. And we accept that we might — might — just have more than one model that’s appropriate for this set of problems.

“We are trying to prove ourselves wrong as quickly as possible, because only in that way can we find progress.” ― Richard P. Feynman

Addendum: yet more models…

Ben Decker’s models look at the groups involved in different stages of misinformation, and the activities of each of those groups. This treats misinformation campaigns as a series of handoffs between groups: originators create content, command-and-control signals go out via Gab, Telegram, etc., signal receivers post that content to social media platforms and amplify its messages, and eventually those messages get picked up by professional media. This has too many groups to fit neatly onto a marketing model, and appears to be on a different axis from the psyops and DOJ models, but it still seems important.

Ben Decker misinformation propagation models

As a further axis: the stage models we’ve discussed above are all tactical, describing the steps that an attacker would typically go through in a misinformation incident. There are also strategies to consider, including Ben Nimmo’s “four Ds” (Distort, Distract, Dismay, Dismiss: commonly used IRA strategies), echoed in Clint Watts’s online manipulation generations. In infosec modelling, this would get us into a Courses of Action Matrix; since we need to get on with creating the list of stages, we’ll leave that part until next time.

Clint Watts matrix, and 5Ds, with common tactics (from Boucher) mapped to them.
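For readers who haven’t met one, a Courses of Action matrix pairs each stage with the defensive actions available against it. A placeholder sketch, in which every stage and entry is invented purely for illustration, might look like:

```python
# A placeholder Courses of Action matrix: rows are (hypothetical) misinformation
# stages, columns are defensive action types. Every entry is invented.
COURSES_OF_ACTION = {
    # stage:            (detect,                  deny,                   disrupt)
    "content creation": ("monitor known outlets", "demonetisation",       "attribution reporting"),
    "account creation": ("bot detection",         "sign-up verification", "account takedowns"),
    "amplification":    ("trend monitoring",      "rate limiting",        "downranking"),
}

for stage, (detect, deny, disrupt) in COURSES_OF_ACTION.items():
    print(f"{stage:17} detect: {detect:22} deny: {deny:22} disrupt: {disrupt}")
```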
