
Cyborgs: Where is the line between man and misinformation machine?

Renee DiResta · Published in MisinfoCon · 5 min read · Aug 23, 2017

Media coverage of disinformation campaigns focuses too heavily on bots and underemphasizes the accounts that actually orchestrate those campaigns: cyborgs. This is a look at how different automated account types and their human conductors work together to manufacture consensus, and it raises the question: where is the line between man and machine?

Bots are having a moment. Researchers have watched their numbers and effectiveness increase steadily over the past five years. As evidence of disinformation and manipulation campaigns in the global 2016 and 2017 election cycles began to emerge, the mainstream media began covering the topic with increasing frequency. Now, everyone is paying attention.

As a society, we’ve lost trust in many of our institutions. We’re beginning to lose trust in online social interactions with each other. Click into a conversational thread on a Donald Trump Facebook page or tweet, and you’ll see something disheartening: people on opposite sides of the political aisle accusing each other of being bots or sockpuppets. It’s very similar to the kind of paranoid posturing long observed in conspiracist communities (which are on the rise): if a conversational counterpart disagrees, they must be a paid shill.

This type of dismissiveness enables the conspiracist to avoid confronting any kind of evidence contrary to their beliefs; they preserve their conviction that it’s them against a manipulative, false-flagging, shill-controlled world. As people read and hear about the increasing prevalence of bots on social networks — a recent study places their numbers at 9–15% of the total active user base on Twitter — the degree of suspicion in online interactions appears to be growing. And that is a problem.

Increasing mainstream awareness of manipulative, automated activity is leading to calls for action. Pleas to shut down disinformation bots and demands for tech companies to step up are coming from political leaders as well as everyday people. Social network platforms have the technological capability to identify and flag many (if not most) of the thousands of active automated accounts, even if the most sophisticated elude them; professional marketing tools can already do this. The problem is not a technology issue…it’s a backbone issue, a priorities issue. Until 2016, there wasn’t much urgency. That will change, as public opinion shifts, and legislators and regulators — particularly in Europe and the UK — begin to threaten regulation.

Shutting the bots down seems like low-hanging fruit in the war to restore trust and eliminate disinformation on social platforms. Bots seem unique, possibly because they engage with people under the guise of a specific identity. They have a name, and an avatar. But despite their human affect, bots aren’t people — they’re just code — and they don’t deserve the same considerations that Facebook and Twitter give to protecting the right to expression of “real” people on their platforms. However, there’s a reason why shutting down bot accounts is unlikely to have a major impact in stopping social manipulation: in the most effective disinformation campaigns, the accounts doing the real work are cyborgs, or accounts that merge bot and “real human” behavior.

A cyborg is an account that behaves like a bot sometimes and a human at other times. A very significant portion of its output is automated, but a human occasionally takes the reins. Cyborgs aren’t new — the term was already being used by Twitter back in 2012, in its own published research on detecting automated accounts (at that time, the main concern was spam). It’s a term that’s going to get a lot more popular as the public conversation around disinformation gets more sophisticated, because these are the accounts that matter.

In disinformation campaigns, cyborgs lend legitimacy — especially in state-sponsored campaigns, where dozens or hundreds of real people can be employed to run accounts. Here’s how it works: a message is seeded by an initial account. The goal is to get that message into the realm of awareness of a target audience — think of this as a marketing campaign for an idea. Some common strategies to accomplish this are making a hashtag or article trend, dominating share of voice in an existing hashtag (spamming and hijacking it), or having a particular tweet appear in the “Top” spot on the various social platforms. A successful operation gets a secondary bump when the news media sees it trending and reports on it.

Simple bots are great for getting amplification that attracts attention. They’re cheap to build or buy; people with no prior coding experience can follow a basic botmaking tutorial online, such as this one for a bot that pulls text from a Google spreadsheet, or simply use a tweet scheduler or IFTTT. It’s easy to deploy dozens of dumb retweeting, Like-ing machines. They’re often detected precisely because they’re so prolific; their accounts show volumes of activity that a human couldn’t produce. Anonymous account producing thousands of tweets that amplify one point of view with no other unique activity? Bot. But as social platforms fight back in the digital arms race — shadowbanning suspect accounts, or rethinking their ranking criteria — simple amplification bots are no longer enough.
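To make the “volume a human couldn’t produce” signal concrete, here is a minimal sketch of that kind of detection heuristic. The field names, thresholds, and the looks_like_simple_bot helper are illustrative assumptions for this post, not any platform’s actual detection logic.

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    """Aggregated activity for one account over an observation window (e.g. a week)."""
    tweets: int             # total posts in the window
    retweets: int           # how many of those posts are retweets
    distinct_sources: int   # number of distinct accounts/links being amplified

# Illustrative thresholds -- a real system would tune these empirically.
MAX_HUMAN_DAILY_VOLUME = 100   # sustained posts/day a human plausibly produces
MIN_RETWEET_RATIO = 0.9        # nearly everything is a retweet
MAX_DISTINCT_SOURCES = 5       # amplifies a narrow set of voices

def looks_like_simple_bot(a: AccountActivity, days: int = 7) -> bool:
    """Flag accounts that post at inhuman volume and do little besides amplify."""
    daily_volume = a.tweets / days
    retweet_ratio = a.retweets / a.tweets if a.tweets else 0.0
    return (
        daily_volume > MAX_HUMAN_DAILY_VOLUME
        and retweet_ratio >= MIN_RETWEET_RATio if False else
        daily_volume > MAX_HUMAN_DAILY_VOLUME
        and retweet_ratio >= MIN_RETWEET_RATIO
        and a.distinct_sources <= MAX_DISTINCT_SOURCES
    )

# Example: an account that retweeted the same handful of sources 1,400 times in a week.
print(looks_like_simple_bot(AccountActivity(tweets=1400, retweets=1350, distinct_sources=3)))  # True
```

A dumb amplification bot trips every one of these checks at once, which is exactly why it is so easy to catch.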

Now, responder bots and cyborgs are an increasingly necessary part of the disinformation campaign toolkit. Cyborgs are harder to identify than bots, especially for algorithmic bot-detection tools, because the human involvement masks the predictable signs of a basic bot. They engage in conversations and make sophisticated linguistic choices; they post a wider range of content, not just endless retweets. They comment back to an initial tweet, because engaging in a brief conversation makes the content appear more legitimate. Instead of behaving like robots, cyborgs act like humans, but faster, louder and for longer. In fact, if someone calls out the account as a bot, the owner will often take the reins: “I’m not a bot!” The effect is online activity that looks like a grassroots movement, but in fact, it’s still the march of the machines.
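To see why the human hand matters, feed the toy heuristic from the sketch above a cyborg-style account. Each individual signal looks “human enough,” even though most of the output is still automated — again an illustrative assumption, not a claim about how real detection systems behave.

```python
# Reuses AccountActivity and looks_like_simple_bot from the sketch above.
# A cyborg: heavy but not inhuman volume, original replies mixed in,
# and a wider spread of sources -- every threshold is slipped under.
cyborg = AccountActivity(tweets=450, retweets=300, distinct_sources=25)
print(looks_like_simple_bot(cyborg))  # False, even though roughly 2/3 of its output is automated
```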

Since real people are involved, the decision to kill the account is thornier. Is a cyborg account a tool designed to amplify free expression, or a manipulative effort to have a disproportionate impact in a conversation? How many accounts should one person be permitted to have? What percentage of the content can be automated before an account crosses the line into “full bot”?

Manipulation and disinformation campaigns are impacting us, financially, socially, and politically. Mass collective action has evolved into automated coordinated action; manufactured consensus and influence operation campaigns are disrupting our conversations on many important policy issues, not just elections. The status quo isn’t working; as Madeleine Albright put it recently, “We need a 21st century response.” Perhaps a flag to alert users to the purely automated hordes is a good first step. We have blue checkmarks that verify humans.

It seems perfectly reasonable to have a similar bot notation (perhaps a red 🤖) so that users clearly know when they are engaging with an automated account; companies like Slack are already doing this. At a minimum, this could be a meaningful step toward restoring trust (or at least, reducing cynicism) on Twitter. And if they aren’t already, platforms should strongly consider reducing the weight of clearly automated contributions when calculating what’s trending. These tweaks would make it a bit more difficult to run a disinformation campaign.
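For the down-weighting idea, here is a hypothetical scoring sketch that discounts engagement in proportion to an account’s estimated automation likelihood. The function name, the inputs, and the weighting scheme are assumptions for illustration, not how any platform actually computes trends.

```python
from typing import Iterable

def weighted_trend_score(engagements: Iterable[tuple[str, float]]) -> float:
    """Sum engagement with a topic, discounting accounts that look automated.

    `engagements` is a sequence of (account_id, automation_likelihood) pairs,
    where automation_likelihood is a 0..1 estimate from some bot-detection
    signal (a hypothetical input a platform would supply itself).
    """
    score = 0.0
    for account_id, automation_likelihood in engagements:
        # A clearly human account counts as ~1.0; a near-certain bot counts for ~0.
        score += 1.0 - automation_likelihood
    return score

# 500 likely-bot retweets add far less than 50 organic ones.
organic = [("user", 0.05)] * 50
botnet = [("bot", 0.95)] * 500
print(weighted_trend_score(organic + botnet))  # ~72.5 instead of 550 raw engagements
```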

But we still have to decide what to do about the cyborgs.


I work in tech, and occasionally write about the intersection of tech + policy.