Citizen journalism could become collateral damage in the fight against fake news

Samantha Burton · Published in MisinfoCon · Jan 3, 2018


A woman films an Occupy Wall Street demonstration on her flip phone. Photo by Nick Gulotta

Many proposals for how to tackle misinformation online involve some form of automation that would help verify an article’s credibility. But this approach carries a potentially serious unintended consequence: efforts to stymie the flow of ‘fake news’ may also stifle the democratization of communication and the citizen journalism that has enriched our understanding of the world.

YouTube gave us a dismal preview of what this might look like last year, when it inadvertently removed thousands of user-generated videos documenting airstrikes in Syria. The platform’s machine learning algorithm was designed to automatically flag and remove content that potentially violated its guidelines, but it wasn’t nuanced enough to distinguish between extremist propaganda and citizen journalism. Advocates say the loss may jeopardize future war crimes prosecutions.

While YouTube’s algorithm wasn’t hunting for misinformation, it’s not hard to imagine citizen journalism suffering similar unintended consequences on a grand scale, as platforms increasingly turn to automation to help weed out ‘fake news’ online. Facebook uses machine learning algorithms to identify suspicious articles for human fact-checkers to examine. Google tweaked its search algorithm to bury ‘fake news’ deep in the results. Twitter is developing tools to better detect “spammy behaviors” that may indicate attempts to spread misinformation.

Built right into these tools are assumptions about what makes a piece of news trustworthy. One of these critical assumptions lies in how algorithms are trained to answer this question: Is the source of this information credible?

It may be tempting to define a ‘credible’ source as ‘professional’ or ‘official.’ Although trust in news media varies widely around the world and has decreased in recent years, some research suggests that misinformation is driving people back to recognizable news media outlets and journalists. It’s also easier to train an algorithm to recognize professional media and journalists. Evaluating whether news posted by a private citizen is credible needs to be done on a case-by-case basis, making it much more complex and time-consuming.

However, just because information appears to be coming from a professional media outlet doesn’t mean it’s credible: the impostor CNN and BBC content that circulated during Kenya’s election last year is testament to that. Official government sources worldwide are also known to manipulate the messages they share with the public. Training an algorithm to equate ‘credible’ with ‘professional’ or ‘official’ isn’t a foolproof way to stop misinformation.
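To make that failure mode concrete, here is a deliberately naive sketch of a filter that treats a professional outlet’s branding as a proxy for credibility. Everything in it (the brand list, post fields, and rule) is hypothetical and invented purely for illustration; it is not drawn from any platform’s actual system.

```python
# Toy illustration only: a hypothetical check that equates "credible" with
# "carries a recognized outlet's branding." All names and fields are invented.

RECOGNIZED_BRANDS = {"CNN", "BBC", "Reuters"}

def looks_credible(post: dict) -> bool:
    """Naive rule: trust a post only if it claims a recognized outlet's branding."""
    return post["claimed_brand"] in RECOGNIZED_BRANDS

posts = [
    # Genuine citizen footage, posted from a personal account with no branding.
    {"claimed_brand": None, "description": "bystander video of airstrike aftermath"},
    # Impostor content styled to look like an outlet's broadcast graphics.
    {"claimed_brand": "CNN", "description": "fabricated election results clip"},
]

for post in posts:
    verdict = "keep" if looks_credible(post) else "flag for removal"
    print(f"{post['description']}: {verdict}")

# Output:
#   bystander video of airstrike aftermath: flag for removal
#   fabricated election results clip: keep
# The rule fails in both directions: real citizen journalism is discarded,
# while impostor content that mimics a professional brand sails through.
```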

If an algorithm were designed to prioritize professional media sources, we might never have known about Dr. David Dao being violently dragged off a United Airlines flight. We might not have video evidence of the attack at the “Unite the Right” rally in Charlottesville that left a young woman dead and 19 others injured. We might not have seen Hurricane Harvey leave a group of elderly women waist-deep in water in their nursing home; they were later rescued thanks to that photo going viral.

Each of these stories was sparked by an average person with a smartphone. But, like the Syrian citizens who filmed the airstrikes they experienced, these important stories and others risk getting caught and discarded if the misinformation net we cast is too wide.

This doesn’t mean we should design systems that automatically trust all user-generated content. That wouldn’t get us very far in the fight against misinformation. But there are three things in particular we can do to help ensure that opportunities for quality citizen journalism, and other benefits of the open web, are preserved as we seek to curb fake news online.

Anticipate unintended consequences — and design tools to help mitigate them.

The idea that design can have unintended consequences is not revolutionary, but the technology sector would do well to keep it front of mind. If YouTube had anticipated that its algorithm might unintentionally remove citizen coverage of airstrikes in Syria, it might have been able to tweak the design to reduce that risk before thousands of videos were lost.

Look to diverse people to lead design and implementation.

A Syrian engineer working on YouTube’s algorithm might have been better positioned to flag that the approach could put citizens’ videos at risk. The more diverse the experiences and perspectives of our teams, the better equipped we are to fight misinformation and safeguard equity.

Resist the urge to rely too heavily on automation.

Technology is a vital tool in our efforts to combat misinformation, given that computers can process vast amounts of information far faster than humans. But algorithms alone won’t rid the web of fake news. Media literacy and the ability to deal with complexity and nuance are also vital, both for the fact-checkers who work to verify news and for the audiences who read the stories.

Citizen journalism could be collateral damage in the fight against misinformation, but it doesn’t have to be.
