Don’t Expect a Technological Solution to Fake News

A technological solution to fake news would only make our collective inability to critically appraise news worse.

Maybe I’ll be accused of spouting fake news myself for saying this, but there’s nothing Facebook, Google or any other tech company can do to stamp out fake news. They simply can’t do it, and this “can’t” works on several levels: it’s not technically feasible, it’s not practically feasible, and it’s politically and morally dubious. Yet ever since the election, the likes of Facebook have been called upon from numerous quarters to solve their “fake news problem.” More incredibly still, they’ve now begun listening to their critics, with Google and Twitter also vowing to tackle the problem that allegedly cost Hillary Clinton the election.

However, despite their well-meaning assurances and their earnest attempts to purify their respective brands, their fight against fake news can only ever be tokenistic. At best, they and their employees could block or tag the most egregious examples of fake news; when it comes to clamping down on false articles and misinformation in any substantial or systematic way, they’re bound to fail.

From a purely technical perspective, this is because there is no algorithm that could reliably distinguish fake news from true news. This may be something of a rash judgement, insofar as there’s currently no dedicated fake-news algorithm being used by Facebook et al., yet there are parallels and precedents which suggest that any such algorithm would be a crude instrument.

For one, there’s Facebook’s own clickbait-detection algorithm, which classifies as ‘clickbait’ any article with a headline that uses certain phrases and that, in classic clickbait style, withholds information. As effective as this might be in the case of clickbait, any fake-news algorithm based on such principles would be markedly ineffective, since fake news doesn’t necessarily have to use a particular style or particular vocabulary. Unlike clickbait, fake news isn’t an identifiable way of presenting news. Rather, it’s simply any news that is false, that asserts false things of its subjects. Most importantly, such falseness is a property attaching not to the form of an article’s text, but to the correspondence of this text to the world. As such, establishing whether or not an item is an example of fake news would be way beyond the scope of a clickbait-detection algorithm.
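To make the limitation concrete, here’s a minimal sketch of what a phrase-based detector of this kind might look like, assuming detection rests on stock headline phrases plus withheld referents; the phrase lists are invented for illustration, not Facebook’s actual data.

```python
CLICKBAIT_PHRASES = [
    "you won't believe",
    "what happened next",
    "this one trick",
    "will shock you",
]

# Words that promise a referent without naming it, a rough proxy
# for the information-withholding style of classic clickbait.
WITHHOLDING_CUES = ["this", "these", "here's why", "the reason"]

def looks_like_clickbait(headline: str) -> bool:
    h = headline.lower()
    uses_stock_phrase = any(p in h for p in CLICKBAIT_PHRASES)
    withholds = any(c in h for c in WITHHOLDING_CUES)
    return uses_stock_phrase and withholds

print(looks_like_clickbait("You won't believe what this senator did next"))  # True
print(looks_like_clickbait("Senate passes budget resolution, 51-49"))        # False
```

Nothing in this function ever consults the world beyond the headline itself, which is precisely why the same approach can’t separate false stories from true ones.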

However, there seems to be at least one prototype of a fake-news algorithm that does hold out some hope for Facebook, Google and Twitter. Recently, a group of university students designed an algorithm called FiB that tags links in Facebook news feeds as either “verified” or “not verified.” It manages this by “cross-checking” the content of an article with that of other articles, and by determining the “credibility” of the article’s source. In other words, it searches the web to find matches for the statements made by any given article, checking to see whether other content asserts the same things of a particular subject (e.g. Hillary Clinton or the Pope) as the piece it’s analysing.
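FiB’s exact internals aren’t public, but the description above suggests something like the following sketch. Here I’m assuming, purely for illustration, that “cross-checking” means tallying other articles that assert the same claim, and that “credibility” is a static per-source score; the search stub and the scores are hypothetical stand-ins, not FiB’s actual code.

```python
from typing import List, NamedTuple

class Hit(NamedTuple):
    domain: str
    asserts_claim: bool  # does this article make the same assertion?

# Hypothetical per-domain credibility scores in [0, 1];
# unknown domains default to 0.1 in verify() below.
CREDIBILITY = {"reuters.com": 0.9, "cnn.com": 0.8, "example-blog.net": 0.2}

def search_web(claim: str) -> List[Hit]:
    """Stand-in for a real search backend that finds articles mentioning the claim."""
    raise NotImplementedError("wire up an actual search API here")

def verify(claim: str, hits: List[Hit], threshold: float = 1.0) -> str:
    # Sum the credibility of every source repeating the assertion; the claim
    # counts as corroborated once the total clears the threshold.
    score = sum(CREDIBILITY.get(h.domain, 0.1) for h in hits if h.asserts_claim)
    return "verified" if score >= threshold else "not verified"
```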

Assuming that FiB really does what it says it does (as of writing, it’s unavailable for download), it certainly seems promising, not least because it won the Google-sponsored “Best Moonshot” award at November’s HackPrinceton. However, despite the massive interest Google and Facebook purportedly have in tackling fake news, neither firm has made any attempt to acquire it. And even if it does successfully cross-check content, it will face the fundamental problem that multiple sources can be wrong about the same thing.

Indeed, one of the defining characteristics of viral news is that it spreads quickly, with website after website passing on sensationalist untrue news for the sake of clicks or a big scoop. This is what happened, for instance, with the mistaken claim that paid anti-Trump protestors were being bussed to a demonstration in Austin on November 10, with numerous news sources of questionable repute soon running essentially the same article. It’s also what happened in countless other cases, with CNN even running a story in 2012 on how supposedly trustworthy outlets, itself included, can get caught up in ‘reporting’ on, among other things, under-tipping bankers.

It’s because misinformation can spread like this that any cross-checking algorithm will inevitably verify what are little more than popular or common lies. It may weed out one or two fake news pieces on solitary crank websites, yet it will also miss many that are seemingly corroborated in more than one place. Because of this, it could even end up being counterproductive, encouraging people to trust what isn’t trustworthy.
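Feeding an echo chamber into the sketch above makes the failure concrete: a dozen content farms repeating the same fabrication accumulate as much “credibility” as a single wire-service report would.

```python
# A dozen copies of one false story, each from an unknown (0.1-credibility)
# domain, clear the same threshold as one high-credibility source.
echo_chamber = [Hit(f"content-farm-{i}.com", True) for i in range(12)]
print(verify("paid protesters bussed to Austin", echo_chamber))  # "verified"
```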

So much for technological solutions to the fake news problem, then; more practical, human-directed measures would be highly problematic as well. This is partly because they’d be impracticably labor-intensive, which is something a company like Facebook, which recently dismissed its entire trending team, wants to avoid. More worryingly, they’re also vulnerable to political bias, meaning that any group of people flagging items as ‘fake news’ may be accused of merely cracking down on views they dislike. Something much like this happened in May, when Gizmodo reported on how ex-Facebook employees had apparently “suppressed news stories of interest to conservative readers.”

Even if there’s never any deliberate conspiracy to curb particular views, it’s almost inevitable that any team would subject articles they don’t personally sympathise with to heavier scrutiny, with the longer-term result being an unfortunate skew in the overall distribution of articles. It’s for this reason that a top-down practical approach to fake news would be unfair and imperfect, just as a top-down technical approach would be blind to mutual falsities.

Instead, what’s ultimately needed is greater literacy and awareness with regard to fake news, so that more of the public can improve their ability to distinguish unreliable from reliable news sources for themselves. Not only would this teach people to exercise more caution in a world of ever-expanding information, but it would avoid an unfortunate situation in which we become over-dependent on technology and tech firms to put us in touch with (what isn’t necessarily) the truth. In fact, if we were to become too dependent on tech for the truth and thereby too uncritical, any fake-news algorithm would ironically have exacerbated the very source of the problem it was designed to solve.
