Since December of last year, following the slew of fake news stories surrounding the presidential election, Facebook has been promising its users that it will filter out false content. In anticipation, the company released a concept video demonstrating how it intended to undertake this seemingly impossible task:

Now it looks as though Facebook has finally made good on its promise. A few days ago, it released a tool that flags fake news in the News Feed. Anna Merlan, an investigative reporter, was one of the first people to see the new update in action, tweeting:

The feature relies on two stages of input to eliminate posts deemed 'fake news'. First, users can report a post by clicking the upper right-hand corner of the post and selecting "It's a fake news story". Reported posts are then analysed by a team of third-party fact-checkers: ABC News, PolitiFact, FactCheck.org, and Snopes. These four organizations flag suspicious stories, inserting a 'disputed' bar below them.
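The two-stage flow described above can be sketched in a few lines of code. This is a minimal illustration only: the data model, function names, and the rule that a single fact-checker's dispute triggers the bar are all assumptions for the sketch, not details of Facebook's actual system.

```python
# Hypothetical sketch of the report -> review -> flag flow; all names
# and rules here are assumptions, not Facebook's real implementation.
from dataclasses import dataclass, field

# The four third-party fact-checking organizations named in the article.
FACT_CHECKERS = ["ABC News", "PolitiFact", "FactCheck.org", "Snopes"]

@dataclass
class Post:
    text: str
    user_reports: int = 0                     # "It's a fake news story" clicks
    disputed_by: list = field(default_factory=list)

def report_fake_news(post: Post) -> None:
    """Stage 1: a user reports the post from its drop-down menu."""
    post.user_reports += 1

def review(post: Post, checker: str, is_false: bool) -> None:
    """Stage 2: a third-party fact-checker reviews a reported post."""
    if checker in FACT_CHECKERS and is_false:
        post.disputed_by.append(checker)

def is_disputed(post: Post) -> bool:
    """Assumed rule: the 'disputed' bar appears once any checker flags it."""
    return len(post.disputed_by) > 0

# Usage: a story is reported by a user, reviewed, and ends up flagged.
story = Post("Sensational headline")
report_fake_news(story)
review(story, "Snopes", is_false=True)
print(is_disputed(story))  # True
```

The sketch separates the crowd signal (user reports) from the editorial signal (fact-checker review), which mirrors the article's point that neither step alone produces the 'disputed' label.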

However, although having false news eliminated from our lives is an appealing concept, we have to ask whether this system is practical or functional. Furthermore, how do we know that we can trust these organizations to perform their duties in an unbiased manner?

Technically, all fact-checkers are required to sign a "Code of Principles" in which they promise to approach all material in an unbiased manner. But considering Facebook's past experimentation with information, whether to influence mass emotion or the course of elections, many people are suspicious.

As Sophie Kleeman of Gizmodo concludes in her article about Facebook’s new update:

Of course, Facebook is an incredibly powerful company that carries out its operations in opaque and mysterious ways, so it’s difficult to tell whether or not these efforts will actually do much of anything. At the end of the day, it’s still asking us to trust it to do the right thing, which is a very risky prospect indeed.