Facebook has announced a raft of measures to prevent the spread of false information on its platform.
Writing in a company blog post on Friday, product manager Tessa Lyons said that Facebook has been fighting fake news through a combination of technology and human review.
However, she also wrote that, given the determination of some people to abuse the social network’s algorithms for political and other gains, “This effort will never be finished and we have a lot more to do.”
Lyons went on to announce several updates and enhancements as part of Facebook’s battle to control the veracity of content on its platform. New measures include expanding its fact-checking programme to new countries, and developing systems to monitor the authenticity of photos and videos.
Both are significant challenges in the wake of the Cambridge Analytica fiasco. While fake news stories are widely acknowledged or alleged to exist on either side of the left/right political divide, concerns are also growing about the fast-emerging ability to fake videos.
Meanwhile, numerous reports surfaced last year documenting the problem of teenagers in Macedonia producing some of the most successful viral pro-Trump content during the US presidential election.
Other measures outlined by Lyons include increasing the impact of fact-checking, taking action against repeat fake-news offenders, and extending partnerships with academic institutions to improve fact-checking results.
Machine learning to improve fact-checking
Facebook already applies machine learning algorithms to detect sensitive content. Though fallible, this software goes a long way toward ensuring that photos and videos containing violence and sexual content are flagged and removed as swiftly as possible.
Now, the company is set to use similar technologies to identify false news and take action on a bigger scale.
In part, that’s because Facebook has become a victim of its own success. With close to two billion registered users, one billion regularly active ones, and over one billion pieces of content posted every day, it would be impossible for human fact-checkers to review stories individually without Facebook employing vast teams of people to monitor citizen behaviour.
Lyons explained how machine learning is being used, not only to detect false stories, but also to find duplicates of stories that have already been classed as false. “Machine learning helps us identify duplicates of debunked stories,” she wrote.
“For example, a fact-checker in France debunked the claim that you can save a person having a stroke by using a needle to prick their finger and draw blood. This allowed us to identify over 20 domains and over 1,400 links spreading that same claim.”
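Facebook has not published how it matches new posts against already-debunked claims, but the idea of finding near-duplicates of a known false story can be sketched with something as simple as token-set similarity. The threshold and the matching method below are illustrative assumptions, not Facebook's actual system, which would use trained models at far greater scale:

```python
import re

def tokens(text):
    """Lowercase a claim and split it into a set of word tokens."""
    return set(re.findall(r"[a-z']+", text.lower()))

def similarity(a, b):
    """Jaccard similarity between two claims' token sets (0.0 to 1.0)."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def find_duplicates(candidate, debunked_claims, threshold=0.4):
    """Return any debunked claims the candidate post appears to repeat."""
    return [c for c in debunked_claims if similarity(candidate, c) >= threshold]

debunked = [
    "You can save a person having a stroke by pricking their finger with a needle",
]
post = "Save stroke victims by pricking a finger with a needle to draw blood"
print(find_duplicates(post, debunked))
```

A production system would cluster reworded copies across thousands of domains rather than compare strings pairwise, but the principle is the same: once one fact-checker debunks a claim, every sufficiently similar restatement can be flagged automatically.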
The big-picture challenge, of course, is that real science is constantly advancing alongside all the pseudoscientific theories. New or competing theories constantly emerge, while others are still being tested.
Facebook is also working on technology that can sift through the metadata of published images to check background information against the context in which they are used. This is because while fake news is a widely known problem, the cynical deployment of genuine content, such as photos, in fake or misleading contexts can be a more insidious problem.
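The shape of such a check is easy to illustrate. In the hypothetical sketch below, `metadata` stands in for fields a real system would extract from an image's EXIF data (capture date, geotag), and the field names and rules are assumptions for illustration only:

```python
from datetime import date

def context_flags(metadata, claimed_context):
    """Compare an image's embedded metadata against the context a post
    claims for it, returning human-readable mismatch flags.
    Both dicts use illustrative field names, not a real API."""
    flags = []
    shot = metadata.get("capture_date")
    claimed = claimed_context.get("event_date")
    if shot and claimed and shot < claimed:
        # A genuine photo recycled for a later event is a classic misuse.
        flags.append(f"photo taken {(claimed - shot).days} days before the claimed event")
    if metadata.get("gps_country") and claimed_context.get("country"):
        if metadata["gps_country"] != claimed_context["country"]:
            flags.append("photo geotag does not match the claimed location")
    return flags

meta = {"capture_date": date(2015, 3, 1), "gps_country": "BR"}
claim = {"event_date": date(2018, 6, 20), "country": "FR"}
print(context_flags(meta, claim))
```

In practice metadata is often stripped or forged, so such checks would be one signal among many rather than a verdict on their own.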
Machine learning is also being deployed to recognise where false claims may be emanating from. Facebook filters are now actively attempting to predict which pages are more likely to share false content, based on the profile of page administrators, the behaviour of the page, and its geographical location.
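Facebook's actual predictor is a trained model; purely to make the three signal families Lyons describes concrete, a toy heuristic might combine them like this, with every weight and field name an assumption of this sketch:

```python
def page_risk_score(page):
    """Toy heuristic combining admin profile, page behaviour, and
    geography signals. Weights are illustrative assumptions,
    not Facebook's model, which is learned from data."""
    score = 0.0
    # Admins located far from the audience they post to is a red flag.
    if page["admin_country"] != page["audience_country"]:
        score += 0.4
    # Newly created admin accounts running high-volume pages look suspicious.
    if page["admin_account_age_days"] < 90:
        score += 0.3
    # A history of sharing links that fact-checkers have already debunked.
    score += min(page["debunked_shares"], 10) * 0.03
    return round(score, 2)

suspect = {"admin_country": "MK", "audience_country": "US",
           "admin_account_age_days": 30, "debunked_shares": 8}
print(page_risk_score(suspect))
```

The point of learning these weights from data rather than hand-tuning them is precisely that abusers adapt: a fixed rulebook would be gamed within weeks.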
Internet of Business says
Facebook’s moves are welcome and, many would argue, long overdue. However, in a world of conspiracy theories – many spun on social media – it’s inevitable that some will claim that the evidenced, fact-checked flagging-up of false content is itself indicative of bias or media manipulation. Claims that Facebook is suppressing freedom of speech inevitably follow.
In a sense, Facebook is engaged in an age-old battle, belief versus evidence, which is now spreading into more and more areas of our lives. Experts are now routinely vilified by populist politicians, even as we still trust experts to keep planes in the sky, feed us, teach us, clothe us, treat our illnesses, and power our homes.
Many false stories are posted on social platforms to generate clicks and advertising revenues through controversy – which is hardly a revelation. However, red flags can automatically be raised when, for example, page admins live in one country but post content to users on the other side of the world.
“These admins often have suspicious accounts that are not fake, but are identified in our system as having suspicious activity,” Lyons told Buzzfeed.
An excellent point. But some powerful media magnates also live on the other side of the world, including – for anyone outside of the US – Mark Zuckerberg. For some reason, however, no one seems to raise a warning flag when offshore newspapermen, such as Rupert Murdoch, seek to influence political debate on the other side of the Atlantic.