How People Help Fight False News

To fight the spread of false news, Facebook uses a mix of technological solutions, human reviewers, and educational programs that promote news literacy. The Facebook community plays an important role, too: people can give feedback that a post on Facebook contains false news by clicking the three dots at the top of the post. But how does Facebook use that information?

To rank posts in your News Feed, Facebook looks at thousands of different data points, or signals, about the posts that have been shared by your community, including the people you’re friends with and the Pages you follow. Signals include a wide range of things, from how old a given post is and who posted it to little things, like how fast your internet connection is right now or what kind of phone you’re using.

One application of these signals is to help determine whether a post might be clickbait, false news, or other types of inauthentic content. Today, in the US, the signals for false news include things like whether the post is being shared by a Page that’s spread a lot of false news before, whether the comments on the post include phrases that indicate readers don’t believe the content is true, or whether someone in the community has marked the post as false news.
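To make that concrete, here is a rough sketch in Python of how a few such signals might be bundled into a per-post record. The field names, the disbelief phrase list, and the input format are illustrative assumptions, not Facebook's published feature set.

```python
from dataclasses import dataclass

# Illustrative phrases that might suggest commenters don't believe a post.
DISBELIEF_PHRASES = ("fake", "hoax", "not true", "debunked")

@dataclass
class PostSignals:
    """A hypothetical bundle of per-post misinformation signals."""
    page_false_news_history: float   # share of the Page's past posts rated false
    disbelief_comment_ratio: float   # fraction of comments containing disbelief phrases
    user_false_news_reports: int     # how many people marked the post as false news

def collect_signals(post: dict) -> PostSignals:
    """Turn raw post data (a plain dict here) into a signal record."""
    comments = post.get("comments", [])
    disbelieving = [c for c in comments
                    if any(p in c.lower() for p in DISBELIEF_PHRASES)]
    return PostSignals(
        page_false_news_history=post.get("page_false_rating_rate", 0.0),
        disbelief_comment_ratio=len(disbelieving) / max(len(comments), 1),
        user_false_news_reports=post.get("false_news_reports", 0),
    )

# Example with made-up data.
example = {
    "comments": ["this is a hoax", "wow, scary!", "source? looks fake"],
    "page_false_rating_rate": 0.4,
    "false_news_reports": 12,
}
print(collect_signals(example))
```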

Facebook uses a machine learning classifier to compile all of those misinformation signals and — by comparing a given post to past examples of false news — make a prediction: “How likely is it that a third-party fact-checker would say this post is false?” (Facebook uses classifiers for a lot of different things, like predicting whether a post is clickbait or contains nudity; you can read more in this roundtable interview from Wired). The classifier’s predictions are then used to determine whether a given piece of content should be sent to third-party fact-checkers. If a fact-checker rates the content as false, it will be shown lower in people’s News Feeds, and additional reporting from fact-checkers will appear alongside it.
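As an illustration of that flow, the sketch below trains a toy classifier on labeled past examples and uses a probability threshold to decide whether a post should be queued for third-party fact-checking. The logistic regression model, the feature ordering, the tiny training set, and the 0.7 threshold are all assumptions made for the example, not details Facebook has disclosed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one post's signal vector:
# [page_false_news_history, disbelief_comment_ratio, user_false_news_reports]
# Labels: 1 = a fact-checker rated the post false, 0 = it was not rated false.
X_train = np.array([
    [0.8, 0.6, 30.0],
    [0.1, 0.0,  0.0],
    [0.5, 0.3, 12.0],
    [0.0, 0.1,  1.0],
])
y_train = np.array([1, 0, 1, 0])

# Learn from past examples of false news (toy data stands in for real history).
clf = LogisticRegression().fit(X_train, y_train)

REVIEW_THRESHOLD = 0.7  # hypothetical cut-off for routing a post to fact-checkers

def should_send_to_fact_checkers(signals: list[float]) -> bool:
    """Estimate how likely a fact-checker would rate this post false,
    and route it for review if the probability clears the threshold."""
    p_false = clf.predict_proba([signals])[0, 1]
    return p_false >= REVIEW_THRESHOLD

print(should_send_to_fact_checkers([0.7, 0.5, 25.0]))  # likely True
print(should_send_to_fact_checkers([0.0, 0.0,  0.0]))  # likely False
```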

More feedback from more people helps make the classifier more accurate, but feedback in and of itself doesn’t trigger the fact-checking process. That’s because people may mean different things when they mark a post — they might disagree with the content, or dislike the Page or person posting it. “There are no clean signals about false news,” says Antonia Woodford, a product manager on the News Feed integrity team. “If there were, it would be easier to get rid of.” That’s why the misinformation classifier takes a range of things into account — though user feedback is one of the most important.

Facebook is continually working to improve its classifiers. Classifiers learn to make their predictions by looking at many examples of the thing they’re trying to identify, so the more data collected, the better the classifier gets and the more precisely it can sift through signals to find meaningful patterns. Facebook also has to serve people in many different cultural and linguistic contexts, so classifiers have to be trained to be sensitive to regional and linguistic differences, as well as to cultural norms. This means that the set of misinformation signals Facebook relies on in any given country, and the weight assigned to each, is not static.
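One way to picture that, under the assumption that a separate model is fitted per country or locale, is the sketch below: the same classifier setup trained on different regional data ends up with different learned signal weights. The locale codes, the training data, and the per-locale arrangement are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_per_locale(examples_by_locale):
    """Fit one classifier per locale so signal weights reflect local data.
    examples_by_locale maps a locale code to an (X, y) training pair."""
    return {
        locale: LogisticRegression().fit(X, y)
        for locale, (X, y) in examples_by_locale.items()
    }

# Made-up training sets where the signals behave differently by region:
# columns are [page_false_news_history, disbelief_comment_ratio].
data = {
    "en_US": (np.array([[0.8, 0.2], [0.1, 0.0], [0.7, 0.3], [0.0, 0.1]]),
              np.array([1, 0, 1, 0])),
    "fr_FR": (np.array([[0.2, 0.9], [0.1, 0.1], [0.3, 0.8], [0.0, 0.2]]),
              np.array([1, 0, 1, 0])),
}

models = train_per_locale(data)
for locale, model in models.items():
    # The learned coefficients (the weight given to each signal) differ by locale.
    print(locale, model.coef_)
```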

Want to help reduce the spread of false news in an even more immediate way? Learn how to recognize it, and then refrain from sharing it. Here are some tips for spotting false news:

1. Be skeptical of headlines. False news stories often have catchy headlines in all caps with exclamation points. If shocking claims in the headline sound unbelievable, they probably are.
2. Look closely at the link. A phony or look-alike URL may be a warning sign of false news. Many false news sites mimic authentic news sources by making small changes to the URL. You can go to the site to compare the URL to established sources.
3. Investigate the source. Ensure that the story is written by a source that you trust with a reputation for accuracy. If the story comes from an unfamiliar organization, check their “About” section to learn more.
4. Watch for unusual formatting. Many false news sites have misspellings or awkward layouts. Read carefully if you see these signs.
5. Consider the photos. False news stories often contain manipulated images or videos. Sometimes the photo may be authentic, but taken out of context. You can search for the photo or image to verify where it came from.
6. Inspect the dates. False news stories may contain timelines that make no sense, or event dates that have been altered.
7. Check the evidence. Check the author’s sources to confirm that they are accurate. Lack of evidence or reliance on unnamed experts may indicate a false news story.
8. Look at other reports. If no other news source is reporting the same story, it may indicate that the story is false. If the story is reported by multiple sources you trust, it’s more likely to be true.
9. Is the story a joke? Sometimes false news stories can be hard to distinguish from humor or satire. Check whether the source is known for parody, and whether the story’s details and tone suggest it may be just for fun.
10. Some stories are intentionally false. Think critically about the stories you read, and only share news that you know to be credible.


See also:
Facing Facts, a behind-the-scenes look at Facebook’s fight against false news
The Three-Part Recipe for Cleaning up Your News Feed