Removing Bad Actors from Facebook 

By Nathaniel Gleicher, Head of Cybersecurity Policy

Facebook has given millions of people around the world a voice, and it is important that the conversations taking place on our service every day are authentic.

As part of our broader efforts to prevent abuse and ensure Facebook is a safe place for everyone, we recently took down more than 10,000 fake Pages, Groups, and accounts in Mexico and across Latin America because they violated our Community Standards. The content we found broke our policies on coordinated harm and inauthentic behavior, as well as our policies against attacks based on race, gender, or sexual orientation.

There is no place on Facebook for this kind of behavior, and we're investing heavily in both people and technology to keep bad content off our services. For example, in the first quarter of 2018 we took down 837 million pieces of spam, removed 2.5 million pieces of hate speech, and disabled 583 million fake accounts globally, much of it before anyone reported the issue to Facebook. By using technology like machine learning, artificial intelligence, and computer vision, we can proactively detect more bad actors and take action more quickly.
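
To give a rough sense of what "proactive detection" means in practice, here is a minimal sketch of a text classifier that scores a post for spam before any user reports it. This is an illustrative toy, not Facebook's actual system: the training examples, model choice (TF-IDF features with logistic regression via scikit-learn), and review threshold are all assumptions made for demonstration.

```python
# Illustrative sketch only: a toy proactive spam classifier, not Facebook's
# production system. The training data, model, and threshold are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labeled corpus; a real system would train on vastly more data.
posts = [
    "Win a FREE iPhone, click this link now!!!",
    "Great seeing everyone at the reunion last weekend.",
    "Send money now to claim your cash prize immediately",
    "Happy birthday! Hope you have a wonderful day.",
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = benign

# TF-IDF features feeding a linear classifier: enough to score new posts
# before anyone reports them, which is the essence of proactive detection.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

new_post = "Click this link to claim your free prize"
spam_score = model.predict_proba([new_post])[0][1]
print(f"spam score: {spam_score:.2f}")  # flag for human review above a threshold
```

In practice a pipeline like this would be retrained continuously and paired with human review, but the flag-before-anyone-reports flow is the same basic idea.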

This kind of action is especially important in the run-up to elections, when people use Facebook to connect with candidates, debate hard issues, and get information about the decisions they face. That's why we've been working so hard to prevent election interference: cracking down on fake accounts that spread misinformation, working with third-party fact-checkers to stop the spread of false news, and making advertising much more transparent.