Hard Questions: More on Russian Ads

By Elliot Schrage, Vice President of Policy and Communications

1) Why did Facebook finally decide to share the ads with Congress?

As our General Counsel has explained, this is an extraordinary investigation — one that raises questions that go to the integrity of the US elections. After an extensive legal and policy review, we’ve concluded that sharing the ads we’ve discovered with Congress, in a manner that is consistent with our obligations to protect user information, will help government authorities complete the vitally important work of assessing what happened in the 2016 election. That assessment can be made only by investigators with access to classified intelligence and information from all relevant companies and industries — and we want to do our part.

2) Why are you sharing these ads with the Special Counsel and Congress — and not releasing them to the public?

Federal law places strict limitations on the disclosure of account information. Given the sensitive national security and privacy issues involved in this extraordinary investigation, we think Congress is best placed to use the information we and others provide to inform the public comprehensively and completely. For more on this decision, see our General Counsel’s post.

3) Let’s go back to the beginning. Did Facebook know, when the ads were purchased, that they might be part of a Russian operation? If not, why not?

No, we didn’t.

The vast majority of our more than 5 million advertisers use our self-service tools. These tools allow individuals or businesses to create a Facebook Page, attach a credit card or other payment method, and run ads promoting their posts.

In some situations, Facebook employees work directly with our larger advertisers. In the case of the Russian ads, none of the ones we found involved such in-person relationships.

At the same time, many advertisers run ads internationally, and many run content that addresses social issues — for example, an ad from a non-governmental organization promoting women’s rights. So there was nothing necessarily noteworthy at the time about a foreign actor running an ad involving a social issue. Of course, knowing what we’ve learned since the election, some of these ads were indeed both noteworthy and problematic, which is why our CEO today announced a number of important steps we are taking to help prevent this kind of deceptive interference in the future.

4) Do you expect to find more ads from Russian or other foreign actors using fake accounts?

It’s possible.

When we’re looking for this type of abuse, we cast a wide net, trying to identify any activity that looks suspicious. But it’s a game of cat and mouse. Bad actors are constantly developing more sophisticated methods to obfuscate their origins and cover their tracks. That in turn leads us to devise new methods and smarter tactics to catch them — things like machine learning, data science, and highly trained human investigators. And, of course, our internal inquiry continues.

It’s possible that government investigators have information that could help us, and we welcome any information the authorities are willing to share to help with our own investigations.

Using ads and other messaging to affect political discourse has become a common part of the cybersecurity arsenal of organized, sophisticated actors. This means all online platforms will need to address this issue, and get smarter about how to do so, now and in the future.

5) I’ve heard that Facebook disabled tens of thousands of accounts in France and only hundreds in the United States. Is this accurate?

No, these numbers represent different things and can’t be directly compared.

To explain the difference, it’s important to understand how large platforms try to stop abusive behavior at scale. Staying ahead of those who try to misuse our service is an ongoing effort led by our security and integrity teams, and we recognize this work will never be done. We build and update technical systems every day to make it easier to respond to reports of abuse, detect and remove spam, identify and eliminate fake accounts, and prevent accounts from being compromised. This work also reduces the distribution of content that violates our policies, since fake accounts often distribute deceptive material, such as false news, hoaxes, and misinformation.

This past April, we announced improvements to these systems aimed at helping us detect fake accounts on our service more effectively. As we began to roll out these changes globally, we took action against tens of thousands of fake accounts in France. That number represents fake accounts of all varieties, the most common being those used for financially motivated spam. While we believe that removing these accounts also reduced the spread of disinformation, it’s incorrect to state that these tens of thousands of accounts represent organized campaigns from any particular country or set of countries.

In contrast, the approximately 470 accounts and Pages we shut down recently were identified by our dedicated security team, which manually investigates specific, organized threats. The team found that these accounts and Pages were affiliated with one another — and were likely operated out of Russia.

Read more about our new blog series Hard Questions. We want your input on what other topics we should address — and what we could be doing better. Please send us your suggestions.