Hard Questions: Russian Ads Delivered to Congress

By Elliot Schrage, Vice President of Policy and Communications

What was in the ads you shared with Congress? How many people saw them?
Most of the ads appear to focus on divisive social and political messages across the ideological spectrum, touching on topics from LGBT matters to race issues to immigration to gun rights. A number of them appear to encourage people to follow Pages on these issues.

Here are a few other facts about the ads:

- An estimated 10 million people in the US saw the ads. We were able to approximate the number of unique people (“reach”) who saw at least one of these ads using our best modeling.
- 44% of total ad impressions (the number of times ads were displayed) came before the US election on November 8, 2016; 56% came after the election.
- Roughly 25% of the ads were never shown to anyone. That’s because advertising auctions are designed so that ads reach people based on relevance, and certain ads may not reach anyone as a result.
- For 50% of the ads, less than $3 was spent; for 99% of the ads, less than $1,000 was spent.

Why do you allow ads like these to target certain demographic or interest groups?
Our ad targeting is designed to show people ads they might find useful, instead of showing everyone ads that they might find irrelevant or annoying. For instance, a baseball clothing line can use our targeting categories to reach people interested specifically in baseball, rather than everyone who likes sports. Other examples include a business selling makeup designed specifically for African-American women, or a language class that wants to reach potential students.

These are worthwhile uses of ad targeting because they enable people to connect with the things they care about. But we know ad targeting can be abused, and we aim to prevent abusive ads from running on our platform. To begin, ads containing certain types of targeting will now require additional human review and approval.

In looking for such abuses, we examine all of the components of an ad: who created it, who it’s intended for, and what its message is. Sometimes a combination of an ad’s message and its targeting can be pernicious. If we find any ad — including those targeting a cultural affinity interest group — that contains a message spreading hate or violence, it will be rejected or removed. Facebook’s Community Standards strictly prohibit attacking people based on their protected characteristics, and our advertising terms are even more restrictive, prohibiting advertisers from discriminating against people based on religion and other attributes.

Why can’t you catch every ad that breaks your rules?
We review millions of ads each week, and about 8 million people report ads to us each day. In the last year alone, we have significantly grown the number of people working on ad review. And in order to do better at catching abuse on our platform, we’re announcing a number of improvements, including:

- Making advertising more transparent
- Strengthening enforcement against improper ads
- Tightening restrictions on advertiser content
- Increasing requirements for authenticity
- Establishing industry standards and best practices

Weren’t some of these ads paid for in Russian currency? Why didn’t your ad review system notice this and bring the ads to your attention?
Some of the ads were paid for in Russian currency. Currency alone isn’t a good way of identifying suspicious activity, because the overwhelming majority of advertisers who pay in Russian currency, like the overwhelming majority of people who access Facebook from Russia, aren’t doing anything wrong. We did use this as a signal to help identify these ads, but it wasn’t the only signal. We are continuing to refine our techniques for identifying the kinds of ads in question. We’re not going to disclose more details because we don’t want to give bad actors a roadmap for avoiding future detection.

If the ads had been purchased by Americans instead of Russians, would they have violated your policies?
We require authenticity regardless of location. If Americans conducted a coordinated, inauthentic operation — as the Russian organization did in this case — we would take their ads down, too.

However, many of these ads did not violate our content policies. That means that for most of them, if they had been run by authentic individuals, anywhere, they could have remained on the platform.

Shouldn’t you stop foreigners from meddling in US social issues?
The right to speak out on global issues that cross borders is an important principle. Organizations such as UNICEF, Oxfam or religious organizations depend on the ability to communicate — and advertise — their views in a wide range of countries. While we may not always agree with the positions of those who would speak on issues here, we believe in their right to do so — just as we believe in the right of Americans to express opinions on issues in other countries.

Some of these ads and other content on Facebook appear to sow division in America and other countries at a time of increasing social unrest. If these ads or this content had been placed or posted authentically, you would allow many of them. Why?
This is an issue we have debated a great deal. We understand that Facebook has become an important platform for social and political expression in the US and around the world. We are focused on developing greater safeguards against malicious interference in elections and strengthening our advertising policies and enforcement to prevent abuse.

As an increasingly important and widespread platform for political and social expression, we at Facebook — and all of us — must also take seriously the crucial place that free political speech occupies around the world in protecting democracy and the rights of those who are in the minority, who are oppressed or who have views that are not held by the majority or those in power. Even when we have taken all steps to control abuse, there will be political and social content that will appear on our platform that people will find objectionable, and that we will find objectionable. We permit these messages because we share the values of free speech — that when the right to speech is censored or restricted for any of us, it diminishes the rights to speech for all of us, and that when people have the right and opportunity to engage in free and full political expression, over time, they will move forward, not backwards, in promoting democracy and the rights of all.

Are you working with other companies and the government to prevent interference that exploits platforms like yours?
The threats we’re confronting are bigger than any one company, or even any one industry. The kind of malicious interference we’re seeing requires everyone working together, across business, government and civil society, to share information and arrive at the best responses.

We have been working with many others in the technology industry, including with Google and Twitter, on a range of elements related to this investigation. We also have a long history of working together to fight online threats and develop best practices on other issues, such as child safety and counterterrorism. And we will continue all of this work.

With all these new efforts you’re putting in place, would any of them have prevented these ads from running?
We believe we would have caught these malicious actors faster and prevented more improper ads from running. Our effort to require US election-related advertisers to authenticate their business will help catch suspicious behavior. The ad transparency tool we’re building will be accessible to anyone, including industry and political watchdog groups. And our improved enforcement and more restrictive content standards for ads would have rejected more of the ads when submitted.

Is there more out there that you haven’t found?
It’s possible. We’re still looking for abuse and bad actors on our platform — our internal investigation continues. We hope that by cooperating with Congress, the Special Counsel and our industry partners, we will help keep bad actors off our platform.

Do you now have a complete view of what happened in this election?
The 2016 US election was the first where evidence has been widely reported that foreign actors sought to exploit the internet to influence voter behavior. We understand more about how our service was abused and we will continue to investigate to learn all we can. We know that our experience is only a small piece of a much larger puzzle. Congress and the Special Counsel are best placed to put these pieces together because they have much broader investigative power to obtain information from other sources.

We strongly believe in free and fair elections. We strongly believe in free speech and robust public debate. We strongly believe free speech and free elections depend upon each other. We’re fast developing both standards and greater safeguards against malicious and illegal interference on our platform. We’re strengthening our advertising policies to minimize and even eliminate abuse. Why? Because we are mindful of the importance and special place political speech occupies in protecting both democracy and civil society. We are dedicated to being an open platform for all ideas — and that may sometimes mean allowing people to express views we — or others — find objectionable. This has been the longstanding challenge for all democracies: how to foster honest and authentic political speech while protecting civic discourse from manipulation and abuse. Now that the challenge has taken a new shape, it will be up to all of us to meet it.

Read more about our new blog series Hard Questions. We want your input on what other topics we should address — and what we could be doing better.