Below is the transcript from the July 24, 2018 press briefing related to Facebook’s work on elections.

Tom Reynolds: Thanks everyone for joining us today. As we mentioned earlier this year, our plan is to offer more regular briefings about the work we’re doing when it comes to elections around the world. This is work that includes helping people have a voice in the democratic process, learn more about candidates and issues, and hold elected officials accountable while at the same time fighting bad actors and combating misinformation.

Across the company, we are devoting significant time, energy, and resources to these issues. Today, we wanted to offer a few updates. First, we wanted to highlight some of the work we did as part of the recent Mexican elections held on July 1. Second, we’re going to give a broader update on this work that is happening around the world, with elections upcoming in Brazil, the US midterms and elsewhere. Third, we wanted to discuss our broader strategy for fighting false news and misinformation. Fourth, we wanted to offer an update on our information security work. And then lastly, offer some news on a few changes relating to our political ads archive.

I know that’s a lot, but we also know how important this work is, and we want to help you understand the full picture of how we are thinking about our elections work.

On the call today we have Diego Bassante, who’s from our LatAm Politics and Government Team; Samidh Chakrabarti, who oversees our Civic Engagement and elections work; Tessa Lyons, the Product Manager for the News Feed; Nathaniel Gleicher, who’s the head of our Cybersecurity Policy efforts; and Rob Leathern, the Product Manager on the Ads Team.

Just one last thing before we get started — a bit of housekeeping. I just want to remind everyone this call is on the record, and that it is embargoed until the call concludes.

With that, I’ll kick it over to Diego to get us started.

Diego Bassante: Thanks, Tom. I’ll start off by saying that this election was an important one for Facebook in Mexico, representing almost a year of planning and work on different fronts, from encouraging civic participation, to protecting our platform from abuse, and fighting the spread of misinformation.

To help promote more civic engagement, we launched an Issues tab in the candidates’ Pages so people can learn more about the politicians’ perspectives in the candidates’ own voices, on issues like corruption, unemployment, security or the economy. We also provided a voter information center for people who want to engage in the election on Facebook, and learn more about the candidates.

This product allows people to see who is running for office at different levels of government, visit the candidates’ Facebook Pages to learn more about them, and also compare the candidates’ perspectives on important issues. On election day millions of Mexicans saw a message in their News Feed reminding them to vote and linking to additional information on voting stations.

Millions also used Facebook to share with their friends that they voted. And for the first time in history, the presidential debates in Mexico were broadcast via Facebook Live — reaching audiences where they are. And for the third debate, we also helped INE source questions from Facebook for the presidential candidates.

Our work in Mexico also included efforts to improve integrity on the platform. To highlight a few examples, we partnered with the Mexican fact-checking group Verificado to debunk false news, helping reduce the reach of misinformation on our platform.

In advance of election day, we identified and took down tens of thousands of fake likes from the pages of Mexican candidates. We also took down fake Pages, groups, and accounts that violated our own Community Standards. And we proactively removed dozens of accounts that impersonated politicians running for office.

A key piece of our work stemmed from working closely with the National Electoral Institute (INE). Through this partnership we were able to provide training to both the INE and the Federal Electoral Court on how our platform works, in order to increase transparency, improve security efforts, share best practices, and help the authorities conduct their respective investigations.

We know there were some areas where we could have done better, such as moving faster against bad actors. But overall, we feel Facebook helped contribute to this year’s races and gave more Mexicans a chance to get involved.

With that let me turn to Samidh.

Samidh Chakrabarti: Thanks, Diego. Now, while that update focused on Mexico, I’ll point out that in the first half of this year we’ve done similar work around the world — in Italy, Colombia, Turkey, and even the primaries for the US midterms. And by the end of the year we’ll have focused on more than 50 national elections all around the world.

But zooming out a bit, I want to remind folks about the strategic pillars we have in place when it comes to our integrity work and why they matter. There are four key areas that we continue to focus on.

The first is fighting fake accounts. Using machine learning, we’ve gotten more effective at either blocking or quickly removing fake accounts. And that’s important since they’re often the root of a lot of the bad activity that we see on our platform. And we’re now at the point that we stop more than a million accounts per day at the point of creation.

Second is combating fake news. We continue to expand this effort with 27 third-party fact-checking partners in 17 countries. And my colleague, Tessa, will expand on this work in a minute.

The third is around more ad transparency. We continue to bring increased levels of transparency to online political advertising which my colleague, Rob, will cover in more detail in a moment.

And the fourth is proactively monitoring for abuse. We’ve taken powerful computational techniques that are traditionally used to fight spam and applied them to the civic space. And that’s led to great advancements recently in our ability to precisely detect and disrupt organized information operations. This helped power some of the examples that Diego raised earlier.

To make all this happen, we’re investing both in technology and in people. We continue to add team members dedicated to this work, having recently passed 15,000, and we expect to be closer to 20,000 people by the end of the year. As most of you know, we review reports of bad content in over 50 languages, and we do that 24 hours a day, seven days a week.

Another important tool is increased work with partners. This includes groups like the Digital Forensic Research Lab at the Atlantic Council, which works closely with our teams to get us real-time insights and intelligence on emerging threats and disinformation campaigns from all around the world. It also includes collaborations with government. Diego mentioned INE in Mexico, and in Brazil we’re working with the Superior Electoral Court as well as regional courts and electoral law enforcement authorities, particularly around online ads.

The last point I want to highlight is the product work we’re doing to support a more informed and engaged electorate as well. In the US, this election cycle, for example, we’re encouraging people on Facebook to register to vote for the midterms and helping people create their own voter registration drives to get their friends to vote. We’re also building products that make it easier for people to find high-quality information during an election.

Diego mentioned a few of these earlier. The goal is to reduce barriers between candidates and potential voters and to help people interact directly with those running for office, which we believe can really strengthen the democratic process.

With that, let me turn it over to Tessa to offer some updates on our misinformation work.

Tessa Lyons: Thanks, Samidh. We continue to work to try to fight the spread of false news. Over the past few months, we’ve expanded our work. Before we get to those updates, I want to take a minute to talk about our overall approach to false news because it’s gotten a lot of attention recently. We have a three-part approach for fighting false news on Facebook.

First, we remove anything that violates our Community Standards. These violations are often associated with the spread of false news. For example, fake accounts are used to seed and spread false content and we remove them. Second, we reduce the distribution of content that does not violate our Community Standards, but does undermine the authenticity of our platform. For example, we show stories rated false by fact-checkers lower in News Feed so dramatically fewer people see them. And third, we inform people about the stories they see in their News Feed. For example, we recently rolled out a context button on articles in the US and the UK to give people more information about the sources behind the articles they’re seeing.

We’ve gotten criticism lately about our decision to reduce the distribution of false news rather than remove it. Here’s how we think about this. If you are who you say you are and you’re not violating our Community Standards, we don’t believe we should stop you from posting on Facebook. This approach means that there will be information posted on Facebook that is false and that many people, myself included, find offensive.

And since not everyone agrees with where the line is on our Community Standards, people disagree with our actions on individual cases both when we remove things and when we don’t. Just because something is allowed to be on Facebook doesn’t mean it should get distribution.

We know people don’t want to see false information at the top of their News Feed and we believe we have a responsibility to prevent false information from getting broad distribution. This is why our efforts to fight disinformation are focused on reducing its spread. We’ve seen progress with this approach, but we know we have more work to do.

We recently announced ways in which we’re expanding our work to fight the spread of false news. On the remove part of our approach, last week we announced a new policy under which we’ll remove misinformation that is confirmed by trusted third parties to be contributing to or exacerbating imminent violence or physical harm, and we’re in the process of implementing this policy.

On the reduce front, we’re scaling our partnership with fact-checkers. We’ve expanded our fact-checking program to 17 countries and also expanded our tests of photo and video fact-checking. We’re using machine learning to increase the impact of our work. For example, we’re automatically identifying duplicates of stories rated false and we’re identifying and demoting foreign Pages that are spreading financially motivated hoaxes.
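
To illustrate the kind of duplicate matching described here, the following is a minimal, hypothetical sketch that flags lightly reworded copies of a story fact-checkers have already rated false. The shingle-and-threshold approach, example strings, and numbers are invented for illustration; they are not Facebook’s actual implementation.

```python
# Hypothetical sketch: flag near-duplicates of stories already rated false.
# Illustration only; not Facebook's actual system.

import re

def shingles(text: str, n: int = 3) -> set:
    """Lowercase, strip punctuation, and return the set of n-word shingles."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two shingle sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def is_near_duplicate(candidate: str, debunked_stories: list, threshold: float = 0.6) -> bool:
    """True if the candidate closely matches any story a fact-checker rated false."""
    cand = shingles(candidate)
    return any(jaccard(cand, shingles(story)) >= threshold for story in debunked_stories)

# A lightly reworded copy of a debunked story is caught, so it could inherit
# the same demotion without requiring a second fact-checker review.
debunked = ["Scientists confirm chocolate cures the common cold overnight"]
copycat = "Scientists confirm chocolate cures the common cold overnight, doctors say"
print(is_near_duplicate(copycat, debunked))  # True
```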

And since we know that fact-checkers debunk stories on their websites that they don’t enter into our product, we shared last month that we’re going to start using a system called ClaimReview in order to minimize extra work for fact-checkers and increase our ability to take action against Pages and domains that repeatedly share false news.
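
ClaimReview is an open schema.org markup format that fact-checkers already publish alongside their debunks; reading it lets a platform pick up a verdict without extra data entry by the fact-checker. As a rough illustration, with all values invented for the example, a single fact-check rendered as a ClaimReview-style record might look like this:

```python
# Illustrative ClaimReview-style record (schema.org/ClaimReview); the values
# below are made up for the example and do not come from the briefing.
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "datePublished": "2018-07-20",
    "url": "https://example-factchecker.org/reviews/chocolate-cold-cure",  # hypothetical
    "claimReviewed": "Chocolate cures the common cold overnight",
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "itemReviewed": {
        "@type": "CreativeWork",
        "url": "https://example.com/original-story",  # hypothetical original article
    },
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",
    },
}

# A platform crawling this markup can read the verdict directly.
print(json.dumps(claim_review, indent=2))
print("Verdict:", claim_review["reviewRating"]["alternateName"])
```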

Now, for an update on our security work, let me turn it over to Nathaniel.

Nathaniel Gleicher: Thanks, Tessa. I just wanted to offer a brief update on the work we’re doing related to identifying and disrupting information operations on our platforms. Information operations are when sophisticated threat actors run coordinated efforts to mislead and manipulate the public. There are three important elements to every information operation: coordination among accounts, Pages, or offline groups; manipulation or deception; and a strategic goal, such as domestic or foreign interference.

When you combine these together, these are the types of malicious operations that we’re focused on and focused on disrupting, whether they’re run by the IRA, by other foreign actors, or by the domestic operators that we’ve seen targeting public debate around the world. The team here at Facebook that tackles this problem includes threat intelligence investigators, data scientists, product and engineering experts, colleagues from policy and legal, and others. We also partner closely with all the other teams on this call.

Information operations are fundamentally a security problem. We’re dealing with adaptive persistent adversaries, not unlike the challenge that information security experts face stopping advanced persistent threats in the world of cyber-enabled espionage. You can never solve a security problem. Threat actors will constantly find new ways to cause harm. But what you can do is make it much, much harder for these actors to conduct information operations across our platforms.

The way we do this focuses on two essential types of work: manual investigations and automated detection and disruption. Manual investigations are the bread and butter of this team. These are like looking for a needle in a haystack. They are time consuming. They require heavy investment and skilled investigators, but they’re critical because the threat actors in this space are continually innovating. We need our own sophisticated investigators constantly innovating on our end. The best examples of our manual investigations are our recent disruptions of the IRA, along with the disruption in Mexico.

The challenge with manual investigations is that by themselves they don’t scale; they can’t be our only tool to tackle information operations. For each investigation, we identify particular behaviors that are common across threat actors. And then we work with our product and engineering colleagues as well as everyone else on this call to automate detection of these behaviors and even modify our products to make those behaviors much more difficult.

A good example of this is our recent work to scale up our ability to automatically detect and remove fake accounts. We’ve seen repeatedly that information operators rely heavily on fake accounts so we’re drastically reducing their ability to use them.
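
As a purely illustrative sketch of that kind of automation — the real signals, models, and thresholds are not public, and everything below is invented — a point-of-creation check might combine a few behavioral signals into a risk score and route the signup accordingly:

```python
# Hypothetical sketch of scoring a new account at the point of creation and
# routing it: block outright, send to manual review, or allow. Signals,
# weights, and thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class SignupSignals:
    accounts_from_same_ip_last_hour: int
    profile_photo_reused_elsewhere: bool
    friend_requests_in_first_minute: int
    email_domain_is_disposable: bool

def risk_score(s: SignupSignals) -> float:
    """Combine signals into a 0-1 risk score (toy linear model)."""
    score = 0.0
    score += min(s.accounts_from_same_ip_last_hour, 20) * 0.03
    score += 0.25 if s.profile_photo_reused_elsewhere else 0.0
    score += min(s.friend_requests_in_first_minute, 10) * 0.02
    score += 0.15 if s.email_domain_is_disposable else 0.0
    return min(score, 1.0)

def route(s: SignupSignals) -> str:
    """Automate the cheap, obvious cases; humans handle the ambiguous ones."""
    r = risk_score(s)
    if r >= 0.8:
        return "block_at_creation"
    if r >= 0.5:
        return "manual_review"
    return "allow"

print(route(SignupSignals(25, True, 10, True)))   # block_at_creation
print(route(SignupSignals(0, False, 0, False)))   # allow
```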

If manual investigations are like looking for a needle in a haystack, our automated work is like shrinking that haystack. It reduces the noise in the search environment which directly stops unsophisticated threats. And it also makes it easier for our manual investigators to corner the more sophisticated bad actors. In turn, those investigations keep turning up new behaviors which fuels our automated detection and product innovation.

Our goal is to create this virtuous circle where we use manual investigations to disrupt sophisticated threats and continually improve our automation and products based on the insights from those investigations. Look for the needle and shrink the haystack.

Now, I’m going to turn it over to Rob to talk a bit more about how we are helping inform users about the ads they see on Facebook.

Rob Leathern: Thanks, Nathaniel. I’m Rob from the ads team. We believe people should be able to easily understand why they’re seeing ads, who paid for them, and what other ads that advertiser is running. Already this summer, we’ve been working to make that possible. Last month, we introduced the ability to view a Page’s active ads, whether or not they were targeted at you.

Now, you can tap a Page’s Info and Ads tab and see the ads that Page is currently running, filter them by country, and report an ad by tapping the three dots in the top right-hand corner. Also, in the Info and Ads tab, you can see more information about the Page, like when it was created and its previous Page names.

In May, we made big changes to the way ads related to politics or issues work on Facebook. Now, all election-related or issue ads on Facebook and Instagram in the US must be clearly labeled, including a paid-for-by disclosure from the advertiser at the top of the ad. This helps ensure that people can see who is paying for the ad, which is especially important when the Page name doesn’t match the name of the company or person funding the ad.

We also introduced a searchable archive for political content that houses these ads for up to seven years, and a broad policy to determine which ads go in the archive, which ads require a label, and which require the person placing them to confirm their identity. Some of the recent top keyword searches we’ve seen in the archive have been California, Clinton, Elizabeth Warren, Florida, Kavanaugh, North Carolina, and Trump.

This August, we’re set to provide an API for outside experts, researchers, academics, journalists and publishers. We’ll soon announce the details of how and with whom we’ll test the API. Following learnings from this test, we’ll share more on the criteria for access during the broader rollout. Offering an API is an important part of our efforts to give more transparency to people. We know these aforementioned groups can help by analyzing political advertising on Facebook.

Since May, we’ve been encouraged by the work of these experts to bring insights to light. Last week, NYU published a report analyzing political advertising on Facebook using the archive, and many reporters are covering these political messages in new ways. For example, two days after Justice Kennedy announced his retirement, reporters were able to see which groups were advertising and how much they were spending.

Others have covered how potential 2020 presidential contenders are spending now to establish national profiles. This work all hinges on our principle that advertising should be transparent, and we’re going to err on the side of being more transparent. These are only first steps and we’re continuing to work to get better. For example, we added a separate section to the archive to distinguish between sponsored articles from news organizations and political ads.

We also continue to listen to people’s feedback on how they’re using these new tools and we’re working to improve them. Now, we’re actively exploring ways to streamline the authorization process and are clarifying our policy by providing examples of what ad copy would require authorization and a label and what would not. Enforcement won’t be perfect, but we can do more to help educate advertisers on this new policy and new process.

We know it’s our responsibility to help people learn more about content related to politics or issues of national importance that they’re interacting with on Facebook. And we’ll continue to take more steps to do so, whether that is expanding these tools to more countries or adding more functionality as more people use them.

Now, I’ll turn it back to Tom to wrap it up.

Tom Reynolds: Great, thanks, everybody. Just real quick, I understand there may have been a slight sound glitch for about 10 seconds when Rob was speaking. No need to worry, we’re going to post a transcript and the audio as soon as we can after the call, so if you missed any bit of that, you can find them verbatim in the Newsroom ASAP.

I know that was a lot of information, but we thought it was important that you, again, hear from us how we’re talking about these key issues.

And with that, we’re going to open it up for Q&A. Operator, do you want to walk through the instructions here?

Operator: At this time, I would like to inform everyone, in order to ask a question, press “star” then the number “one” on your telephone keypad. To withdraw your question, press the “pound” key. We will pause for a moment to compile the Q&A roster.

Your first question is from Elizabeth Dwoskin from Washington Post.

Elizabeth Dwoskin, The Washington Post: Hey, guys. Thanks for doing the call. I have two quick questions — one for Tessa and one for Rob. Tessa, the question is about the distribution question. Even if you’re not going to — you decide to give the content low distribution, it still means your algorithms are spreading it, just spreading it less? How precisely do you decide whose News Feed it should be on top of since you’re still spreading it to some degree?

And, Rob, the question for you is about the agreement today, the legally binding agreement that you signed with Ferguson around racially discriminatory and other discriminatory ads. Why did you agree in the settlement to hold yourself legally liable?

Tessa Lyons: Thanks so much. I’ll chime in and then turn it over to Rob. The News Feed ranking algorithm shows people information that they’ve chosen to connect with, whether it’s friends or Pages that they’ve chosen to follow. And it shows that information in an order that is personalized based on what they find most meaningful. When we take action to reduce the distribution of misinformation in News Feed, what we’re doing is changing the signals and predictions that inform the relevance score for each piece of content.

Now, what that means is that that content appears lower in the News Feed of everyone who might see it, and so fewer people will actually end up encountering it. Now, the ranking algorithm is aiming to connect people with what they personally find meaningful. And we know from people that fake news, sensationalism, clickbait, and ad farms aren’t meaningful, which is why we’ve chosen to take that action.
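
As a toy illustration of how a demotion can fold into that kind of relevance scoring — all weights, predictions, and the demotion multiplier below are invented for the example and are not Facebook’s real figures — the same downscaling applies for every potential viewer, so a story rated false simply ranks lower wherever it would have appeared:

```python
# Hypothetical sketch: a demotion applied as a multiplier on a per-person
# relevance score. Every number here is made up for illustration.

def relevance_score(predictions: dict, rated_false: bool) -> float:
    """Weighted sum of engagement predictions, scaled down if fact-checked false."""
    weights = {"p_click": 1.0, "p_comment": 3.0, "p_share": 5.0}
    base = sum(weights[k] * predictions[k] for k in weights)
    demotion = 0.2 if rated_false else 1.0  # illustrative multiplier, not the real figure
    return base * demotion

story = {"p_click": 0.30, "p_comment": 0.05, "p_share": 0.02}
print(relevance_score(story, rated_false=False))  # 0.55
print(relevance_score(story, rated_false=True))   # 0.11 -> ranks below most other stories
```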

Rob Leathern: In relation to the Washington attorney general’s situation, I can’t speak to the specific legal piece, but I can say unequivocally that discriminatory advertising has no place on the Facebook platform; it’s against our policies.

We’ve already made a great deal of progress on the terms that we agreed to, which has included removing many targeting terms from our systems, and naturally we’re committed to ensuring that advertisers are going to use our tools responsibly; we think that’s obviously the right thing to do. We’re thankful that these issues have been raised and we’re committed to making further progress on this.

Tom Reynolds: Thanks, operator. Next question?

Operator: Your next question comes from Kevin Roose from the New York Times.

Kevin Roose, The New York Times: Hello, everyone. This question I guess is for Nathaniel, but anyone can weigh in. Leading up to the 2018 midterms, have you detected any activity that looks like a coordinated information operation coming from either the IRA or other actors, whether foreign or domestic? And what are some of the suspicious signs that you’re looking for specifically with regard to these elections?

Nathaniel Gleicher: Sure. I mean, we know that Russians and other bad actors are going to continue to try to abuse our platform — before the midterms, probably during the midterms, after the midterms, and around other events and elections. We are continually looking for that type of activity, and as and when we find things, which we think is inevitable, we’ll notify law enforcement, and where we can, the public.

Tom Reynolds: Great. Thanks. Operator, if we can take the next question?

Operator: Your next question comes from Dan Patterson with CBS News.

Dan Patterson, CBS News: Hey, Nathaniel. Great to talk to you again. I just wonder if there is a danger of using machine learning and artificial intelligence as a magic bullet, and how do we know what the KPIs are for success?

Nathaniel Gleicher: Part of why I talked about this combination of manual investigations and automated work is I certainly wouldn’t describe artificial intelligence or any other individual tool as a magic silver bullet. If you rely on any one tool in the sort of complicated challenge we face here, you’re not going to get a good outcome.

What we find is most effective is combining those. You use artificial intelligence, you use machine learning tools, you use automation more generally to do that shrinking of the haystack — to simplify the environment, to identify the patterns, and to run automated tools that can make the low-grade noise, the low-grade malicious activity harder to get through, and can make it easier for our sophisticated investigators to know where to look.

But you don’t just trust automated systems and artificial intelligence. That’s why we have a threat intelligence team of incredibly well-trained investigators which focuses on the types of threat actors we see around the world and the types of activities we see. By combining those two, that’s how you get the best results from what we see.

Operator: Your next question comes from the line of Jo Ling Kent with NBC.

Jo Ling Kent, NBC: Hi guys, thanks for taking this call. This is probably for Tessa or Samidh here. Even though you’re distributing this false news or fake information less, some of these pieces of information or fake Pages have resulted in harassment, trolling, basically negative experiences for some of your users. And that often leads to violations of your Community Standards. Why allow that content to be shared in the first place when basically the feedback loop for the user who’s being targeted seems so circular?

Tessa Lyons: This is why we believe that we need a multipart approach. We do remove anything that violates our Community Standards, and I want to be really clear about that — including if that information is misinformation that also violates standards, we will remove it, whether it’s accounts or content or behavior.

When it comes to information that doesn’t violate our Community Standards but that is demonstrably false, that’s where we don’t remove the information but instead reduce its distribution. And I think there’s a really important distinction between what’s allowed on Facebook and what is distributed on Facebook.

If you are who you say you are and you are not violating our Community Standards, you can post on Facebook. But getting distribution in News Feed is different, and that’s because we know people come to News Feed to connect with information that’s meaningful for them, and as I mentioned information that is fake or clickbait isn’t meaningful to them. That’s why we’ve worked to limit the distribution of that type of information in News Feed, and we’re already seeing progress but we certainly have more work to do.

Operator: Your next question comes from Joseph Menn with Reuters.

Joseph Menn, Reuters: Hi there. I guess this is probably for Tessa. Can you walk me through a little bit, please, the InfoWars question: why are they something that is allowed on Facebook but the distribution is reduced?

And (kind) of explain — there’s not a lot of transparency here in this process — so what makes a site like that — a Page like that that is popular and says stuff that everybody knows is false and divisive, why — how do you decide whether one of those Pages gets to be followed, gets distribution on News Feed, and another doesn’t or has less? Is there any common metric — they get (whacked) 80% or 50% or something?

Tessa Lyons: Because of the approach that we take — which is to remove information that violates our Community Standards and to remove Pages that repeatedly violate those standards, but, when information is simply false, to reduce its distribution rather than remove it, and to reduce the distribution of Pages that repeatedly share false information — the result of that approach is that there is information on Facebook that is false, and that many people — and I want to be clear, myself included — would find offensive.

Now, the reason that we strike that balance is because we believe we are working to strike the balance between expression and the safety of our community. And we think it’s a hard balance to strike, and it’s an area that we’re continuing to work on and get feedback on — and to increase our transparency around.

When it comes to transparency, anything that is marked false by a fact-checker generates a related article, as well as notifications for the Page admins and anyone else who shared it. We agree that there’s an opportunity for us to be more transparent, and we want to be more transparent about this work because we know that in order to contain misinformation we need civil society, government, academics — all working together, which is part of why we’ve committed to sharing more information with academics, so that we can all better understand the volume, effect, and progress that we’re making in the fight against misinformation.

Rob Leathern: I would just actually add — this is Rob — I’d add to that as well that we also plan to add more information to the Info and Ads tab for Pages. Today you can see when the Page was created, previous Page names, but over time we hope to add more context for people there in addition to the ads that that Page may have run as well.

Tom Reynolds: Thanks. Operator, we can take our next question.

Operator: Your next question comes from Julia Boorstin with CNBC.

Julia Boorstin, CNBC: Hi, thanks for taking the question. I just wanted to follow up on the question about the midterm elections. Could you give us a little bit more context about what you’re expecting to see and whether you anticipate any amped-up indication of interference, whether you expect to cut down on overall political advertising, and whether you’re willing to give up some legitimate political ad revenue — or think it’ll be necessary to give up some legitimate political ad revenue — to ensure that there’s no interference?

Nathaniel Gleicher: Sure, that’s a great question. I mean, we — as I said before, and I just want to sort of emphasize it — we’re always looking for this sort of activity, and it’s fairly clear — we think it’s inevitable — that we will find evidence, and we will find other actors, whether these are from Russia, from other countries, or domestic actors that are looking to continue to try and abuse the platform.

As we find that, we’re committed to working with law enforcement and also with the public, and telling the public about this. One nuance here is that because of the nature of these investigations, we always have to be careful about compromising the investigation — either our own or the government’s — when we’re thinking about how to engage with the public around these issues.

Rob Leathern: And I’ll just add, on the revenue question, that political ads aren’t a large part of our business from a revenue perspective, but we do think it’s very important to give people tools so they can understand how these ads are being used, to give them that transparency, and, obviously, to allow these ads on the platform with the right controls. We do think that’s very important, but it is not a large part of our business.

Samidh Chakrabarti: And actually — this is Samidh, I’ll just add one more thing. With the archive of ads with political content that we made available for anybody to look at, this is actually a great way that we think we can get more eyes on these ads and give the public a greater view into the type of paid political communication that’s taking place on the platform. And we think that the public and other watchdog groups that are out there can actually be helpful in preventing any kind of future interference.

Nathaniel Gleicher: And I’d actually add — this is Nathaniel — that this is a great example of seeing behaviors that threat actors exploit and then making changes with automation in the product to make them more difficult. We’ve seen threat actors exploit the use of ads, and we’re improving the transparency in the product to make it harder for them to do that. That feedback loop is really core to how we are tackling this problem.

Operator: Your next question comes from Deepa Seetharaman with the Wall Street Journal.

Deepa Seetharaman, The Wall Street Journal: Thanks for doing this. I’ve got a question for Rob. I’m talking to a lot of campaign strategists coming up to the 2018 election. One theme that I continuously hear is that the more incendiary ads do better, but the effective CPMs on those particular ads are lower than, I guess, neutral or more positive messaging. Is that a dynamic that you guys are comfortable with? And is there anything that you’re doing to kind of change the kind of ads that succeed through the Facebook ad auction system? Thanks.

Rob Leathern: Thanks for the question. Look, I think that it’s difficult to take a very specific slice of a single ad and use it to draw a broad inference which is one of the reasons why we think it’s important in the spirit of the transparency here to continue to offer additional transparency and give academics, journalists, experts, the ability to analyze this data across a whole bunch of ads.

That’s why we’re launching the API and we’re going to be starting to test it next month. We do believe it’s important to give people the ability to take a look at this data more broadly. That, I think, is the key here — the transparency and understanding of this when seen broadly will give us a fuller picture of what is going on.

Operator: Your next question comes from David McCabe with Axios.

David McCabe, Axios: Hello, I want to thank you for holding the call. I just have a quick follow up. Nathaniel, I appreciate the context around how you’re addressing some of the midterm stuff, but I’m not entirely sure we got an answer to Kevin’s question, which was: have you actually seen evidence so far? You said you expected (stuff) before the midterms, but have you seen evidence so far of any disinformation campaigns directed at disrupting the midterms? I was hoping you could give us just a yes or no on whether or not the company’s seen that.

Nathaniel Gleicher: I mean, as I said before, I mean, when we find things and as we find things — and we expect that we will — we’re going to notify law enforcement and we’re going to notify the public where we can.

Operator: Your next question comes from Cecilia Kang with the New York Times.

Cecilia Kang, The New York Times: Hi, thanks. I had a similar question. Let me just follow up on that. So in other words, you cannot tell us what you found so far? That’s kind of my takeaway.

Nathaniel Gleicher: I mean, I said before that we are looking for this sort of activity and these are ongoing investigations. And one of the things we have to be really careful with here is that as we think about how we answer these questions, we need to be careful that we aren’t compromising investigations that we might be running or investigations the government might be running.

Samidh Chakrabarti: This is Samidh. I’ll just add one quick thing here — and this is just one piece of the broader puzzle around election integrity that I think it’s important to realize. A lot of the work that we’ve done around transparency of both ads and Pages actually creates structural disincentives for bad actors to try to meddle and interfere. And so I think you have to look at the full picture there to see the kinds of preventive things that we’ve been doing on the platform that we think have been very effective.

Operator: Your next question comes from Steven Levy with WIRED.

Steven Levy, WIRED: Hi. Thanks for doing the call. I have one for Tessa and one for Nathaniel. Tessa, I wanted to know if you have been collecting any data that show that these efforts in identifying false news and the sometimes questionable sources of information are leading people to actually read or click less on those posts? And for Nathaniel, I want to ask if you could tell us anything more about what the Digital Forensic Research Lab has been able to share with you regarding threats?

Tessa Lyons: I’ll chime in first and then turn it over. We made a change in December to the treatment that we were using to give people more information about what they were seeing in News Feed when it came to false news. And we saw that when we started showing related articles, so making the context from fact-checkers front and center in News Feed, people were less likely to share the false stories. We do have some data to show progress on the informed treatment as well as to show progress on the impacts of the demotion. But certainly in all of these areas there’s more research to be done.

Nathaniel Gleicher: And for the question about the DFR lab, I think — I mean, part of our conviction here is that tackling sophisticated information operators is a challenge and it’s a big challenge and it’s not something that any one organization can do by itself. And so we really focus on partnering with other companies in the tech sector, working with law enforcement and with independent research organizations like the Atlantic Council’s DFR Lab.

We have a team of investigators that look for malicious activity and we have a set of perspectives and tools that we can use to help us unearth that sort of activity but other teams like the DFR Lab have different sets of expertise and different tools. And what we’ve found is that by partnering with these organizations we can build off of their insights and then we can also add context so that as all of us together are working on it, we’re more effective at being able to tackle these threats.

I can’t speak directly to the particular types of information they might share or not because these are ongoing investigations, and information operations across social media are a changing challenge — each investigation is a little bit different. But I think the key is that they’re able to bring a different perspective and access to different sets of information that provide another piece of the puzzle.

Operator: Your next question comes from Hannah Kuchler with the Financial Times.

Hannah Kuchler, The Financial Times: Hi. Thanks for having the call. This is a question for Tessa. Last week, you spoke about how you changed the policy to remove posts when they incite violence. Will you be transparent about that and publish when you decide to remove those posts?

Tessa Lyons: Last week we announced a new policy that we’re in the process of implementing. And just for those who don’t have context on it, the goal of that policy is to recognize that in some cases, there are forms of misinformation that have contributed to physical harm or imminent real world violence where we feel that we needed to make a policy change to not just reduce that distribution, but to actually remove it outright.

Since this is a new policy that we announced we’re in the process of implementing, I can’t speak to the details, but certainly as we’ve done across our Community Standards, we’re working to increase transparency of both the guidelines and processes as well as the results.

Operator: Your next question comes from Sheera Frenkel with the New York Times.

Sheera Frenkel, The New York Times: Hi. I was wondering if you have any intention going forward to be transparent about who is going — who is down-ranked and are you keeping track of the effect that down-ranking a Page or a person in the News Feed has and do you have those kinds of internal metrics? And then is that also something that you’ll eventually make public?

Tessa Lyons: I want to be clear that the News Feed ranking algorithm takes into account hundreds of thousands of signals and dozens of predictions in order to inform the personal relevance score of a given piece of content for an individual user. If you’re asking specifically whether we will be transparent about the impact of fact-checking on demotions, we are already transparent about the rating that fact-checkers provide. And we have shared that when a piece of content is rated false by fact-checkers, its future distribution is reduced by more than 80%.

As I also shared, we’re continuing to work with academics to provide them access to more data to help us better evaluate the collective effect of our efforts and we’re looking forward to those collaborations.

Operator: Your next question comes from Ashley Gold with Politico.

Ashley Gold, Politico: Hi. My question is mostly for Tessa. Back to the InfoWars thing, you said you’re still having conversations with people over what the best policy should be and going forward how you guys should handle false information. Do you see a future in which a Page like InfoWars will get taken down depending on how those conversations keep going and if there is public outcry or pressure from lawmakers or other stakeholders?

Tessa Lyons: What I can say is that if a piece of content or an account violates our Community Standards, it’s removed; if a Page repeatedly violates those standards, the Page is removed. On the side of misinformation — not Community Standards — if an individual piece of content is rated false, its distribution is reduced; if a Page or domain repeatedly shares false information, the entire distribution of that Page or domain is reduced.

Tom Reynolds: Thanks. Operator, we’re going to have time for two more questions, OK?

Operator: Our next question comes from Sarah Frier with Bloomberg.

Sarah Frier, Bloomberg: Hi. My question is also for Tessa. I’m wondering if you’ve made any improvements to the queue that fact-checkers look at to try to rate stories. I know in the past there was no de-duplication of links for subjects that were the same — basically the same lie in many different stories at once — and that it included a lot of less important items. Have you made any sort of improvements to that queue? And then, following up on Sheera’s question, when you do reduce the distribution for a Page, do you let that Page owner know that their visibility is being reduced?

Tessa Lyons: Thanks, Sarah. In terms of the queue quality: as Sarah is referring to, we predict potentially false information for fact-checkers to review. And this is an area where we’ve acknowledged we have more work to do in order to provide a more valuable service to those fact-checkers to help prioritize information that truly is potentially false. We’ve made some progress.

For example, we’re better leveraging machine learning in order to identify duplicates and make it more efficient for fact-checkers to not have to wade through a bunch of duplicates of the same information. That said, there’s a lot more work to do and this is a big priority for us over the next several months.

On your second question, in terms of how we notify Pages when they share information that’s false: any time any Page or individual shares a link that has been rated false by fact-checkers, if we already have a false rating we warn them before they share, and if we get a false rating after they share, we send them a notification. We are consistently transparent — particularly with Page admins, but also with anybody who shares content — about the way in which fact-checkers have evaluated that content.
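
A minimal sketch of that flow, with hypothetical function and field names that are not Facebook’s actual APIs, might look like the following: warn at share time if a false rating already exists, and notify earlier sharers when a false rating arrives later.

```python
# Hypothetical sketch of the notification flow just described. Names and
# structures are invented for illustration only.

fact_check_ratings = {}          # url -> rating, e.g. "false"
shares = []                      # (user, url) pairs already shared

def attempt_share(user: str, url: str) -> None:
    """Warn at share time if the link already carries a false rating."""
    if fact_check_ratings.get(url) == "false":
        print(f"warn {user}: fact-checkers dispute this link (shown before sharing)")
    shares.append((user, url))

def record_rating(url: str, rating: str) -> None:
    """If a false rating arrives after the fact, notify everyone who shared it."""
    fact_check_ratings[url] = rating
    if rating == "false":
        for user, shared_url in shares:
            if shared_url == url:
                print(f"notify {user}: a link you shared was rated false")

attempt_share("alice", "https://example.com/hoax")   # no rating yet, no warning
record_rating("https://example.com/hoax", "false")   # alice gets a notification
attempt_share("bob", "https://example.com/hoax")     # bob is warned before sharing
```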

Operator: Our last question comes from Kurt Wagner with Recode.

Kurt Wagner, Recode: Hey, thanks. This is probably — well, this is an ads question, probably for Rob. I’m wondering if you can give us a sense of what you’ve seen on political ad spend so far at this point in the 2018 midterms. Have advertisers been spending more than you expected, less than you expected, and how much of that is attributable to the changes that you’ve made about how you sell and show political ads?

Rob Leathern: Well I don’t think we’re breaking that out, but I do think we have definitely seen some folks have some (indigestion) about the process of getting authorized. We obviously think it’s an important tradeoff and it’s the right tradeoff to make. We’re definitely exploring ways to reduce the time for them from starting the authorization process to being able to place an ad.

We’re considering a preliminary location check that might expire after a certain amount of time, with the authorization becoming permanent once they verify their physical mailing address and receive the letter that we send to them. There are definitely some tweaks and things that we’re considering adding to the process — again, more from an advertiser-convenience perspective, while maintaining the high standards we have for authorizations and, obviously, maintaining the transparency for people in the ad archive.

Tom Reynolds: Before we wrap up, I just want to say thank you, everybody, for joining the call. I know we tried to cram in a lot of information here and to answer as many questions as we could. We hope to do these briefings on a more regular basis to keep folks informed on the important work that’s going on around elections. Thank you for taking the time to join us.

If you have follow-up questions, you can reach us by email, and the transcript from the call as well as the Q&A will be available in the Newsroom ASAP. Thank you again and talk to you soon.