Today we removed multiple Pages, groups and accounts for coordinated inauthentic behavior on Facebook and Instagram. Some of this activity originated in Iran, and some originated in Russia. These were distinct campaigns and we have not identified any link or coordination between them. However, they used similar tactics by creating networks of accounts to mislead others about who they were and what they were doing.

We ban this kind of behavior because we want people to be able to trust the connections they make on Facebook. And while we’re making progress rooting out this abuse, as we’ve said before, it’s an ongoing challenge because the people responsible are determined and well funded. We constantly have to improve to stay ahead. That means building better technology, hiring more people and working more closely with law enforcement, security experts and other companies. Their collaboration was critical to our investigation, since no one company can fight this on its own.

There is always a tension between taking down these bad actors quickly and improving our defenses over the long term. If we remove them too early, it’s harder to understand their playbook and the extent of their network. It also limits our ability to coordinate with law enforcement, who often have investigations of their own. It’s why we’ve investigated some of these campaigns for many months and why we will continue working to find out more. We’ll update this post with more details when we have them, or if the facts change.

What We’ve Found So Far
When to Take Action Against Cyber Threats
Sample Content
Press Call Transcript 

August 21, 2018

Updated on August 21, 2018 at 9:54PM PT to include additional sample posts.

What We’ve Found So Far

By Nathaniel Gleicher, Head of Cybersecurity Policy

We’ve removed 652 Pages, groups and accounts for coordinated inauthentic behavior that originated in Iran and targeted people across multiple internet services in the Middle East, Latin America, UK and US. FireEye, a cybersecurity firm, gave us a tip in July about “Liberty Front Press,” a network of Facebook Pages as well as accounts on other online services. They’ve published an initial analysis and will release a full report of their findings soon. We wanted to take this opportunity to thank them for their work.

Based on FireEye’s tip, we started an investigation into “Liberty Front Press” and identified additional accounts and Pages from their network. We are able to link this network to Iranian state media through publicly available website registration information, as well as the use of related IP addresses and Facebook Pages sharing the same admins. For example, one part of the network, “Quest 4 Truth,” claims to be an independent Iranian media organization, but is in fact linked to Press TV, an English-language news network affiliated with Iranian state media. The first “Liberty Front Press” accounts we’ve found were created in 2013. Some of them attempted to conceal their location, and they primarily posted political content focused on the Middle East, as well as the UK, US, and Latin America. Beginning in 2017, they increased their focus on the UK and US. Accounts and Pages linked to “Liberty Front Press” typically posed as news and civil society organizations sharing information in multiple countries without revealing their true identity.
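
To give a concrete sense of how signals like these can be combined, here is a minimal, hypothetical sketch of that kind of cross-referencing: overlapping website registration details, related IP addresses, and shared Page admins are treated as independent signals, and a candidate Page that shares several of them with a known network is flagged for human review. The data structures, field names, and threshold below are illustrative assumptions, not a description of our actual internal tooling.

```python
# Purely illustrative sketch, not our actual systems: combine the public signals
# described above (domain registration details, related IP addresses, shared
# Page admins) to flag candidate Pages for human review.
from dataclasses import dataclass, field


@dataclass
class PageProfile:
    page_id: str
    registrant_emails: set = field(default_factory=set)  # from public WHOIS records for linked sites
    hosting_ips: set = field(default_factory=set)        # IP addresses the linked domains resolve to
    admin_ids: set = field(default_factory=set)          # accounts administering the Page


def shared_signals(candidate: PageProfile, known: PageProfile) -> dict:
    """Return the infrastructure overlap between a candidate Page and a known network."""
    return {
        "registrants": candidate.registrant_emails & known.registrant_emails,
        "ips": candidate.hosting_ips & known.hosting_ips,
        "admins": candidate.admin_ids & known.admin_ids,
    }


def flag_for_review(candidate: PageProfile, known: PageProfile, min_signals: int = 2) -> bool:
    """Flag a candidate when at least `min_signals` independent signals overlap."""
    overlap = shared_signals(candidate, known)
    return sum(1 for values in overlap.values() if values) >= min_signals
```

An automated overlap check like this would only be a starting point; the attributions described in this post rest on the kind of manual investigation and third-party analysis mentioned above.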

Presence on Facebook and Instagram: 74 Pages, 70 accounts, and 3 groups on Facebook, as well as 76 accounts on Instagram.
Followers: About 155,000 accounts followed at least one of these Pages, 2,300 accounts joined at least one of these groups, and more than 48,000 accounts followed at least one of these Instagram accounts.
Advertising: More than $6,000 in spending for ads on Facebook and Instagram, paid for in US and Australian dollars. The first ad was run in January 2015, and the last was run in August 2018. Some ads have been blocked since the launch of our political ads transparency tools. We have not completed our review of the organic content coming from these accounts.
Events: 3 events hosted.
Content: A sample of English-language posts is included below.

 

The second part of our investigation found links between “Liberty Front Press” and another set of accounts and Pages, the first of which was created in 2016. They typically posed as news organizations and didn’t reveal their true identity. They also engaged in traditional cybersecurity attacks, including attempts to hack people’s accounts and spread malware, which we had seen before and disrupted.

Presence on Facebook and Instagram: 12 Pages and 66 accounts on Facebook, as well as 9 accounts on Instagram.
Followers: About 15,000 accounts followed at least one of these Pages and more than 1,100 followed at least one of these Instagram accounts.
Advertising: We have found no advertising associated with these accounts or Pages. We have not completed our review of the organic content from these accounts.
Events: We have found no events associated with these accounts or Pages.
Content: A sample of Arabic-language posts is included below.

The third part of our investigation uncovered another set of accounts and Pages, the first of which was created in 2011, that largely shared content about Middle East politics in Arabic and Farsi. They also shared content about politics in the UK and US in English. We first discovered this set in August 2017 and expanded our investigation in July 2018 as we stepped up our efforts ahead of the US midterm elections.

Presence on Facebook and Instagram: 168 Pages and 140 accounts on Facebook, as well as 31 accounts on Instagram.
Followers: About 813,000 accounts followed at least one of these Pages and more than 10,000 followed at least one of these Instagram accounts.
Advertising: More than $6,000 in spending for ads on Facebook and Instagram, paid for in US dollars, Turkish lira, and Indian rupees. The first ad was run in July 2012, and the last was run in April 2018. We have not completed our review of the organic content coming from these accounts.
Events: 25 events hosted.
Content: A sample of English-language posts is included below.

We’re still investigating, and we have shared what we know with the US and UK governments. Since there are US sanctions involving Iran, we’ve also briefed the US Treasury and State Departments. These sanctions allow companies to provide internet services for personal communications to people in Iran, including the government and its affiliates. But Facebook takes steps to prevent people in Iran and other sanctioned countries from using our ad tools. For example, our systems screen every advertiser to identify their current location and whether they’re named on the US government’s list of sanctioned individuals. Based on what we learn in this investigation and from government officials, we’ll make changes to better detect people who try to evade our sanctions compliance tools and prevent them from advertising.
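
As a rough illustration of that screening step, the sketch below checks an advertiser’s stated name and current location against a sanctions list before an ad is allowed to run. The country codes, list contents, and function names are placeholders for illustration only; this is an assumption about the general shape of such a check, not our actual compliance systems or the full US sanctions rules.

```python
# Hypothetical illustration only: a simplified advertiser screen of the kind
# described above. The lists below are placeholders, not real sanctions data.
SANCTIONED_COUNTRIES = {"IR"}                    # placeholder: countries subject to ad restrictions
SANCTIONED_NAMES = {"example sanctioned name"}   # placeholder for a published sanctions list


def advertiser_allowed(name: str, current_country_code: str) -> bool:
    """Block an ad buy when the advertiser appears to be located in a sanctioned
    country or matches a name on the sanctions list."""
    if current_country_code.upper() in SANCTIONED_COUNTRIES:
        return False
    if name.strip().lower() in SANCTIONED_NAMES:
        return False
    return True
```

A real screen relies on the official, regularly updated sanctions lists and much richer location signals; as noted above, we expect to keep improving how we detect attempts to evade these checks.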

Finally, we’ve removed Pages, groups and accounts that can be linked to sources the US government has previously identified as Russian military intelligence services. This is unrelated to the activities we found in Iran. While these are some of the same bad actors we removed for cybersecurity attacks before the 2016 US election, this more recent activity focused on politics in Syria and Ukraine. For example, they are associated with Inside Syria Media Center, which the Atlantic Council and other organizations have identified for covertly spreading pro-Russian and pro-Assad content. To date, we have not found activity by these accounts targeting the US.

We’re working closely with US law enforcement on this investigation, and we appreciate their help. These investigations are ongoing – and given their sensitivity, we aren’t sharing more information about what we removed.

 

Update on August 21, 2018 at 7:55PM PT

When to Take Action Against Cyber Threats

By Chad Greene, Director of Security

As soon as a cyber threat is discovered, security teams face a difficult decision: when to take action. Do we immediately shut down a campaign in order to prevent harm? Or do we spend time investigating the extent of the attack and who’s behind it so we can prevent them from doing bad things again in the future?

These questions have been debated by security experts for years. And it’s a trade-off that our team at Facebook has grappled with over the past year as we’ve identified different cyber threats — including the coordinated inauthentic behavior we took down today. There are countless things we consider in each case. How active is the threat? How sophisticated are the actors? How much harm is being done? And how will the threat play into world events? Here is a summary of what we have learned over the years – in many cases lessons that we have had to learn the hard way.

Who We Share Information With — and When

Cyber threats don’t happen in a vacuum. Nor should investigations. Really understanding the nature of a threat requires understanding how the actors communicate, how they acquire things like hosting and domain registration, and how the threat manifests across other services. To help gather this information, we often share intelligence with other companies once we have a basic grasp of what’s happening. This also lets them better protect their own users.

Academic researchers are also invaluable partners. This is because third-party experts, both individuals and organizations, often have a unique perspective and additional information that can help us. They also play an important role when it comes to raising the public’s awareness about these problems and how people can better protect themselves.

Law enforcement is crucial, too. There are cases where law enforcement can play a specific role in helping us mitigate a threat that we’ve identified, and in those instances, we’ll reach out to the appropriate agency to share what we know and seek their help. In doing this, our top priority is always to minimize harm to the people who use our services.

When we decide to take down a threat — a decision I’ll go into more below — we also need to consider our options for alerting the people who may have been affected. For example, in cases of targeted malware and hacking attempts that we know are being done by a sophisticated bad actor, like a nation state, we may put a notice at the top of people’s News Feed to alert them and make sure their account is safe. In the case of an attack that seeks to cause broader societal harm – like using misinformation to manipulate people or create division – where possible we share what we know with the press and third-party researchers so the public is aware of the issue.

When We’d Wait — And When We’d Act

When we identify a campaign, our aim is to learn as much as we can about the extent of the bad actors’ presence on our services, their actions, and what we can do to deter them. When we reach a point where our analysis is turning up little new information, we’ll take down a campaign, knowing that more time is unlikely to bring us more answers. This was the case with the campaign we took down today, which was linked to Russian military intelligence services.

But if we’re still learning as we dig deeper, we’ll likely hold off on taking any action that might tip off our adversary and prompt them to change course. After all, the more we know about a threat, the better we’ll be at stopping the same actors from striking again in the future.

This is particularly true for highly sophisticated actors who are adept at covering their tracks. We want to understand their tactics and respond in a way that keeps them off Facebook for good. Amateur actors, on the other hand, can be taken down quickly with relative confidence that we’d be able to find them if they crop up elsewhere — even with limited information on who they are or how they operate.

Often, though, we have to take action before we’ve exhausted our investigation. For example, we’ll always move quickly against a threat when there’s an immediate risk to safety. So if we determine that someone is trying to compromise another person’s account in order to determine their location — and we suspect the target might be in physical danger — we’d take action immediately, as well as notify the person being targeted and law enforcement when appropriate.

These considerations don’t stop at physical harm. We also look at how a threat might impact upcoming world events. This sometimes means that we speed up taking something down because an event is approaching. This was the case when we removed 32 Pages and accounts last month. In other cases, it may mean delaying action until just before an upcoming event, to reduce the chances that a bad actor will have time to regroup and cause harm.

Our Best Bet

Security experts can never be one hundred percent confident in their timing. But what we can do is closely consider the many moving pieces, weigh the benefits and risks of various scenarios, and make a decision that we think will be best for people on our services and society at large.

 
