DeepPhish Project Shows Malicious AI is Not as Dangerous as Feared

Artificial intelligence (AI) is fast becoming a de facto necessity for cybersecurity. The asymmetric nature of cyberattacks simply overwhelms traditional manual analyst defenses, and organizations must increasingly use AI and machine learning (ML)-enhanced technologies to detect known attacks and predict (that is, determine the probability of) new and unknown attacks at machine speed.

Marketers say this gives the defenders, possibly for the first time, an advantage over the attackers. Security researchers accept this, but warn that the attackers are already beginning to use their own AI -- and that this will swing the advantage back to the attacker. Alejandro Correa, chief data scientist and VP of AI & Research at Cyxtera, questions both assertions, noting the lack of scientific evidence behind them.

Correa set up a project, called DeepPhish, to examine the extent to which ML technologies can genuinely aid in the detection of phishing, and the extent to which those same technologies could be used by cybercriminals to bypass anti-phishing defenses. DeepPhish is the name given to the potential malicious use of AI to aid criminal phishing campaigns.

In a sense it was a classic red team/blue team project. The red team was tasked with developing DeepPhish, while the blue team would later be tasked with mitigating DeepPhish.

Correa presented his findings at Black Hat Europe this week, and afterwards talked to SecurityWeek about DeepPhish in particular, and the potential use of ML by bad guys in general.

He and his team chose to base the project on phishing for two reasons. Firstly, it is a major security issue, with between 91% and 93% of all cybercrimes and cyberattacks starting with a phishing email. Secondly, the team had access to a vast pool of data through PhishTank that could be used as fodder for the machine learning.
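
For readers who want to experiment along similar lines, PhishTank publishes its verified-phish database as a downloadable feed. A minimal sketch of pulling it in Python follows; the endpoint and column name are assumptions based on PhishTank's public documentation, not details from Cyxtera's project, and a free API key may be required in practice:

```python
# Minimal sketch: pull PhishTank's verified-phish feed as raw material
# for ML experiments. The endpoint and "url" column follow PhishTank's
# public documentation (an API key may be needed in practice); this is
# not code from the Cyxtera project.
import pandas as pd

FEED_URL = "http://data.phishtank.com/data/online-valid.csv"

def load_phishtank_urls(feed_url=FEED_URL):
    """Return the verified phishing URLs from the PhishTank feed."""
    frame = pd.read_csv(feed_url)
    return frame["url"].dropna().tolist()

if __name__ == "__main__":
    urls = load_phishtank_urls()
    print(f"Loaded {len(urls)} verified phishing URLs")
```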

The project did not look at spear-phishing, for the precise reason stated in last year's paper that won the Facebook Internet Defense Prize at the 26th USENIX Security Symposium in Vancouver, BC: "With such a small number of known spearphishing instances, standard machine learning approaches seem unlikely to succeed: the training set is too small and the class imbalance too extreme." The size of the data set employed for machine learning is fundamental to its success.
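
The point is easy to demonstrate: under extreme class imbalance, a model that learns nothing can still look accurate. A toy illustration on synthetic numbers (not data from the project):

```python
# Toy illustration of the class-imbalance problem: with 10 spear-phish
# examples among 10,000 benign ones, the trivial "always benign" rule
# already scores 99.9% accuracy, so accuracy alone rewards learning nothing.
n_benign, n_phish = 10_000, 10

labels = [0] * n_benign + [1] * n_phish   # 0 = benign, 1 = spear-phish
predictions = [0] * len(labels)           # degenerate majority-class model

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
recall = sum(p == y == 1 for p, y in zip(predictions, labels)) / n_phish

print(f"accuracy = {accuracy:.3%}, recall on phish = {recall:.0%}")
# accuracy = 99.900%, recall on phish = 0%
```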

An analysis of the phishing URLs obtained from PhishTank disclosed 106 distinct URLs that a single criminal gang had used to launch more than 1,000 attack URLs -- the Cyxtera team dubbed it the Purple Rain Gang. This gang was particularly successful, achieving a success rate of 0.69%, almost three times the overall rate across all phishing. A second 'gang' (the Maple Gang) was less prolific, but even more effective, achieving a success rate of almost 5%.

"The red team," Correa told SecurityWeek, "was very proficient in AI tools, and was told, 'You guys put yourselves in the attackers' shoes, and try to use ML-technologies to be able to improve the attackers' effectiveness'."

The red team selected two different threat actors that had been targeting various financial institutions. The first was used for DeepPhish development, and the second to confirm its effect. The team was able to see both actors' attacks over a period of 18 months. From the first, the more prolific gang, the red team built a machine learning model -- or more specifically a deep learning model -- that learns what makes a successful attack.
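
The article does not publish the model itself, but the approach is consistent with a character-level recurrent network trained on an actor's historically effective URLs and then sampled to produce new, similar-looking candidates. A minimal sketch under that assumption, where the architecture, toy corpus, and hyperparameters are illustrative rather than the team's actual code:

```python
# Minimal sketch of a character-level generative model in the spirit of
# DeepPhish: train an LSTM on an actor's effective phishing URLs, then
# sample new URL strings that imitate the patterns that worked.
# The corpus and hyperparameters are illustrative, not from the project.
import numpy as np
from tensorflow import keras

effective_urls = [
    "http://secure-login.example-bank.com/verify",
    "http://example-bank.com.account-update.net/signin",
]  # stand-in for a gang's historically successful URLs

text = "\n".join(effective_urls)
chars = sorted(set(text))
char_to_ix = {c: i for i, c in enumerate(chars)}

SEQ_LEN = 20
# Build (window, next-char) one-hot training pairs from the URL corpus.
X = np.zeros((len(text) - SEQ_LEN, SEQ_LEN, len(chars)), dtype=np.float32)
y = np.zeros((len(text) - SEQ_LEN, len(chars)), dtype=np.float32)
for i in range(len(text) - SEQ_LEN):
    for t, c in enumerate(text[i : i + SEQ_LEN]):
        X[i, t, char_to_ix[c]] = 1.0
    y[i, char_to_ix[text[i + SEQ_LEN]]] = 1.0

model = keras.Sequential([
    keras.Input(shape=(SEQ_LEN, len(chars))),
    keras.layers.LSTM(128),
    keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=20, verbose=0)
# A sampling loop (seed with a known prefix, draw one character at a time)
# would then generate candidate URLs, keeping only those that score well
# against the detector being attacked.
```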

The result was astonishing. Guided by what DeepPhish had learned, the Purple Rain Gang's fraud effectiveness at defeating current defenses -- even AI-based defenses -- increased by roughly 3,000%, from 0.69% to 20.9%. Used on the second group, effectiveness improved from 4.9% to 36.28%. The conclusion seemed inevitable: the criminal use of artificial intelligence would dramatically improve attacks, and return the advantage to the attacker.
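
As a quick sanity check on the reported figures, the relative uplifts work out as follows (computed from the percentages quoted above, nothing more):

```python
# Quick check of the reported uplifts: relative improvement in
# attack effectiveness for each threat actor.
for name, before, after in [("Purple Rain", 0.0069, 0.209),
                            ("second actor", 0.049, 0.3628)]:
    uplift = (after - before) / before
    print(f"{name}: {before:.2%} -> {after:.2%} "
          f"({uplift:.0%} increase, {after / before:.1f}x)")
# Purple Rain: 0.69% -> 20.90% (2929% increase, 30.3x)
# second actor: 4.90% -> 36.28% (640% increase, 7.4x)
```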

But it was now the turn of the blue team. This time, the defenders were able to retrain their 'good AI' to take account of DeepPhish's success, and they put the improved anti-phishing technology up against DeepPhish in a final AI vs. AI battle. The good guys -- the blue team -- won: the retrained defenses successfully reduced the effectiveness of DeepPhish's attacks. In the end, said Correa, the anti-phishing team came out on top, even against malicious AI technology.
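
In effect, the blue team's countermove is adversarial retraining: fold the attacker-generated URLs back into the detector's training set as labeled phish and refit. A minimal sketch with scikit-learn, where the character-n-gram features and logistic-regression model are illustrative assumptions rather than the project's actual detector:

```python
# Minimal sketch of the blue team's countermove: adversarial retraining.
# Append DeepPhish-style generated URLs to the training set as labeled
# phish, then refit the detector. Features and model are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

benign = ["https://www.example-bank.com/login",
          "https://mail.example.com/inbox"]
known_phish = ["http://example-bank.com.verify-account.net/signin"]
deepphish_generated = ["http://secure.example-bank-update.com/verify"]  # adversarial samples

urls = benign + known_phish + deepphish_generated
labels = [0] * len(benign) + [1] * (len(known_phish) + len(deepphish_generated))

# Character n-grams catch the lexical tricks (subdomain stuffing,
# lookalike tokens) that URL-based phishing detectors typically key on.
detector = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(3, 5)),
    LogisticRegression(),
)
detector.fit(urls, labels)
print(detector.predict(["http://example-bank-update.com/confirm"]))
```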

The conclusion from this project is that AI is a silver bullet for neither attackers nor defenders -- it is just another new technology. The history of cybersecurity has always been one of leapfrog between attackers and defenders, with whoever holds the latest technology having the edge. For now, new AI defenses are achieving a degree of success against cybercriminals. That will last only until the attackers develop even newer AI-enhanced attacks -- and then, for a time, those attacks will succeed. What Correa's project demonstrates is that this isn't the end, but perhaps just the beginning of a new game of leapfrog, with the latest AI algorithms always having the advantage.

The implication, at least in the short term, is that if bad actors develop new AI-enhanced attacks in other areas, they will achieve increased success until defenders produce their own response to the new attacks. But there are not many areas that lend themselves to machine-learning techniques. "The cases in which attackers are actually using AI are those in which they can put their funds into a lot of data. Phishing is one example, malware is another," commented Correa.

That doesn't mean that other attack vectors are impossible. For example, spear-phishing could be enhanced by using machine learning to analyze social media to select attractive targets. Correa believes that this is technically possible, but very hard to achieve. The difficulty would be in finding a way to label the data in a manner suitable for consumption by a machine learning algorithm.

"It would be a very resource-intensive process," Correa told SecurityWeek, "but it is definitely something that could be done by an attacker with limitless resources. I would not expect this type of attack to be developed by an average bad actor. It would have to be an organization with vast resources -- but in the final analysis, I would expect such an approach to be way more successful than the average spear-phish."

The standard cybercriminal simply does not have limitless resources, so cyber death by malicious AI will not happen. In Correa's own words on DeepPhish, "The anti-phishing AI came out on top in the experimental battle with malicious AI. How does this translate to a situation in which real-world attackers acquire AI technology? Simply put: anti-phishing technology with AI will always come out on top."

Related: Bot vs Bot in Never-Ending Cycle of Improving Artificial Intelligence

Related: The Malicious Use of Artificial Intelligence in Cybersecurity 

Related: IBM Describes AI-powered Malware That Can Hide Inside Benign Applications 

Related: Is Malware Heading Towards a WarGames-style AI vs AI Scenario? 

Original author: Kevin Townsend