Just before midnight last Sunday evening (June 17, 2018), Elon Musk sent an email to all staff. He was dismayed, he said, to learn about a Tesla employee "making direct code changes to the Tesla Manufacturing Operating System under false usernames and exporting large amounts of highly sensitive Tesla data to unknown third parties."
This was a classic malicious insider attack -- but there may be more to it than meets the eye. The motive, according to Musk, was revenge: "he wanted a promotion that he did not receive." But the incident goes beyond simple revenge sabotage: it includes the theft of sensitive data and the export of that data to unknown outside parties.
The incident could have been triggered by revenge and aggravated by bribery; but until and unless those outside parties can be identified for certain, the true cause of the attack will remain speculative.
Musk himself is willing to speculate with insinuation. "As you know," he told employees, "there are a long list of organizations that want Tesla to die. These include Wall Street short-sellers, who have already lost billions of dollars and stand to lose a lot more." He then added oil and gas companies, who "rumor has it... are sometimes not super nice;" and the "big gas/diesel car company competitors [who already cheat on pollution levels, and] maybe they're willing to cheat in other ways?" The only potential risks he excluded were nation-states wishing to give their own nascent industries a technology boost, cyber criminals wishing to ransom Tesla or sell to competitors, and -- dare we say it -- whistleblowing.
Such is the nature of attribution for cybercrime that it may never be known who -- if anyone beyond the malicious insider himself -- is really behind the incident. Sometimes only national intelligence agencies, through their much wider access to signals intelligence, know who did what on the internet -- but those same agencies can equally feel that it is not in the national interest to get involved. If it was a foreign nation dabbling in IP theft, the intelligence agencies might go public. If it was a competitor or major national industry, the agencies might take the view that their role is not law enforcement.
In reality, the destination of the stolen data may already be known.
The attack itself seems to be typical insider work, using false usernames. We don't know whether those false usernames were existing accounts, or new accounts created by the attacker. In either case, however, it seems certain that the attacker enjoyed higher system privileges than were necessary.
"This," comments Joseph Carson, chief security scientist at Thycotic, "is a major reminder why privileged access management (PAM) is a must-have for organizations that deal with sensitive information or personal information -- and why least-privilege is a practice being adopted by many organizations."
It's a problem made more difficult, he suggests, because companies try to protect the privileged accounts they know about, which in most cases isn't effective. "Organizations continue to fail at the most important aspect of restricting privileged access, which is proactively discovering privileged accounts in the environment. It appears that Tesla have failed to do that most important step in least-privilege, which is discovering and detecting unapproved privileged access."
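The discovery step Carson describes can be illustrated with a minimal sketch. The snippet below is an assumption for illustration, not Thycotic's product logic: it scans /etc/group-style data for members of well-known privileged groups. A real PAM discovery pass would also cover service accounts, Windows/Active Directory groups and cloud IAM roles.

```python
# Minimal sketch of privileged-account discovery over /etc/group-style input.
# Illustrative only: real discovery spans many identity stores, not one file.
PRIVILEGED_GROUPS = {"root", "wheel", "sudo", "admin"}

def privileged_accounts(group_file_text: str) -> set:
    """Return account names that belong to well-known privileged groups."""
    accounts = set()
    for line in group_file_text.splitlines():
        if not line or line.startswith("#"):
            continue
        # /etc/group format: group_name:password:GID:member,member,...
        fields = (line.split(":") + ["", "", "", ""])[:4]
        name, members = fields[0], fields[3]
        if name in PRIVILEGED_GROUPS and members:
            accounts.update(m for m in members.split(",") if m)
    return accounts
```

Running such a pass regularly, and diffing the results, is one simple way to surface the "unapproved privileged access" Carson describes.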
Since Musk's original disclosure of the breach by internal email on Sunday, matters have moved forward rapidly. On Wednesday, Tesla filed a complaint against the employee -- named as Martin Tripp -- in the Nevada District Court. This complaint admits that "Tesla has only begun to understand the full scope of Tripp’s illegal activity, but he has thus far admitted to writing software that hacked Tesla’s manufacturing operating system (“MOS”) and to transferring several gigabytes of Tesla data to outside entities."
Within a few months of Tripp joining Tesla, says the complaint, "his managers identified Tripp as having problems with job performance and at times being disruptive and combative with his colleagues. As a result of these and other issues, on or about May 17, 2018, Tripp was assigned to a new role. Tripp expressed anger that he was reassigned. Thereafter, Tripp retaliated against Tesla by stealing confidential and trade secret information and disclosing it to third parties, and by making false statements intended to harm the company."
But according to a report published today by the BBC, Tripp "says he’s a whistleblower being smeared for speaking out about standards and safety at the company, and deserves protection." The implication is that Tripp provided the documents used by Business Insider in its June 4 report: 'Internal documents reveal Tesla is blowing through an insane amount of raw material and cash to make Model 3s, and production is still a nightmare'.
The BBC also publishes extracts from a rapid-fire email exchange between Musk and Tripp that took place on Wednesday. At one point, Musk writes, "You should [be] ashamed of yourself for framing other people. You're a horrible human being." This is likely a reference to Tripp's hacking software being found on three other employees' computers. The legal complaint alleges, "His hacking software was operating on three separate computer systems of other individuals at Tesla so that the data would be exported even after he left the company and so that those individuals would be falsely implicated as guilty parties."
Tripp responded, "I NEVER 'framed' anyone else or even insinuated anyone else as being involved in my production of documents of your MILLIONS OF DOLLARS OF WASTE, Safety concerns, lying to investors/the WORLD. Putting cars on the road with safety issues is being a horrible human being!"
Whistleblowing is one possible reason for the data theft not mentioned by Musk in his June 17 email to staff, even though the Business Insider allegation mentions 'internal documents' and was published two weeks earlier. The full truth of what happened in this incident is likely to be exposed in court rather than via computer forensics.
However, in information security terms, an insider stole sensitive documents from Tesla. The motive is not as important as the act. It seems that Tesla does not operate adequate least-privilege measures, and does not have an internal traffic monitoring system capable of detecting and blocking the unsanctioned exfiltration of gigabytes of data. This failure has left Tesla with a PR nightmare that it must now manage.
Related: Tesla Model X Hacked by Chinese Experts
Related: Pink-haired Whistleblower at Heart of Facebook Scandal
Related: Booz Allen Hamilton Confirms Termination of NSA Whistleblower
A security vulnerability patched by Microsoft earlier this month in its Edge browser could be exploited via malicious or compromised websites to read restricted data.
Tracked as CVE-2018-8235, the flaw occurs in how “Microsoft Edge improperly handles requests of different origins,” Microsoft explains in an advisory. The issue results in Edge bypassing Same-Origin Policy (SOP) restrictions and allows for requests that should otherwise be ignored.
As a result, an attacker could exploit the vulnerability to force the user’s browser to send data otherwise restricted. Attacks could be performed via maliciously crafted websites, compromised domains, or through websites that accept or host user-provided content or advertisements.
The vulnerability was discovered by Google developer Jake Archibald, who named it Wavethrough because the bug occurs when a site uses service workers for the loading of multimedia content via the <audio> web API, which makes use of "range" requests.
The Range headers can be used by “media elements if the user seeks the media, so it can go straight to that point without downloading everything before it,” Archibald explains.
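Range requests themselves are easy to demonstrate outside the browser. The following self-contained Python sketch is illustrative only, with a deliberately simplified parser that handles just the "bytes=start-end" form: a tiny server serves a 100-byte stand-in for a media file, and a client fetches a 10-byte slice of it, which is exactly the mechanism a seeking media element relies on.

```python
import http.server
import threading
import urllib.request

class RangeHandler(http.server.BaseHTTPRequestHandler):
    DATA = b"0123456789" * 10  # 100 bytes standing in for a media file

    def do_GET(self):
        rng = self.headers.get("Range")
        if rng and rng.startswith("bytes="):
            # Simplified parser: handles only the "bytes=start-end" form.
            start, _, end = rng[len("bytes="):].partition("-")
            start = int(start)
            end = int(end) if end else len(self.DATA) - 1
            body = self.DATA[start:end + 1]
            self.send_response(206)  # Partial Content
            self.send_header("Content-Range",
                             f"bytes {start}-{end}/{len(self.DATA)}")
        else:
            body = self.DATA
            self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def serve_demo():
    """Start the demo server on an ephemeral port; returns (server, port)."""
    server = http.server.HTTPServer(("127.0.0.1", 0), RangeHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]

def fetch_range(url, start, end):
    """Request a byte range, as a seeking media element would."""
    req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        return resp.status, resp.read()
```

Requesting bytes 10-19 returns status 206 with just that slice, so a player can jump straight to a seek point without downloading everything before it.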
What the security researcher discovered was that, via a service worker, the Range header was missing, because media elements make “no-cors” requests.
“If you fetch() something from another origin, that origin has to give you permission to view the response. By default the request is made without cookies, and if you want cookies to be involved, the origin has to give extra permission for that,” he notes.
When using special headers, the browser might also check with the origin before making the request, but some APIs ignore the checks, which could result in sensitive data being leaked. No-cors requests are sent with cookies and receive opaque responses, and some APIs may access the data in these responses.
Thus, when a media element makes a no-cors request with a Range header, fetch() removes the header, because it isn’t allowed in no-cors requests. However, because Range requests were never standardized in HTML, and because service workers are involved, a website could respond to them arbitrarily.
“You can respond to a request however you want, even if it's a no-cors request to another origin. For example, you can have an <img> on your page that points to facebook.com, but your service worker could return data from twitter.com,” the researcher explains.
After setting up a website that would do just that, Archibald discovered that the beta and nightly versions of Firefox allowed the redirect and eventually exposed the duration of the requested audio. The bug was patched before it made it to the stable Firefox release.
Edge too was found vulnerable, but it also allowed the resulting audio to pass through the web audio API, thus allowing for the monitoring of the samples being played. Because the request is made with cookies, the attack revealed content otherwise accessible only if the user is logged in.
“It means you could visit my site in Edge, and I could read your emails, I could read your Facebook feed, all without you knowing,” the researcher points out.
In addition to getting the bug addressed in Firefox and Edge, Archibald has been working on changing the standards regarding Range requests, so as to eliminate similar security issues. Furthermore, his discovery resulted in cross-origin read blocking (CORB) being added to fetch().
Related: Microsoft Patches 11 Critical RCE Flaws in Windows, Browsers
Related: Microsoft Patches Code Execution Vulnerability in wimgapi Library
At a recent industry conference I heard some commentary about the “disappearance” of ransomware, but I’m here to assure you that that isn’t the case. It’s true that some criminal gangs have switched to distributing cryptocurrency miners instead of ransomware (for now, I emphasize), as such mining is currently more difficult for many security systems to detect, and it’s proving extremely profitable to the criminals, which is all that matters.
A May survey showed phishing has surpassed ransomware as a concern for IT security managers, and for understandable reasons—the number of phishing emails reaching users keeps rising, and phishing is the top source of breaches at companies.
But don’t think ransomware is going to go quietly or go away at all. The narrative of the decline of ransomware is being driven in part by the decrease in mass, botnet-driven mailings sent in the tens of millions, which are spectacular and generate headlines. That narrative focuses on the sheer volume of ransomware distributions. It needs to be balanced against the fact that, over the past year, the overall number of ransomware variants in circulation has increased and more varied distribution methods have come into use, with each campaign typically targeting smaller audiences in the tens of thousands.
Ransomware Not Dead, Just Less Open
Also consider that other factors are making ransomware activity less obvious and harder to track. There is a shift underway from Bitcoin to cryptocurrencies that are even more private: Monero and Dash wallets aren’t open, so we can no longer simply look at publicly visible Bitcoin wallets and see ransomware at work. That switch is also being pushed by the shutdown of Bitcoin-to-dollar currency exchange platforms, such as a Russian bitcoin platform that was recently seized. The move to better hide the payment trail extends to a trend in ransom notes, which less frequently contain explicit payment instructions and instead use email or "bitmessaging" to communicate the payment details. In any event, the criminals have realized that being brazenly open in this way is best avoided.
Fileless Ransomware Living Off the Land
What’s referred to as “fileless” malware is being seen a lot, and my colleagues and I have been discussing why it hasn’t crossed over more heavily into ransomware yet. When it does, it will get through a lot of industry defenses. To recap, “fileless” refers to methods and capabilities that, in order to evade detection, curtail the number of malicious artefacts written to the file system. This is made possible by applying the principle of “living off the land,” where pre-installed system tools like PowerShell and PsExec, with their powerful capabilities and privileges, are leveraged to execute the malicious payload, instead of the ransomware code bringing along a lot of heavy baggage. At this point, all the major ransomware families (Cerber, GandCrab, Locky, et al.) have had “living off the land” capabilities added to them, mostly by embedding PowerShell scripts that download and execute the payload. PowerShell and other system tools can also be used for the ransomware encryption itself, as first seen in Poshcoder in 2014 (which, because of poor programming, destroyed files in the encryption process), and later in its “improved” version, PowerWare.
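On the defensive side, one common starting point is to watch the command lines of those pre-installed tools. The heuristic below is a deliberately simple sketch: the patterns are invented for illustration, and real detections rely on far richer endpoint telemetry than string matching.

```python
import re

# Hypothetical "living off the land" indicators: encoded or hidden
# PowerShell invocations and in-memory download cradles.
SUSPICIOUS_PATTERNS = [
    re.compile(r"powershell(\.exe)?\s.*-enc(odedcommand)?", re.I),
    re.compile(r"powershell(\.exe)?\s.*downloadstring", re.I),
    re.compile(r"powershell(\.exe)?\s.*-nop\b.*-w\s+hidden", re.I),
]

def flag_command_line(cmdline: str) -> bool:
    """Return True if a process command line matches a LotL heuristic."""
    return any(p.search(cmdline) for p in SUSPICIOUS_PATTERNS)
```

A benign invocation of notepad.exe passes, while an encoded, hidden-window PowerShell launch or a DownloadString cradle trips the heuristic.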
Combined, living-off-the-land and fileless injection methods create the potential to launch an attack where no malicious executable is ever saved to disk. Fully “fileless” attacks are by nature volatile and will not survive a reboot or removal. For ransomware, though, that hardly matters: it has no need for persistence, because running once is enough for the criminals to reach their goal.
Most Next Big Things Have Already Happened
As you see in the Poshcoder example from 2014 (which, by the way, still has a very low detection rate in the industry), fileless ransomware isn’t a brand new idea. And last year we saw UIWIX, which was essentially WannaCry adapted to run in memory, although it didn’t spread much. Perhaps as a better indicator of what’s coming, at the end of April we found a sample of SynAck, which has been around since 2015, newly adapted to run in a fileless manner using a technique called Process Doppelgänging.
History shows that, in security, the next big thing isn’t always an entirely new thing. We have precedents—macro malware existed for decades before it really became a “thing.”
So we are of the school that it is only a matter of time. A reasonable theory is that we haven’t seen it en masse, or at least in a significant number of smaller distributions, because other opportunities have presented themselves (like cryptomining), and what’s worked up until recently to evade detection for ransomware is still working in enough scenarios. “Fileless” is leaner and smarter, so it’s a move we should expect and for which we should be prepared.
Use of Hidden Tunnels to Exfiltrate Data Far More Widespread in Financial Services Than Any Other Industry Sector
Financial services firms have perhaps the largest cybersecurity budgets and are among the best protected companies in the private sector. Since cybercriminals generally want a quick return on their effort, it would be unsurprising to find that financial services are less overtly targeted by average hackers than other, easier targets. At the same time, the data held by finserv is so valuable that the sector remains an attractive target for more sophisticated hackers.
Both premises are confirmed in a report (PDF) published this week by Vectra. From August 2017 through January 2018, Vectra's AI-based Cognito cyberattack-detection and threat-hunting platform monitored network traffic and collected metadata from more than 4.5 million devices and workloads from customer cloud, data center and enterprise environments.
An analysis of this data showed that financial services displayed fewer criminal C&C communication behaviors than the overall industry average. This could be caused by the efficiency of large finserv budgets (Bank of America spends $600 million annually, with no upper limit, while JPMorgan Chase spends $500 million annually) warding off basic criminal activity.
Even the much smaller Equifax has a budget of $85 million. But Equifax, with its massive 2017 loss of 145.5 million social security numbers, around 17.6 million drivers' license numbers, 20.3 million phone numbers, and 1.8 million email addresses, demonstrates that finserv is a target for, and can be successfully breached by, the more advanced hackers.
Vectra analyzed the Equifax breach, compared the attack methodology to what its Cognito platform was finding in other financial services companies, and discovered the same methodology at work: the use of hidden tunnels to hide the C&C servers and disguise the exfiltration of data.
Vectra's new analysis shows that the criminal use of hidden tunnels is far more widespread in financial services than in any other industry sector. Across all industries Vectra found 11 hidden exfiltration tunnels disguised as encrypted web traffic (HTTPS) for every 10,000 devices. In finserv, this number jumped to 23. Hidden HTTP tunnels jumped from seven per 10,000 devices to 16 in financial services.
Chris Morales, head of security analytics at Vectra, commented, "What stands out the most is the presence of hidden tunnels, which attackers use to evade strong access controls, firewalls and intrusion detection systems. The same hidden tunnels enable attackers to sneak out of networks, undetected, with stolen data."
"Hidden tunnels are difficult to detect," explains the report, "because communications are concealed within multiple connections that use normal, commonly-allowed protocols. For example, communications can be embedded as text in HTTP-GET requests, as well as in headers, cookies and other fields. The requests and responses are hidden among messages within the allowed protocol."
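Because the tunneled data must still look like normal protocol fields, one detection angle is statistical rather than signature-based: flagging unusually long, high-entropy cookie or header values. The sketch below illustrates the idea; the length and entropy thresholds are invented for illustration and would need tuning against baseline traffic.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per character; compressed or encrypted payloads trend high."""
    if not s:
        return 0.0
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in Counter(s).values())

def suspicious_cookie(value: str, min_len: int = 100,
                      min_entropy: float = 4.5) -> bool:
    # Hypothetical thresholds: tunneled payloads tend to be both long
    # and near-random, unlike most legitimate cookie values.
    return len(value) >= min_len and shannon_entropy(value) >= min_entropy
```

A short session identifier passes, while a long blob of encoded data exceeds both thresholds.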
These hidden tunnels need to be protected at all times, says Will LaSala, director of security solutions and security evangelist at OneSpan. "Many app developers put holes through firewalls to make services easier to access from their apps, but these same holes can be exploited by hackers. Using the proper development tools, app developers can properly encrypt and shape the data being passed through these holes."
One of the problems is that developers are rushed to implement a new feature to maintain or gain customers, "and this," he adds, "often leads to situations where a hidden tunnel is created and not secured."
Once a hidden tunnel is established by an attacker, it is almost impossible to detect with traditional security. There is no signature to detect, and specially created C&C servers are unlikely to show up on reputation lists. Furthermore, because traffic using a hidden tunnel is ostensibly legitimate, there is no clear anomaly for anomaly detection systems to find.
What Vectra's analysis shows is that while there may be fewer overt attacks against financial services, the industry is a prime target for advanced hackers willing and able to invest in more covert attacks.
San Francisco, Calif-based Vectra Networks closed a $36 million Series D funding round in February 2018, bringing the total amount raised to date by the company to $123 million.
Related: The Intruder's Kill Chain - Detecting a Subtle Presence
A series of cyber-attacks targeting the Middle Eastern region use an encrypted downloader to deliver a Metasploit backdoor, AlienVault reports.
The attacks start with a malicious document containing parts of an article about the next Shanghai Cooperation Organization Summit, originally published at the end of May on a Middle Eastern news network.
The Office document contains malicious macro code designed to execute a Visual Basic script (stored as a hexadecimal stream) and launch a new task in a hidden PowerShell console. This attack stage serves a .NET downloader that uses a custom encryption method to obfuscate process memory and evade antivirus detection.
Dubbed GZipDe, the downloader appears to be based on a publicly available reverse-TCP payload, to which the malware author added a new layer of encryption.
“It consists of a Base64 string, named GZipDe, which is zip-compressed and custom-encrypted with a symmetric key algorithm, likely to avoid antivirus detection,” AlienVault reveals.
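The layering AlienVault describes can be sketched generically. The helper below is an assumption for illustration: GZipDe's actual cipher has not been published, so a simple repeating-key XOR stands in for the "custom" symmetric step, with Base64 and gzip around it as described.

```python
import base64
import gzip

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR: a hypothetical stand-in for GZipDe's unpublished
    # symmetric cipher. XOR is its own inverse, so it encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def unpack_blob(blob_b64: str, key: bytes) -> bytes:
    """Reverse the layered encoding: Base64-decode, decrypt, then gunzip."""
    return gzip.decompress(xor_crypt(base64.b64decode(blob_b64), key))

def pack_blob(payload: bytes, key: bytes) -> str:
    """Build a blob the same way, for testing the unpacker."""
    return base64.b64encode(xor_crypt(gzip.compress(payload), key)).decode()
```

Analysts reversing samples like this typically recover the key and cipher from the downloader's code, then apply exactly this kind of unpacking to extract the next stage.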
A new memory page with execute, read and write privileges is created, and the decrypted payload is then executed. Courtesy of a special handler that controls the process's access to system resources, only one instance of the malware can run at a time.
Shellcode in the downloader connects to a server at 175.194.42[.]8 to deliver the final payload. The server wasn’t up during analysis, but it was previously recorded serving a Metasploit payload, the security researchers note.
Metasploit has become a popular choice among threat actors, and was previously seen being used in targeted attacks associated with the Turla hackers.
The Metasploit payload delivered from 175.194.42[.]8, AlienVault says, contains a shellcode to bypass system detection, as well as a Meterpreter payload. This malicious program is a powerful backdoor capable of gathering information from the system. The malware also stays in contact with the command and control server to receive further commands.
The shellcode, the researchers explain, loads the entire DLL into memory, meaning that it works without writing information to the disk.
Called reflective DLL injection, this technique allows the attacker to “transmit any other payload in order to acquire elevated privileges and move within the local network,” AlienVault concludes.
Related: Kardon Loader Allows Anyone to Build a Distribution Network
Automating Threat Intelligence Prioritization Allows You to Proactively Deploy Appropriate Intelligence to the Right Tools
As a security analyst, you’re probably stuck in the security operations doldrums. You spend 80 percent of your time doing repetitive, administrative tasks and only 20 percent (if you’re lucky!) on investigative, challenging and rewarding work that stops the bad guys and keeps your organization more secure. Security leaders suffer the effects of the security operations doldrums as well. Here’s an all too familiar scenario.
Every day, security teams are bombarded with massive amounts of log and event data from each point product within their layers of defense and/or their SIEM -- not to mention the millions of threat-focused data points from commercial sources, open source, industry and existing security vendors that can be used to contextualize and prioritize these alerts. The noise level is deafening. You can’t -- nor should you -- investigate everything. So you go with your gut and pursue what seems high priority. You start the tedious and time-consuming process of manually correlating logs and events to see what is relevant and merits further investigation. Inevitably you uncover conflicting information, and confusion mounts as data and activity are referred to differently by different systems and teams across your security operations. This happens all day, every day.
Security leaders must deal with the fallout. From a security perspective, your teams are bogged down in mundane tasks and reacting to alerts. They don’t have the time to combat real threats and conduct investigations quickly to mitigate risk, or to proactively strengthen defenses. From a human resources perspective, you also feel the pain. The existing cybersecurity talent shortage makes it difficult to hire more people to share the burden and equally difficult to retain the talent you have. As employees become less engaged they are less productive and more likely to leave. Turnover is expensive – costing companies up to 200 percent of an employee’s annual salary. So how do you overcome the security operations doldrums? Flip the equation so security teams spend 80 percent of their time on investigative, challenging and rewarding work and only 20 percent on repetitive, administrative tasks. To do this you need automation.
There’s a lot of talk in the security industry about automation. It helps you get more from the people you have -- handling time-intensive manual tasks so they can focus on high-value, analytical activities. But if you automate too late in the security lifecycle your efforts could backfire. You need to introduce automation early, beginning with contextualizing alerts and events for prioritization. And how do you add context? Through threat intelligence, which has become the foundation of the activities your security operations center handles.
By starting with prioritization of threat intelligence to ensure relevance to your company and your environment, you can then understand which alerts are higher in priority than others. Because you need to use multiple sources of threat intelligence, you need a single platform to manage and automate the process. Applying automation to score and prioritize the massive amounts of threat data teams are bombarded with continuously not only eliminates a lot of the manual tasks to determine relevance, it also helps cut down on the noise. Intelligence feed vendors may provide “global” scores but, in fact, these can contribute to the noise since the score is not within the context of your company’s specific environment. Worse yet, when uploaded to your SIEM or sensor grid they can generate more noise in the form of false positives and security operators end up chasing ghosts. By applying automation early, security analysts have the context they need to understand real threats faster and can investigate only the high priority alerts.
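As a concrete (and deliberately simplified) illustration of that local re-scoring, the sketch below weighs a feed's "global" score against two environment-specific signals. The weights and threshold are invented for illustration, not a vendor formula.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    value: str
    feed_score: float          # 0-1 "global" score supplied by the vendor
    seen_in_environment: bool  # matched against our own telemetry
    targets_our_sector: bool   # relevant to our industry

def local_priority(ind: Indicator) -> float:
    """Re-score an indicator in the context of our environment
    (illustrative weights)."""
    score = 0.3 * ind.feed_score
    if ind.seen_in_environment:
        score += 0.5
    if ind.targets_our_sector:
        score += 0.2
    return score

def triage(indicators, threshold: float = 0.5):
    """Keep only locally relevant indicators, highest priority first."""
    return sorted(
        (i for i in indicators if local_priority(i) >= threshold),
        key=local_priority,
        reverse=True,
    )
```

Under these weights, a "hot" feed indicator never seen in your environment scores 0.27 and is dropped, while a modestly scored indicator that your own telemetry has matched scores 0.76 and rises to the top.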
Automating threat intelligence prioritization also allows you to proactively deploy the right intelligence to the right tools with greater speed and confidence. You can immediately and automatically update your sensor grid (i.e., firewalls, IPS/IDS, routers, web and email security, endpoint, etc.) and alleviate much of the manual and fragmented effort typically required. Security personnel can stay focused on their priorities instead of having to stop what they are doing to log into each tool to upload, test and deploy the latest intelligence. Depending on the amount of data and your security infrastructure, automation can optimize processes that would otherwise require a small army of full-time security analysts to do manually.
When security teams and automation come together early in the security lifecycle, you’re positioned for success. You spend less time in alert triage and more time using threat intelligence to accelerate security operations. That 20 percent becomes 50, 60 or even 80 percent, allowing you to do more with the talent you have – and do it better and faster. With security effectiveness measured by mean time to detection (MTTD) and mean time to response (MTTR), shedding the security operations doldrums mitigates risk to your organization and your team.
Network attacks exploiting a recently patched Drupal vulnerability are attempting to drop Monero mining malware onto vulnerable systems, Trend Micro reports.
Tracked as CVE-2018-7602 and considered a highly critical issue that could result in remote code execution, the vulnerability impacts Drupal versions 7 and 8 and was addressed in April this year.
The flaw is dubbed Drupalgeddon3 and the patch for it only works if the fix for the original Drupalgeddon2 vulnerability (CVE-2018-7600) has been applied.
Last month, hackers were observed targeting both security vulnerabilities to deliver a variety of threats, including cryptocurrency miners, remote administration tools (RATs) and tech support scams.
Trend Micro now says it has observed network attacks exploiting CVE-2018-7602 to turn affected systems into Monero-mining bots. As part of the observed incidents, the exploit fetches a shell script that retrieves an Executable and Linkable Format (ELF) downloader.
The malware adds a crontab entry to automatically update itself and also retrieves and installs a Monero-mining application, a modified variant of the open-source XMRig (version 2.6.3). The use of XMRig is a feature common to most attacks attempting to mine for Monero.
The downloader also checks the target machine to determine whether it is worth compromising.
When executed, the mining application changes its process name to [^$I$^] and accesses the file /tmp/dvir.pid, Trend Micro says.
“This is a red flag that administrators or information security professionals can take into account to discern malicious activities, such as when deploying host-based intrusion detection and prevention systems or performing forensics,” the security firm notes.
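Those two indicators are straightforward to check for. Below is a minimal sketch; the process list is passed in explicitly so the check stays portable in a test, whereas a real host scan would enumerate running processes from the OS.

```python
import os

# Indicators reported for this campaign.
IOC_PROCESS_NAME = "[^$I$^]"    # process name used by the miner
IOC_PID_FILE = "/tmp/dvir.pid"  # file accessed by the miner

def check_host(process_names, pid_file: str = IOC_PID_FILE) -> list:
    """Return which of the published indicators are present on this host."""
    hits = []
    if IOC_PROCESS_NAME in process_names:
        hits.append("miner process name")
    if os.path.exists(pid_file):
        hits.append("miner pid file")
    return hits
```

A host-based IDS rule or forensics script would alert on either hit.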
The actors behind this attack hide behind the Tor network, but Trend Micro says they were able to trace the activity to 197[.]231[.]221[.]211, an IP belonging to a virtual private network (VPN) provider. This IP address is a Tor exit node.
Over the past month, the security firm has blocked 810 attacks coming from this IP address, but cannot confirm that they were all related to the Monero-mining payload or performed by the same actor.
Most of the attacks attempt to exploit the Heartbleed vulnerability (CVE-2014-0160), while others target Shellshock (CVE-2014-6271), a flaw in the GoAhead web server (CVE-2017-5674), and an old memory leak in Apache (CVE-2004-0113).
“Trend Micro also blocked File Transfer Protocol (FTP) and Secure Shell (SSH) brute-force logins from this IP address. Note that these attacks exploit even old Linux or Unix-based vulnerabilities, underscoring the importance of defense in depth,” the security researchers warn.
Patched Drupal installations should be safe from the recent attacks, and site admins are advised to apply the available patches as soon as possible to ensure their systems remain secure.