To Share or Not to Share: The Security Researcher's Dilemma

The End User Community is at the Mercy of Security Researchers to Act Responsibly

This year’s WannaCry ransomware attack and the massive Equifax data breach have highlighted the need for threat information sharing, security research collaboration, and open source security tools development. With any of these endeavors, there is an inherent risk that information disseminated to security professionals for defensive purposes could be used by bad actors for nefarious ends.

According to analyst firm Gartner (State of the Threat Landscape, 2017, by Greg Young), through 2020, 99 percent of the vulnerabilities exploited will be ones already known to security and IT professionals. This was the case in the Equifax data breach, in which the attackers exploited a known vulnerability in Apache Struts, a widely used enterprise web application framework. The Apache Software Foundation had made a patch available in March, giving the credit-reporting giant more than two months to take preventive measures that would have minimized the risk of exposure.

So what steps can security researchers take to mitigate the risk of their findings and code being misused?

Security Vulnerability Disclosure

As part of their work, security researchers often compromise (i.e., hack) the defense mechanisms of a computer, network, or software system to prove that a given attack is possible and to find a way to remediate the vulnerability before bad actors can exploit it. By publishing their research findings and associated exploits, researchers run the risk of their code being used by cyber criminals to create new exploits or finding its way into malicious tools.

To minimize the risk of being held responsible for someone misusing their research findings, the security research community has been applying two primary disclosure models:  

Under the full disclosure approach, security researchers publish the details of newly identified vulnerabilities as early as possible and make the information available to everyone without restriction, typically by releasing it publicly (including any proof-of-concept exploits) through online forums or websites. Proponents of full disclosure argue that potential victims of a previously unknown vulnerability should have the same information as the attackers who can exploit it.

An alternative approach, supported by most of the ISV community, CERT, and the SANS Institute, is responsible disclosure. This approach takes a more collaborative stance: the security researcher submits a vulnerability advisory report to the vendor that documents the location of the vulnerability with supporting evidence, such as screenshots or code excerpts, and may include a repeatable proof-of-concept attack to assist in testing a resolution. After submitting the report to the vendor via the most secure means available (e.g., an encrypted email), the researcher typically allows the vendor a reasonable amount of time to investigate and fix the vulnerability. Once a patch is available or the disclosure timeline has elapsed, the researcher publishes an analysis of the findings.

In this context, CERT recommends not disclosing the exploit itself, since its position is that the number of people who can benefit from access to an exploit is small compared to the number of people who can be harmed if it is weaponized. The objective of the responsible disclosure approach is to balance the public’s need to be informed of security vulnerabilities with vendors’ need for time to respond effectively. While no consensus has been reached on what constitutes a “reasonable amount of disclosure time,” the majority of security researchers follow CERT’s recommended waiting period of 45 days.
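The responsible disclosure timeline described above can be sketched as a simple calculation. This is an illustrative example only, not an implementation of any official CERT tooling; the function names are hypothetical, and the 45-day window is the recommended waiting period mentioned here, which individual programs may vary.

```python
from datetime import date, timedelta

# CERT's recommended waiting period, per the discussion above.
DISCLOSURE_WINDOW_DAYS = 45

def disclosure_deadline(report_date: date,
                        window_days: int = DISCLOSURE_WINDOW_DAYS) -> date:
    """Earliest date a researcher following the waiting period would
    publish findings, absent an earlier vendor patch."""
    return report_date + timedelta(days=window_days)

def may_publish(report_date: date, today: date, patch_released: bool) -> bool:
    """Publish once a patch ships or the disclosure window has elapsed."""
    return patch_released or today >= disclosure_deadline(report_date)
```

For example, a report submitted on March 1, 2017 would reach its deadline on April 15, 2017; before that date, publication would wait unless the vendor released a patch first.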

Given the number of significant vulnerabilities being discovered in software on a daily and weekly basis, it’s clear that the end user community is at the mercy of security researchers to act responsibly in order to limit the potential for their findings to be used for malicious purposes. The increasing use of bug-bounty programs by the vendor community and greater frequency of penetration testing by end user organizations can further assist in discovering certain vulnerabilities before they become public. 

However, identifying vulnerabilities is only half the battle. Fixing them is a bigger challenge. That’s why Gartner states that “the single most impactful enterprise activity to improve security will be patching.” 

Torsten George is strategic advisory board member at vulnerability risk management software vendor, NopSec. Torsten has more than 20 years of global information security experience. He is a frequent speaker on cyber security and risk management strategies worldwide and regularly provides commentary and byline articles for media outlets, covering topics such as data breaches, incident response best practices, and cyber security strategies. Torsten has held executive level positions with RiskSense, RiskVision (formerly Agiliance), ActivIdentity (acquired by HID® Global, an ASSA ABLOY™ Group brand), Digital Link, and Everdream Corporation (acquired by Dell). He holds a Doctorate in Economics and a Diplom-Kaufmann degree.