The latest edition of the ISMG Security Report leads with the backstory on why U.S. authorities nabbed the "accidental hero" who stopped the WannaCry malware outbreak.
In the Security Report (click on player to listen), you'll hear:
The ISMG Security Report appears on this and other ISMG websites on Tuesdays and Fridays. Check out our Aug. 1 and Aug. 4 reports that respectively analyzed the human element behind malware and the shifting battle on Russian hacking moving to American courts.
The next ISMG Security Report will be posted on Friday, Aug. 11.
Theme music for the ISMG Security Report is by Ithaca Audio under a Creative Commons license.
Six years ago, the U.S. National Institute of Standards and Technology (NIST) put forth a framework for information security continuous monitoring (ISCM), defined as maintaining ongoing awareness of information security, vulnerabilities and threats to support organizational risk management decisions. The framework describes tools and technologies to support continuous monitoring, one objective being to maintain awareness of threats and vulnerabilities.
The recommended technologies are mainly focused on monitoring activity inside the organization and looking for known threats for which a signature exists, both of which are critical. But to get a comprehensive assessment of risk, you also need to consider what's happening outside the organization. Continuous threat assessment, which I'll discuss more below, addresses this gap, allowing you to understand the threats to your organization as they emerge and evolve, and how they affect your risk level. It's kind of like how you approach driving a car. Based on gauge readings you alter your speed, fuel up, or take care of maintenance issues. But you're also always attuned to changing weather, road and traffic conditions, factors which also cause you to adjust your driving practices. Continuously considering both internal and external factors mitigates risk and increases your chances of arriving at your destination safely.
Broadening the scope of risk assessment, the opening keynote at the Gartner Security & Risk Management Summit 2017 focused on CARTA, continuous adaptive risk and trust assessment, to manage the increasing risk associated with the digital world. CARTA complements the NIST framework with a process that spans the business – from how companies develop technology products to external partners along the supply chain. The CARTA process involves continuously assessing your ecosystem risk, which extends beyond the walls of the enterprise, and adapting as necessary.
Mitigating risk in the digital world is a challenge vexing more and more security teams. New research by ESG found that 26 percent of cybersecurity professionals say that security analytics and operations are more difficult than they were two years ago because the threat landscape is evolving so rapidly that it is difficult to keep up. A recent report from Cisco corroborates this sentiment, stating that security experts are becoming increasingly concerned about the accelerating pace of change and sophistication in the global cyber threat landscape. Citing two dynamics, the escalating impact of breaches that are designed for destruction of service, and the pace and scale of technology, the report goes on to say: "it is important for defenders to understand changes in adversaries' tactics so that they can, in turn, adapt their security practices and educate users."
So how do you go about understanding adversaries' tactics so that you can adapt? And how do you do it on an ongoing basis? As you know, adversaries are dynamic and, as such, you need to continuously monitor and assess the threat and the tactics. CARTA specifies the use of analytics and automation to detect and respond to malicious activity other systems miss, and to help overburdened teams better protect their organizations by focusing limited resources on the most relevant threats. To do that you need to start with threat data. This doesn't necessarily mean you need more threat feeds. Most organizations typically have more threat intelligence than they know what to do with, drawn from commercial sources, open source, industry sharing and existing security vendors. Not to mention the massive amount of log and event data from each point product within their layers of defense.
What you need is a way to aggregate global threat data and translate it into a uniform format for analysis and action. You then need to augment and enrich it with additional internal and external threat and event data. By correlating events and associated indicators from inside your environment with external data on indicators, adversaries and their methods, you gain additional and critical context to understand adversaries’ tactics – activity that flies under the radar of rules-based prevention tools.
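As a rough illustration of aggregating threat data into a uniform format, consider a sketch along these lines. The feed names, field mappings and schema are hypothetical, chosen only to show the normalization step:

```python
# Hypothetical sketch: mapping indicators from differently shaped threat
# feeds into one uniform record so they can be correlated and enriched.
# "feed_a" / "feed_b" and their field names are illustrative assumptions.

def normalize(raw, source):
    """Translate a feed-specific record into a uniform indicator schema."""
    if source == "feed_a":    # e.g. {"ioc": "1.2.3.4", "kind": "ip"}
        return {"value": raw["ioc"], "type": raw["kind"], "source": source}
    if source == "feed_b":    # e.g. {"indicator": "...", "type": "domain"}
        return {"value": raw["indicator"], "type": raw["type"], "source": source}
    raise ValueError(f"unknown source: {source}")

feeds = [
    ({"ioc": "203.0.113.7", "kind": "ip"}, "feed_a"),
    ({"indicator": "evil.example.com", "type": "domain"}, "feed_b"),
]

# The uniform repository against which internal events can be correlated.
repository = [normalize(raw, src) for raw, src in feeds]
```

Once every indicator shares one schema, correlating an internal event against external context becomes a simple lookup rather than a per-feed special case.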
Aggregating and enriching the data provides vital insights, but it does not filter unwanted noise out of the large amounts of resulting data. This is where prioritization is critical. Prioritization needs to be defined by you, since each company has different criteria and a unique risk profile, so that you can determine where to focus and how to manage risk most effectively. With the ability to change risk scores based on parameters you set (for example, around indicator source, type, attributes and context, as well as adversary attributes), you can prioritize threat intelligence and filter out noise.
Prioritization also needs to be done on an ongoing basis, and automatically, as threat assessment is a continuous process. As the threat landscape dynamically changes along with your internal environment, you keep adding more data and context to your repository as well as learnings about adversaries and their tactics, techniques and procedures (TTPs). Automatically recalculating and reevaluating priorities and threat assessments ensures you continue to stay focused on what is relevant to mitigate your organization’s risk.
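The parameter-driven scoring and automatic re-prioritization described above can be sketched roughly as follows. The weights, field names and noise threshold are illustrative assumptions, not any particular vendor's algorithm:

```python
# Illustrative sketch of user-defined indicator scoring. The weights and
# threshold below stand in for parameters each organization would tune
# to its own risk profile; they are assumptions, not a real product's logic.

SOURCE_WEIGHT = {"commercial": 3, "open_source": 1, "internal_siem": 4}
TYPE_WEIGHT = {"ip": 1, "domain": 2, "file_hash": 3}
NOISE_THRESHOLD = 4  # indicators scoring below this are treated as noise

def score(indicator):
    """Score one indicator from its source, type and internal context."""
    s = SOURCE_WEIGHT.get(indicator["source"], 0)
    s += TYPE_WEIGHT.get(indicator["type"], 0)
    if indicator.get("seen_internally"):  # context from your own telemetry
        s += 5
    return s

def prioritize(indicators):
    """Re-score everything; call again whenever data or weights change."""
    return sorted((i for i in indicators if score(i) >= NOISE_THRESHOLD),
                  key=score, reverse=True)

indicators = [
    {"value": "203.0.113.7", "type": "ip", "source": "open_source"},
    {"value": "evil.example.com", "type": "domain",
     "source": "commercial", "seen_internally": True},
]
ranked = prioritize(indicators)  # low-scoring indicators are filtered out
```

Because the whole list is re-scored on each call, changing a weight or adding new context automatically re-orders priorities, which is the continuous-reassessment behavior the process requires.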
Whatever risk management framework or process you use, whether ISCM, CARTA, or something else, gaining a complete picture of risk hinges on your ability to keep up with the real threats to your organization. Given today's dynamic threat landscape, continuous threat assessment is the linchpin of a comprehensive understanding of security risk. It complements internally focused continuous monitoring and is vital for assessing risks in the digital world. With a repository that is continuously updated as threats change over time, it helps ensure you stay focused on what is truly happening in your environment, putting you in a position to be more proactive with risk mitigation and even to anticipate potential risks.
NIST Proposes Ways for Organizations to Improve How They Identify, Recruit, Develop, and Retain Cybersecurity Talent
The National Institute of Standards and Technology (NIST) has published a cybersecurity workforce framework (PDF) to support organizations' ability to develop and maintain an effective cybersecurity workforce. The framework defines roles; necessary knowledge, skills and abilities (KSAs) for those roles; and a common lexicon to clarify communication between cybersecurity educators, trainers/certifiers, employers, and employees. It is intended to help employers develop their existing workforce, and academic institutions prepare the future workforce in a consistent manner.
Like all frameworks, it will benefit some organizations who use it, and be ignored by others. One security leader who can see potential benefits is Martin Zinaich, information security officer with the City of Tampa. In 2015, he compared the current state of cybersecurity to the slow descent and ultimate crash of Eastern Air Lines Flight 401 in 1972 -- the crew simply had insufficient awareness of what was serious and what was not so serious.
In his paper, he wrote, "National Research Council in its report, 'Professionalizing the Nation's Cybersecurity Workforce Criteria for Decision-making' (2013) stated that cybersecurity is still too new a field in which to introduce professionalization standards for its practitioners."
"Yet here we are a mere 4 years later," he told SecurityWeek, "and NIST is actually proposing educational workforce standards. We're slowly getting there," he added.
The NIST framework defines seven primary security workforce categories: Securely Provision; Operate and Maintain; Oversee and Govern; Protect and Defend; Analyze; Collect and Operate; and Investigate. For some, this compartmentalism is a strength; for others, it is a potential concern.
Steve Durbin, managing director of the Information Security Forum (ISF), comments, "Although the size of the information security workforce in an organization is expected to increase by more than a quarter in the next two years (according to recent ISF research), in some organizations additional staff will not be affordable. The Framework," he believes, "may further help business leaders produce retraining and ambassadorial opportunities for existing staff, in information security and beyond, which will go some way to plugging what is an ever-growing skills gap in an affordable manner."
Nathan Wenzler, chief security strategist at AsTech, is not so confident. He believes it might work in a "heavily structured and siloed environment, such as the Federal government. But," he told SecurityWeek, "for the vast majority of organizations which are already struggling to find qualified cyber security professionals, it may work against them as more and more people are brought up through this Framework and are only adept at a single specialty. Most organizations need much more flexibility from their security personnel."
Steven Lentz, CSO and director of information security at Samsung Research America, has similar concerns. "The Cybersecurity Workforce Framework is a good idea, but in reality, will companies use or pay attention to it -- that is the real question," he said.
Lentz believes that its effectiveness will depend on its reception by the existing security training companies. "How will the current security training certification bodies be affected -- such as ISC2, ISACA, SANS and others? Will they participate and help develop and guide the NIST initiative, or look at it as an alternative that may not go far enough -- or as a government alternative? We all need to keep up with training," he added, "but the training partners need to work together in order for us practitioners to become stronger."
There are other practitioners with even greater concerns. One is Chris Roberts, chief security architect at Acalvio. "I'm not a fan of certificates, of degrees and of any of the formal training," he told SecurityWeek. "I came out of a different era, not quite the novice/apprenticeship time, but not far after it. I learned on the job and was fortunate to have some amazing mentors and a thirst for knowledge. That path does not work for everyone. We need to accommodate that in a better manner than I see here. I do not subscribe to 'you can only be a professional if you have a degree'. That's bullshit logic that is broken by so many people in the world that it needs to be banned. I do subscribe to the fact we are all individuals and this industry has been good at accommodating that and understanding that many in this field don't subscribe to mainstream education."
He may be fighting a losing battle. As any system matures, control becomes centralized. Individual bank managers can no longer decide on loans -- the decision is controlled by the central office algorithm. Chain store managers can rarely choose what they stock -- again it is controlled centrally. Political control invariably moves to the center. The National Initiative for Cybersecurity Education (NICE) Cybersecurity Workforce Framework may be another example of that centralization, currently in the form of guidance and assistance, but ultimately in the form of insistence. It will work for some, but not for others.
Steve Durbin has few doubts. "Some might say the Framework is too simplistic or too little too late but faced with the levels of shortage that many are predicting, this will at least provide organizations with guidance through what can be a very daunting process to attract and retain the right level of cyber skills."
A vulnerability found in the Public Access to Court Electronic Records (PACER) system operated by the Administrative Office of the U.S. Courts could have been exploited by hackers to access legal documents through the accounts of legitimate users.
PACER is an online public access service that allows users to upload and download case and docket information from federal appellate, district and bankruptcy courts. PACER charges $0.10 per page and users are billed every quarter.
The Free Law Project discovered that the system was affected by a cross-site request forgery (CSRF) vulnerability that could have been leveraged to download content from PACER without getting billed for it.
CSRF vulnerabilities are highly common, but that does not make them any less dangerous. The lack of CSRF protection on a website allows other pages opened in the same web browser to interact with the unprotected site.
In the case of PACER, a hacker could have obtained docket reports and other documents at no cost by getting a legitimate user to visit a malicious website while being logged in to the court system. The legitimate user would get billed for the files downloaded by the attacker.
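The missing protection is conceptually simple: embed a secret token in the site's own pages that a page on another origin cannot read, and reject any state-changing request that fails to echo it back. A minimal, framework-agnostic sketch of such an anti-CSRF token check follows; the function names and session shape are illustrative, not PACER's actual code:

```python
# Minimal sketch of the anti-CSRF token pattern. Session handling is
# reduced to a plain dict for illustration; a real application would
# tie this to its web framework's session and form handling.
import hmac
import secrets

def issue_token(session):
    """Generate a per-session token and embed it in every form the site serves."""
    session["csrf_token"] = secrets.token_hex(16)
    return session["csrf_token"]

def handle_download(session, form):
    """Process a billable download only if the request echoes back the token.

    A malicious page on another origin can make the browser *send* a request
    with the victim's cookies, but it cannot *read* the token embedded in the
    legitimate site's pages, so the forged request fails this check.
    """
    submitted = form.get("csrf_token", "")
    if not hmac.compare_digest(submitted, session.get("csrf_token", "")):
        raise PermissionError("possible CSRF: token missing or incorrect")
    return "billed download for " + session["user"]
```

The constant-time comparison via `hmac.compare_digest` is a standard precaution against timing side channels when checking secrets.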
“For users of PACER, unpaid fees can result in damage to their credit, and debt collectors sent to their door at the behest of the AO. They would never know why their PACER bill skyrocketed,” the organization said in a blog post. “For the Administrative Office of the courts, this vulnerability could create chaos in their billing department, and could badly damage the reputation of the organization.”
Free Law Project also believes attackers may have been able to exploit the flaw to upload documents on behalf of lawyers via PACER’s Case Management/Electronic Case Files (CM/ECF) system, but the Administrative Office of the U.S. Courts claimed it was not possible.
“The PACER/ECF system has an annual revenue of around $150M/year, and has around 1.6M registered users. At this scale, this type of vulnerability is extremely troubling,” Free Law Project said. “Cross-site request forgeries are not novel and do not require sophisticated hackers or researchers to discover. We identified this problem while gathering data from PACER, not while attempting to hack it or to research vulnerabilities.”
Free Law Project initially said it was “quite possible” the vulnerability had been exploited in the wild, but in a blog post published on Wednesday it clarified that it has no knowledge of the flaw being exploited. A proof-of-concept (PoC) exploit is available on the organization’s website.
The vulnerability was discovered and reported in mid-February and it was patched by all jurisdictions earlier this month.
Related: CSRF Flaw Allowed Attackers to Hijack GoDaddy Domains
Related: XZERES Fixes CSRF Vulnerability in Small Wind Turbine
GoDaddy has the best password policy among consumer websites; Netflix, Pandora, Spotify and Uber have the worst. This is the finding of a new study into the password practices that different companies encourage or force onto their users.
Dashlane, developer of the Dashlane password manager app that can synchronize passwords across all platforms, has published the findings of its 2017 Password Power Rankings study. It used five researchers to examine the password security criteria of 37 popular consumer sites, and 11 popular enterprise sites. Each site was given one point for each of five good practice criteria.
The criteria tested were password length (that is, at least eight characters); a required mix of alphabetic and numeric characters; a password strength assessment tool (such as a color-coded meter or measurement bar); brute-force protection or account locking (after ten failed logins); and an MFA option. Three points out of the maximum five are considered to be 'adequate', the minimum threshold for good password security.
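The five-point rubric amounts to a simple checklist. This sketch mirrors the criteria as described above; the policy field names are illustrative stand-ins, not Dashlane's actual methodology:

```python
# Illustrative scoring of a site's password policy against the five
# criteria described above. The dict keys are hypothetical field names.

def rate_site(policy):
    """Return a 0-5 score; 3 or more counts as 'adequate'."""
    points = 0
    points += policy["min_length"] >= 8            # 8+ character minimum
    points += policy["requires_alphanumeric_mix"]  # letters and digits required
    points += policy["has_strength_meter"]         # e.g. color-coded bar
    points += policy["locks_after_10_failures"]    # brute-force defense
    points += policy["offers_mfa"]                 # multi-factor option
    return points

# Hypothetical policies matching the scores reported in the study:
godaddy_like = {"min_length": 8, "requires_alphanumeric_mix": True,
                "has_strength_meter": True, "locks_after_10_failures": True,
                "offers_mfa": True}                # scores 5
netflix_like = {"min_length": 4, "requires_alphanumeric_mix": False,
                "has_strength_meter": False, "locks_after_10_failures": False,
                "offers_mfa": False}               # scores 0
```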
Dashlane accepts that password choice is the responsibility of end users, but believes that the service websites also have a responsibility to help the user. "It's our job as users to be especially vigilant about our cybersecurity, and that starts with having strong and unique passwords for every account," said Dashlane CEO Emmanuel Schalit. "However, companies are responsible for their users, and should guide them toward better password practices."
Of the 37 consumer sites examined, only GoDaddy received a 5/5 score. A further 19 sites were deemed adequate, scoring either 3 or 4 out of 5. At the top end, this includes many of the sites that could be expected to do well: Apple, Microsoft, PayPal and Skype. Only just adequate are Facebook, Google, Reddit, Slack, Snapchat, WordPress and Yahoo.
More worrying, however, are those that failed. Amazon, eBay, and Twitter were among those scoring just two points. Dropbox, Evernote and Pinterest scored only one point; and of course, Netflix scored zero.
There is a similar divergence of scores among the enterprise websites. Only Stripe and QuickBooks got top marks, with Basecamp and Salesforce gaining a credible four points. GitHub, MailChimp and SendGrid are 'adequate' with three points. DocuSign and MongoDB (mLab) scored a disappointing two points; while, worryingly, Amazon Web Services and Freshbooks scored only one point.
It should be stressed that this survey relates only to the way in which the service provider helps the user in password choice and use -- it says nothing about the overall security posture of the website itself (for example, whether behavioral access controls are implemented internally and operated passively). Nevertheless, user credentials are frequently involved in data breaches, and service providers should do everything possible to strengthen their defense.
Dashlane noted a few very worrying specifics. Its researchers were able to create passwords using nothing but the lower-case letter 'a' on sites that include Amazon, Dropbox, Google, Instagram, LinkedIn, Netflix, Spotify, Uber, and Venmo. Netflix and Spotify actually accepted 'aaaa' passwords. The concern here is that if such simple passwords are acceptable, many users will choose a similarly simple -- and common -- password.
Earlier this year, an analysis of 10 million passwords revealed that the 25 most popular passwords are used to secure over 50% of accounts. Dashlane's recommendation to online service providers in such cases is basically fourfold. Firstly, passwords should have a minimum length of eight characters. Secondly, they should be required to be a case-sensitive mix of upper- and lowercase alphabetic and numeric characters. Thirdly, the service provider should ban the most popular passwords. And finally, in case an attacker is working through a list of common passwords, an automatic account lock should be applied after a pre-defined number of failed attempts.
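Those four recommendations translate directly into server-side checks. A minimal sketch, with an assumed tiny deny-list standing in for a real common-password database and an assumed lockout threshold:

```python
# Illustrative server-side checks for the four recommendations above.
# COMMON_PASSWORDS is a tiny stand-in for a real deny-list, and the
# lockout threshold is an assumption, not a mandated value.

COMMON_PASSWORDS = {"123456", "password", "qwerty", "aaaa", "letmein"}
MAX_FAILED_ATTEMPTS = 5

def is_acceptable(password):
    """Enforce length, case-sensitive alphanumeric mix, and the deny-list."""
    return (len(password) >= 8
            and any(c.islower() for c in password)
            and any(c.isupper() for c in password)
            and any(c.isdigit() for c in password)
            and password.lower() not in COMMON_PASSWORDS)

def record_failure(account):
    """Lock the account once too many consecutive logins have failed."""
    account["failures"] += 1
    if account["failures"] >= MAX_FAILED_ATTEMPTS:
        account["locked"] = True
```

The lockout check is what defeats an attacker walking through a list of common passwords: even weak passwords survive only a handful of guesses before the account closes.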
While such practices from the service providers will help the user, every web user must remember that it is his or her responsibility to choose a strong and unique password for each different account.
Hackers broke into the database of Kenya's electoral commission and manipulated the results of the election, the leader of the country's opposition coalition alleged on Wednesday.
Vote counting is ongoing in East Africa's strongest democracy after Tuesday's election, in which voters were asked to either re-elect President Uhuru Kenyatta or replace him with longtime opposition leader Raila Odinga.
Odinga claims hackers used the credentials of a murdered employee of the electoral commission (IEBC) to break into an electronic voting system and activate an algorithm that inflated Kenyatta's votes.
"These results are fake, it is a sham. They cannot be credible," Odinga told reporters at a morning press conference.
"This is an attack on our democracy. The 2017 general election was a fraud."
He later released what he claimed was a log from an IEBC server to support his allegations that the server was configured to increase Kenyatta's totals by 11 percent and cover up the modifications.
The log, and Odinga's allegations, have not been independently verified.
With ballots from 92 percent of polling stations counted, electoral commission (IEBC) results showed Kenyatta leading, with 54.4 percent of the nearly 13 million ballots tallied, against Odinga's 44.7 percent, a difference of 1.3 million votes.
But Odinga believes the vote is actually in his favour, and tweeted that a count of ballots by his National Super Alliance (NASA) coalition showed him in the lead.
He said the hacking affected all the results, both the presidential and the general election.
The hackers were able to access the system using the credentials of Chris Msando, a top IT official at the IEBC found tortured and murdered in late July, Odinga said.
He would not say how he got the information, saying he wanted to protect his source.
The 72-year-old is making his fourth bid for the presidency, and has previously accused his rivals of stealing victory from him through rigging in 2007 and in 2013.
In 2007, the disputed vote resulted in two months of ethnically driven political violence that killed 1,100 people and displaced 600,000, a major blow to a nation seen as a regional bastion of stability.
The contested election in 2013 was taken to the courts and ended largely peacefully, though Odinga lost.
Odinga urged his supporters to "remain calm as we look deep into this matter," but added: "I don't control the people."