Generative AI is being used for cyber-attacks, Deep Instinct says


A new study from US-based cybersecurity company Deep Instinct has found that generative AI is driving a growing number of cyber-attacks. The company, which focuses on proactive prevention of unknown malware, has released the fourth edition of its Voice of SecOps Report.

The research, titled "Generative AI and Cybersecurity: Bright Future or Business Battleground?", was carried out by Sapio Research and surveyed more than 650 senior security operations professionals in the United States, including CISOs and CIOs. The report examines the influence of generative AI on the cybersecurity sector, analysing its positive and negative impacts on organisational security readiness. Notably, 75% of security professionals saw an upsurge in attacks over the past year, and 85% attributed this escalation to malicious actors utilising generative AI technology.

The study reveals that 69% of respondents have already integrated generative AI tools within their organisations, and the finance sector leads in adoption, with 80% utilisation. Additionally, the study highlights that 70% of security professionals see generative AI as a contributor to enhanced employee productivity and collaboration, with 63% recognising its positive influence on employee morale. Despite the positive reception, senior security experts view generative AI as a disruptive force in cybersecurity.

Almost half (46%) of the respondents believe that generative AI could heighten their organisation's susceptibility to attacks. The main concerns associated with generative AI include privacy risks (39%), harder-to-detect phishing attacks (37%), and an increase in both the frequency and speed of attacks (33%). The study also points to instances of generative AI technology being repurposed by malicious actors.

An example is WormGPT, a new generative AI tool promoted on underground platforms as a means for bad actors to orchestrate sophisticated phishing and business email compromise attacks. Apart from concerns linked to generative AI, ransomware remains a persistent threat to organisations. Approximately 46% of the respondents identified ransomware as the most substantial risk to their data security.

Furthermore, 62% acknowledged ransomware as the primary concern for C-suite executives, marking an increase from 44% in the previous year.

A change in strategy

The pressure imposed by ransomware threats has prompted organisations to adjust their data security approaches. As a result, 47% of respondents have now established policies to pay ransoms, compared to 34% in the prior year.

Consequently, 42% of those surveyed admitted to paying ransoms for data retrieval over the past year, up from 32% in the previous year. However, the percentage of those relying on ransomware insurance to make payments has decreased from 62% in 2022 to 43% in 2023. The study highlights that contemporary cybersecurity teams are facing higher workloads due to the integration of new technologies such as generative AI.

More than half (55%) of security professionals consequently reported elevated stress levels, primarily attributed to staff and resource limitations (42%). Furthermore, 51% said they are likely to leave their roles within the next year due to stress-related factors.


Aug 24, 2023 13:29