Facebook Google Plus Twitter LinkedIn YouTube RSS Menu Search Resource - BlogResource - WebinarResource - ReportResource - Eventicons_066 icons_067icons_068icons_069icons_070

Cybersecurity Snapshot: Cyber Pros Emerge as Bold AI Adopters, While AI Changes Data Security Game, CSA Reports Say



Cybersecurity Snapshot: Cyber Pros Emerge as Bold AI Adopters, While AI Changes Data Security Game, CSA Reports Say

Formerly “AI shy” cyber pros have done a 180 and become AI power users, as AI forces data security changes, the CSA says. Plus, PwC predicts orgs will get serious about responsible AI usage in 2026, while the NCSC states that, no, prompt injection isn’t the new SQL injection. And much more!

Key takeaways

  1. Cyber pros have pivoted to AI: Formerly AI-reluctant, cybersecurity teams have rapidly become enthusiastic power users, with over 90% of surveyed professionals now testing or planning to use AI to combat cyber threats.
  2. Data security requires an AI overhaul: The Cloud Security Alliance warns that traditional data security pillars require a "refresh" to address unique AI risks such as prompt injection, model inversion, and multi-modal data leakage.
  3. Prompt injection isn't a quick fix: Unlike SQL injection, which can be solved with secure coding, prompt injection exploits the fundamental "confusability" of LLMs and requires ongoing risk management rather than a simple patch.

Here are five things you need to know for the week ending December 19.

1 - CSA-Google study: Cyber teams heart AI security tools

Who woulda thunk it?

Once seen as artificial intelligence (AI) laggards, cybersecurity teams have become their organizations’ most enthusiastic AI users.

That’s one of the key findings from “The State of AI Security and Governance Survey Report” from the Cloud Security Alliance (CSA) and Google Cloud, published this week.

“AI in security has reached an inflection point. After years of being cautious followers, security teams are now among the earliest adopters of AI, demonstrating both curiosity and confidence,” the report reads. 

Specifically, more than 90% of respondents are assessing how AI can enhance detection, investigation, or response processes by either already testing AI security capabilities (48%), or planning to do so within the next year (44%). 

“This proactive posture not only improves defensive capabilities but also reshapes the role of security — from a function that reacts to new technologies, to one that helps lead and shape how they are safely deployed,” the report adds.
 

Chart from “The State of AI Security and Governance Survey Report” from the Cloud Security Alliance (CSA) and Google Cloud

(Source: “The State of AI Security and Governance Survey Report” from the Cloud Security Alliance (CSA) and Google Cloud, December 2025)

Here are more findings from the report, which is based on a global survey of 300 IT and security professionals:

  • Governance maturity begets AI readiness and innovation: Organizations with comprehensive policies are nearly twice as likely to adopt agentic AI (46%) than those with partial guidelines or in-development policies.
  • The "Big Four" dominate: The AI landscape is consolidated around a few major players: OpenAI’s GPT (70%), Google’s Gemini (48%), Anthropic’s Claude (29%) and Meta’s LLaMa (20%).
  • There’s a confidence gap: While 70% of executives say they are aware of AI security implications, 73% remain neutral or lack confidence in their organization's ability to execute a security strategy.
  • Organizations’ AI security priorities are misplaced: Respondents cite data exposure (52%) as their top concern, often overlooking AI-specific threats like model integrity (12%) and data poisoning (10%).

“This year’s survey confirms that organizations are shifting from experimentation to meaningful operational use. What’s most notable throughout this process is the heightened awareness that now accompanies the pace of [AI] deployment,” Hillary Baron, the CSA’s Senior Technical Research Director, said in a statement. 

Recommendations from the report include:

  • Expand your AI governance using AI-specific industry frameworks, and complement these efforts with independent assessments and advisory services.
  • Boost your AI cybersecurity skills through training, upskilling, and cross-team collaboration.
  • Adopt secure-by-design principles when developing AI systems.
  • Track key AI metrics for things such as AI incidents, training completion rates, AI systems under governance, and AI projects reviewed for risk and threats.

“Strong governance is how you create stability in the face of rapid change. It’s how you ensure AI accelerates the business rather than putting it at risk,” reads a CSA blog.

For more information about using AI for cybersecurity:

2 - CSA: You need new data security controls in AI environments

Do the classic pillars of data security – confidentiality, integrity and availability – still hold up in the age of generative AI? According to a new white paper from the Cloud Security Alliance (CSA), they remain essential, but they require a significant overhaul to survive the unique pressures of modern AI.

The paper, titled “Data Security within AI Environments,” maps existing security controls to the AI data lifecycle and identifies critical gaps where current safeguards fall short. It argues that the rise of agentic AI and multi-modal systems creates attack vectors that traditional perimeter security simply cannot address.
 

Cover page of CSA report  “Data Security within AI Environments”


Here are a few key takeaways and recommendations from the report:

  • New controls proposed: The CSA suggests adding four new controls to its AI Controls Matrix (AICM) to specifically address prompt injection defense; model inversion and membership inference protection; federated learning governance; and shadow AI detection.
  • Multi-modal risks: Systems that process text, images and audio simultaneously introduce "unprecedented cross-modal data leakage risks," where information from one modality can inadvertently expose sensitive data from another. The CSA suggests enforcing clear standards and isolation controls to prevent such cross-modal leaks.
  • Third-party guardrails: As regulatory scrutiny increases, organizations must adopt enforceable policies, such as data tagging and contractual safeguards, to ensure proprietary client data is not used to train third-party models.
  • Dynamic defense: Because AI threats evolve rapidly, static measures are insufficient. The report recommends establishing a peer review cycle every 6 to 12 months to reassess safeguards.

"The foundational principles of data security—confidentiality, integrity, and availability—remain essential, but they must be applied differently in modern AI systems," reads the report.

For more information about securing data in AI systems:

3 - PwC: Responsible AI will gain traction in 2026

Is your organization still treating responsible AI usage as a compliance checkbox, or are you leveraging it to drive growth? 

A new prediction from PwC suggests that 2026 will be the year companies finally stop just talking about responsible AI and start making it work for their bottom line.

In its “2026 AI Business Predictions,” PwC forecasts that responsible AI is moving "from talk to traction." This shift is being driven not just by regulatory pressure, but by the realization that governance delivers tangible business value. In fact, almost 60% of executives in PwC's “2025 Responsible AI Survey” reported that their investments in this area are already boosting return on investment (ROI).
 

Chart from PwC report “2026 AI Business Predictions”

To capitalize on this trend, PwC advises organizations to stop treating AI governance as a siloed function, and to instead take steps including:

  • Integrate early: Bring IT, risk and AI specialists together from the start of the project lifecycle.
  • Automate oversight: Explore new technical capabilities that can operationalize testing and monitoring.
  • Add assurance: For high-risk or high-value systems, independent assessments may be critical for managing performance and risk.

“2026 could be the year when companies overcome this challenge and roll out repeatable, rigorous responsible AI practices,” the report states.

For more information about secure and responsible AI use, check out these Tenable resources:

4 - Report: Ransomware victims paid $2.1B from 2022 to 2024

If you thought ransomware activity felt explosive in recent years, the U.S. Treasury Department has the receipts to prove you right. 

Ransomware skyrocketed between 2022 and 2024, a three-year period in which incidents and ransom payments grew exponentially compared with the previous nine years.

The finding comes from the U.S. Financial Crimes Enforcement Network (FinCEN) report titled “Ransomware Trends in Bank Secrecy Act Data Between 2022 and 2024.” 

Between January 2022 and December 2024, FinCEN received almost 7,400 reports tied to almost 4,200 ransomware incidents totaling more than $2.1 billion in ransomware payments.

By contrast, during the previous nine-year period – 2013 through 2021 – FinCEN received 3,075 reports totaling approximately $2.4 billion in ransomware payments. 

The report is based on Bank Secrecy Act (BSA) data submitted by financial institutions to FinCEN, which is part of the U.S. Treasury Department.

Chart from U.S. Financial Crimes Enforcement Network (FinCEN) report titled “Ransomware Trends in Bank Secrecy Act Data Between 2022 and 2024”

(Source: U.S. Financial Crimes Enforcement Network (FinCEN) report titled “Ransomware Trends in Bank Secrecy Act Data Between 2022 and 2024,” December 2025)

Here are a few key findings from the report:

  • Record-breaking 2023: Ransomware incidents and payments peaked in 2023, with 1,512 incidents and about $1.1 billion, a dollar amount increase of 77% from 2022.
  • Slight dip, still high: While 2024 saw a slight decrease to 1,476 incidents and $734 million in payments, it remained the third-highest yearly total on record.
  • Median payment amounts: The median amount of a ransom payment was about $124,000 in 2022; $175,000 in 2023; and $155,250 in 2024.
  • Most targeted sectors: The financial services, manufacturing and healthcare industries reported the highest number of incidents and payment amounts.
  • Top variants: FinCEN identified 267 unique ransomware variants, with Akira, ALPHV/BlackCat, LockBit, Phobos and Black Basta being the most frequently reported.
  • Crypto choice: Bitcoin remains the primary payment method, accounting for 97% of reported transactions, followed distantly by Monero (XMR).

How can organizations better align their financial compliance and cybersecurity operations to combat ransomware? The report emphasizes the importance of integrating financial intelligence with technical defense mechanisms. 

FinCEN recommends the following actions for organizations:

  • Leverage threat data: Incorporate indicators of compromise (IOCs) from threat data sources into intrusion detection and security alert systems to enable active blocking or reporting.
  • Engage law enforcement: Contact federal agencies immediately regarding activity and consult the U.S. Office of Foreign Assets Control (OFAC) to check for sanction nexuses.
  • Enhance reporting: When reporting suspicious activity to FinCEN, include specific IOCs such as file hashes, domains and convertible virtual currency (CVC) addresses.
  • Update compliance programs: Review anti-money laundering (AML) programs to incorporate red flag indicators associated with ransomware payments.

For more information about current ransomware trends:

5 - NCSC: Don’t conflate SQL injection and prompt injection

SQL injection and prompt injection aren’t interchangeable terms, the U.K.’s cybersecurity agency wants you to know.

In the blog post “Prompt injection is not SQL injection (it may be worse),” the National Cyber Security Centre unpacks the key differences between these two types of cyber attacks, saying that knowing the differences is critical.

“On the face of it, prompt injection can initially feel similar to that well known class of application vulnerability, SQL injection. However, there are crucial differences that if not considered can severely undermine mitigations,” the blog reads.

While both issues involve an attacker mixing malicious "data" with system "instructions," the fundamental architecture of large language models (LLMs) makes prompt injection significantly harder to fix.
 

UK NCSC logo


The reason is that SQL databases operate on rigid logic where data and commands can be clearly separated via, for example, parameterization. Meanwhile, LLMs operate probabilistically, predicting the "next token" without inherently understanding the difference between a user's input and a developer's instruction.

“Current large language models (LLMs) simply do not enforce a security boundary between instructions and data inside a prompt,” the blog reads.

So how can you mitigate the prompt injection risk? Here are some of the NCSC’s recommendations:

  • Developer and organization awareness: Since prompt injection is a relatively new and often misunderstood vulnerability, organizations must ensure developers receive specific training. Security teams should treat it as a residual risk that requires ongoing management through design and operation, rather than relying on a single product to fix it.
  • Secure design: Because LLMs are “inherently confusable,” designers should implement deterministic, non-LLM safeguards to constrain system actions. A key principle is to limit the LLM's privileges to match the trust level of the user providing the input.
  • Make it harder: While no technique can stop prompt injection entirely, methods such as marking data sections or using XML tags can reduce the likelihood of success. The NCSC warns against relying on “deny-listing” specific phrases, as attackers can easily rephrase inputs to bypass filters.
  • Monitor: Organizations should log LLM inputs, outputs and API calls to detect suspicious activity. Monitoring for failed tool calls can help identify attackers who are honing their techniques against the system.

For more information about AI prompt injection attacks:


Cybersecurity news you can use

Enter your email and never miss timely alerts and security guidance from the experts at Tenable.

× Contact our sales team