Webinaire à la demande
Atténuation des risques de sécurité liés à l'IA : informations et stratégies avec Tenable AI Aware
- Exposure Management
- Risk-based Vulnerability Management
- Tenable Vulnerability Management
Rejoignez-nous pour discuter de nos dernières recherches sur les risques de sécurité liés à l'IA et découvrir comment Tenable AI Aware vous permet de garder une longueur d'avance sur ces menaces émergentes.
Une utilisation de l'IA trop intense et potentiellement non autorisée crée de nouveaux problèmes de sécurité. Lors du webinaire « Atténuation des risques de sécurité liés à l'IA : informations et stratégies avec Tenable AI Aware » (tenu en anglais), vous obtiendrez des informations sur des exemples concrets d'utilisation de l'IA non autorisée et sur les façons dont certains développements d'IA inattendus pourraient compromettre votre posture de sécurité. Vous découvrirez aussi comment traiter ce risque avec Tenable AI Aware. Voici les sujets qui seront traités au cours de ce webinaire :
- Comment les capacités de détection développées par Tenable Research permettent l'identification d'une utilisation de l'IA non autorisée, de faire émerger des vulnérabilités liées à l'IA et de mettre en lumière des développements d'IA inattendus
- Une démo détaillée de Tenable AI Aware
- Comment les capacités de Tenable AI Aware améliorent votre compréhension et votre contrôle globaux sur les cyber-expositions qui posent le plus grand risque pour votre entreprise
Click here to review the webinar transcript
Webinar Transcript: Mitigating AI-Related Security Risks
Brittany Isaacs
Hello, everyone, and welcome to Mitigating AI-Related Security Risks, Insights, and Strategies with Tenable AI Aware.
Today, Luke and James are here to discuss AI-related risks and our new capability, which is Tenable AI Aware.
We'll wait another 30 seconds to get started, and then we'll proceed. Thank you all for joining.
Luke Tamagna-Darr
I’ll go ahead and get started. Well, hi, everyone. As Britney said, today, we're going to talk about mitigating AI-related security risks and the insights and strategies with Tenable AI, a new capability we've launched.
Just quick introductions. My name is Luke Tamayadar. I'm a senior director of engineering in our research organization and heavily focused on our sensor content and how we drive new product capabilities from that. James, you wanna do a quick introduction?
Thanks.
Alright, so real quick. You know why AI matters in today's cyber landscape. So, AI is not a novel concept. Organizations have been developing capabilities with machine learning, natural language processing, and similar capabilities for years.
You know, Tenable has had VPR for, I think, about 5 years. We have our asset exposure, score, and asset criticality rating. There are even some machine learning capabilities that help drive OS fingerprinting for Nessus.
These are all different levels of artificial intelligence. One sec…
But it's quickly become a transformative technology. Right now, over 95% of businesses are expected to adopt some form of AI by 2025, and 80% of enterprises already have developed or are developing AI projects. It brings a lot of opportunities for innovation and efficiency, but it also introduces some new risks to security.
Some of those risks are new, some of them are just different iterations of risks that have already existed.
One thing that's really important to recognize. AI doesn't operate in isolation. It's running on the same systems and the same infrastructure that is managing your critical assets, your sensitive data, which means the attack surface that you already have is simply expanding. Global AI market is growing rapidly. It's projected to reach 1.59 trillion by 2030.
In spite of this growth, many organizations—around 60%—struggle with securing their AI models or infrastructure.
In fact, 41% of security professionals say AI tools are increasing the attack surface and creating new vulnerabilities.
So this is a really critical challenge, as evolving incredibly quickly and far faster than our ability to manage and secure with. And one of the first things that comes to that is simply knowing where you have AI. That's where AI Aware comes into play. It gives you visibility into what's on your environment and how you need to secure it.
So, a little bit on the evolution of the attack surface. As I said, most companies have had AI capabilities for years. A lot of these capabilities, like our VPR AES and ACR, they run really far in the background. They're on dedicated services that are running maybe once a day, or you know, a couple of times throughout the day. But there's no direct connection between those services and you know, human interaction.
Which puts a significant barrier between any sort of attackers. Most of the risks in those environments are data poisoning. So, someone managing to compromise the data that is feeding into those models and create results that are not what you'd expect.
But for an attacker to compromise that, they need to control the data source or have a complex attack path. That's going to link the other, typically multiple, disjointed services, you know, an infrastructure that may take a lot of insider knowledge. So it's a complicated attack path.
LLMs are really where things started to shift. We saw a lot of excitement as Chat GPT released their models. And you know you had the Chat GPT client because not only did they give this impression of intelligence, where you could ask it a question and it would tell you something that looked like it was intelligent.
It gives organizations the ability to start leveraging a lot more of their data and feeding that into these capabilities. But at the same time, that's bringing the attack surface much closer. You've got direct user interaction. So anyone who's looking to compromise your infrastructure no longer needs to figure out the path to those AI capabilities.
It's right there in front of them, with their prompts.
Prompt injection has become a major concern. There's still a lot of uncertainty around the overall reliability of the attacks. We've seen evidence that prompts can be compromised. They can be tricked into either hallucinating or, you know, depending on the deployment, executing code.
But because LLMs are not generally deterministic in their response, they are susceptible to hallucinations.
There is still a lot of fuzziness. It's not like a SQL injection in a web app where you send a payload, and you get, you know, the database details back, and you know you've succeeded. With an LLM prompt, you might say, ‘Tell me the customer ID for this customer,’ and it will tell you something, knowing that that's actually compromised. The prompt is a whole other level of complexity, and knowing you can do it predictably. So, prompt injection is a major concern. But it's not clear yet how serious, how predictable, how reliable it is!
What is better known now is the underlying infrastructure, which is a similar concern we have across all apps. That infrastructure is now much more directly ‘compromisable’ by attackers because of that user interaction through the prompt, which could allow an attacker to compromise the language libraries that are underneath the application you've developed. It may compromise the services that are running it and lead to further attacks.
Even third-party application usage of LLMs is presenting a much more significant data risk.
You know. So, most machine learning. And LLM, NLP, that's using internal data. It's things you're generating and trying to create a capability out of them, because these third-party applications are pulling in anything you feed into them. Depending on how that 3rd party application is built, there's a much more significant risk of data compromise simply by employees uploading sensitive information… Especially if that is being uploaded to an application that's using a public model that doesn't have any contractual sort of separation of their application data from the global data.
Let's conduct a quick poll on how confident you are in your visibility into AI usage.
Alright, so you know, results are about what we've seen, you know, from talking with customers.
A lot of you out there have, you know, about 50%. Have some confidence in your AI usage, and 40% have no confidence at all. And that's it's really telling. And it's not surprising. AI is, it's not always obvious where it is. And having that visibility, not knowing how much you have that can be a significant risk.
So, do we have full visibility into AI usage? You know, AI is much more pervasive than you realize. A lot of the concern. A lot of the sort of hype and concern and noise is around sort of that direct interaction with public LLMs. That employees may be putting sensitive information into. There's a lot of fear around both the prompt injection, but also just data leakage.
Many applications and browser extensions these days leverage LLM capabilities. It's really exploded over the years.
And it's not always obvious what language models those applications are using. Is it a public model? Is it, you know, a public model that is paid for? And you have data protections? Or is it just using Chat GPT, you know, and feeding the data into the global public data set?
Further complicating this, I was doing some research, and if you just search in Firefox for the term AI, you have 2,000 browser extensions. I'm sure there's some mix of, you know, different things that are using AI. But it's not AI. Searching for LLM is returning 300 browser extensions, and I'm sure that's growing on a daily basis, as everyone builds their own extension.
Some of the applications that leverage AI and LLMs are probably primarily focused on time to market. It's unfortunately a reality anytime there's some new technology that comes to market. There's a rush to get your product out there and use it.
And for a lot of these organizations and app developers, security and privacy are unfortunately a secondary concern.
So, it is not always obvious or easy to find out if this app is developed in a secure, privacy-centric mode.
Even for teams that are developing in-house applications. The risk of developing with weaker, vulnerable libraries can lead to overall compromise. There has not been a ton of attention on these libraries until recently, but there has been increased attention from the industry, from companies starting to leverage them. And these are libraries like, you know Scikit, Numpy, Pandas… They've had heavy usage, but now there's gonna be a much higher degree of research on compromising these, leveraging them for attacks.
And that's also gonna get the increased attention from attackers.
Also, because of the pervasive usage of LLMs and the underlying libraries, the attack surface and value opportunity for attackers have increased because there's a lot more deployment out there.
So these are a lot of the risks, and with not having that visibility.
Challenges of detecting it. It's not straightforward. It's embedded as part of an application, whether it's a browser plugin or a chat service you're using, or a hosted chat service.
And the potential for unsanctioned AI projects is high. Still, you know, to some degree it's loose and fuzzy, you know. LLM has all the attention. But there's machine learning, and all of that, so knowing which applications are doing it is not always obvious. And there's just a wide range of language libraries and capabilities that provide these.
Some of the biggest challenges… The rapid development. So AI tech has evolved rapidly, you know, Chat GPT just came out with their models a few years ago. And now we've got models, you know, Copilot Llama 3, Claude. There are hosted services. There are offline models you can use. Even each of these individual models. You're seeing new versions come out on a regular basis with increasing power and capabilities.
Lack of governance is a major challenge. With that rapid pace of development.
It takes time to review what's out there. What teams are using, what the best model is, what the best app is, which one is gonna work for your environment with your security and privacy risks.
And if these keep evolving as new ones come out. New versions come out. You've got to redo those evaluations over and over again.
And it, you know, one of the challenges, you know, teams have to make is, do you take a default of denial approach… You say, we can't use any AI until we've decided on the best ones that we will allow.
That's going to be more secure. But you're going to significantly slow down your pace of innovation at the same time.
If you take a default allow-all, you say, use what you want. We'll figure out which ones are risky as we go.
You can move quickly. But there are a lot of security risks associated with this. You have to find the right balance.
And on top of that, it’s that increasing complexity… It's multi-layered. It's not just the you know the prompt that you're interacting with. It's the prompt. It's the underlying infrastructure. It's the hosted service. If you're using a hosted model it's the model…If you're not using a hosted model, you're using your own self-hosted.
It's whatever's host is working on that model. And you're also looking at all of the data sources that may be feeding into that, you could be pulling in your Google drive and feeding all of that. And if you don't know which drives and what files are in there. You may have sensitive data going in.
So it's really critical to understand for each application you have what the impact is, how they're using the data, and just know what those applications are.
So, how are we addressing the AI risk?
What we've done to start out with is using Nessus, web application scanning, and NNM to develop plugins that are aimed specifically at identifying AI and LLM-enabled applications. So this is applications, browser extensions, and language libraries that are that are primarily leveraged for AI feature development. Two aims of this: is simply to enumerate all of those capabilities on an asset. So you have it front and center, and you can see on this asset what those capabilities are present, and then where we know about them, identifying critical vulnerabilities in those applications or libraries. The majority of these are in the libraries.
Or some of the hosted apps that you may download. We don't see as much in the browser extensions. I don't think they get as much attention as they may need to.
That helps you understand as you go through, especially in that infrastructure you're developing.
What are the vulnerabilities, and which ones do you have to address quickly?
The goal here is to help reduce the risk of AI-related breaches. If there is a breach, it's a significant data set that is being fed into the model. And if someone can compromise that, you can have some serious sensitive data that gets leaked out.
There's also risk. If the prompt is not properly secured…Organizational embarrassment. We've seen cases where attackers are able to manipulate the prompt or hijack the prompt and have it respond to customers with responses that put the company in a bad light and simply focus on embarrassment. And you've always got the traditional risks, attacks that are leading to code execution on the server. Once you have code execution, you can start to pivot either into that data set or onto other systems.
A lot of the vulnerabilities we're seeing are still targeting the underlying infrastructure. Like I said, prompt injection is going to be an interesting space. I think it will expand, and it will become more targeted.
But it's complicated. The underlying infrastructure is very well known. And if you can go after that and go after what you know, you're gonna have a higher degree of success with the plugins we've released.
And some of the data we're already seeing over 60% of customers have leveraged AI detection capabilities. We have identified over 9 million AI apps on customer systems and more than 90 million AI browser plugins detected.
And these browser plugins, from my experience, are the ones that really fly under the radar. Because most browser extensions are not something that's monitored as broadly. But it does have a significant impact as customers start to love or employees start to leverage those and feed data into them.
So with that, we'll do one more poll.
Alright, so yeah, this is, you know, this is what we've seen in previous surveys and discussions.
Heavy focus on the data leakage risks. Unauthorized AI solutions with some concerns about AI vulnerabilities and then prompt injection. And really, what we're focused on with this 1st capability is highlighting that usage, helping you understand? Who's using AI capabilities, what those capabilities are, and then establishing some governance policies that say, here's the tools you can use. Here are the models you're allowed to use. Here are the models you can't use. And if you see employees who are using those models or using tools that are authorized take some action and make sure people are using the right tools and capabilities, and that will hopefully reduce the risk of data leakage and unauthorized usage.
With that, I'll hand it over to James for a demo.
Sorry, I was muted. James, while you've got that is there a specific version of Security Center needed for the AI dashboards?
OK.
And then can you show real quick how you got to the AI aware dashboard for?
Sorry, you can identify those on the Plugin search page that James showed… You can look Local plugins will require credentials, remote and combined plugins can be done remotely, with combined being both.
Cody, Tenable I/O… Tenable Vulnerability Management is just a new name for Tenable I/O.
Is there a mobile app? No, we don't have a mobile app.
Oui.And as James said, if you are still having trouble, please reach out to support because they can help you get things up and running.
Yeah, we're exploring our options. It's looking for the best path forward. And there is, you know, further work in VM. To expose this throughout the product.
Cody asked, is there a specific scan that needs to be run, or a specific setting and scans that need to be enabled? Was setting the one setting in general settings all that's needed for the actual scans with Nessus and web application scanning?
If you're running scans with every plugin enabled, that should be all you need to do. There is a scan policy, a targeted scan policy for the AI plugins, which simply enables the Plugins in the AI Plugin family…
Again, you would for non-agent scans. You would need to set credentials and everything. But if you're running agents, or if you're running credentialed, NASA, scans with every plugin enabled, it should work out of the box.
Alright, any last questions?
C'est bien compris ?Well.
Merci.Everyone. I think that should. That concludes it.
James Davies
My name is James Davies. I'm a senior security engineer with Tenable, and I work in the Eastern US region.
Thanks, Luke, all right. So there are a couple of different things I want to talk about first. And if you have Security Center or Tenable VM, you're more than happy to follow along.
But I want to show you, one, how to enable this and kind of where to go to look for your AI results.
Alright. So First things first in Tenable VM you need to enable the output search. So if you're in Tenable Vulnerability Management and you have admin rights, you need to go into general, and you have to go into search.
This has to be enabled first in order to get this to work. This is part of our kind of how we're indexing plugins.
That's gonna change in a little bit while we do just plug an output. But for today. If you want to enable this, feel free to go ahead…
With that, to get started, after you've done your scans with the Nessus scanner and the Nessus agents. We can come over to ‘findings and vulnerabilities.’
You'll notice there's a button called AI inventory.
Once you select this, we are going to go ahead and filter out all of the noise, and show you exactly what the Plugin, the AI. So this is a roll-up Plugin. So all of our AI detections roll up to this one single AI/LLM software report.
You'll notice that we also tag. So, when we detect an AI technology being used, whether that's a browser extension, AI locally, web application… We're going to tag it with this AI/ LLM tools.
So you kind of get like a quick look at what AI is being used on that asset.
So that's just one way of identifying where AI is being used in the environment.
The other way is through our dashboard.
When you come into the Tenable VM dashboards, go over here to all dashboards or new from template.
We can go ahead and search for it. The best way to do so is by typing in LLM.
So now we can see we have an AI/LLM dashboard.
I already have this selected.
So if we come into here, we now have our AI/LLM dashboard and we can see we have quite a significant amount of detections via the Nessus and NNM.
One of the best things with, you know… As long as we have credential-based scans or agent-based scans. We're able to pick this up.
Also, if we're doing web application scans with either Nessus or web application scanner included in Tenable VM WAS, You're able to kind of see what is being picked up by WAS scanner.
NNM is interesting because if you have TVM (Tenable vulnerability management), NNM is included. This allows you to passively pick up AI AI/LLM usage by passively listening on the network.
So down here at this column, we can see that this specific network is using a heavy AI, so we can see Google Vertex, Open AI detection. And again, this is all being done passively, so there's no active scanning necessary with NNM you just have to tap a span port or port mirroring configured to look at that core switch and see all the traffic that's going on.
You also have a nice pie chart to kind of breaks down the different detections, the AI software known-by-Nessus and Nessus agent. And then what Luke mentioned is those browser plugins. So these are those hard to understand or hard to assess where AI is being used, so that AI data leakage, you know, via the browser plugins should kind of raise a red flag.
We had a customer... I had a customer a couple of weeks ago, once they had this, was kind of blindsided with how many browser plugins were in use in the environment.
They have an AI Usage policy where no browser plugins are allowed. So, one of these, they slipped through the cracks. Whether they were hardening with a policy management via, you know Google or any kind of extension management, they kind of slip through. So this was able to kind of quickly show what app assets in the environment were showing those browser plugins.
And then down here by web applications. So again, this allows you to kind of detect the use of AI in a web app.
This dashboard again, you know, you can also customize it if you want to kind of look for certain things you want to filter on certain assets.
Maybe you have assets that are allowed to use AI. Maybe they're not allowed to use AI. So you can come in here and quickly filter for just those assets or tags within Tenable VM.
Another place you can also utilize detections in our dashboards is if you're a Security Center customer. So, Security Center also has the AI/LLM findings dashboard.
Again. This supports both the Nessus scanner and the Nessus agent along with the web application scanner and NNM, if you're licensed for it in Security Center.
If there are any questions about this, feel free to throw them in the Q&A chat, and we'll be able to answer…
Q&A Section
Luke Tamagna-Darr
Sorry, I was muted. James, while you've got that is there a specific version of Security Center needed for the AI dashboards?
James Davies
No, they are not. They are included in all versions of Security Center.
Luke Tamagna-Darr
OK.
And then can you show real quick how you got to the AI aware dashboard for?
James Davies
Tout à fait.
So when you want to add it, come up here to dashboards, and we want to choose a new template from templates.
And we just wanna type in ‘new dashboard’ oops. Sorry.
I'm gonna go back to.
It's going to be within the dashboard templates. So new dashboard template library.
and LLM.
Then you'll be able to see it right here, and we can add that as a dashboard.
And then, once it's thrown in, you'll be able to see it right from here, and you'll be able to view this.
David. Yes, authentication is required to get the local AI detections. If it's a web-based we'll detect it with the web application.
Luke Tamagna-Darr
Sorry, you can identify those on the Plugin search page that James showed… You can look Local plugins will require credentials, remote and combined plugins can be done remotely, with combined being both.
Cody, Tenable I/O… Tenable Vulnerability Management is just a new name for Tenable I/O.
Is there a mobile app? No, we don't have a mobile app.
James Davies
Is a I don't let's see, is there?
Charles, if you go to Tenable.com slash plugins, and then if you're getting a 4 0 4 for this…not sure why, if you go to Nessus families, and then you go to artificial intelligence, you'll be able to see those plugins right here.
Josh, this is live now. What you want to do is make sure your plugins are up to date.
Make sure you're running authentication scans.
It's not retroactive, so you have to be continuously scanning. Luke, this has been live for about a month or two now?
Luke Tamagna-Darr
Just around BlackHat. So about 2 months. Oui.
Okay,
I think. James, correct me if I'm wrong. For VM, you do have to have the Plugin output search functionality enabled for those dashboards right?
James Davies
That's correct. So I think there's documentation, Doc Link, that I can find that we could search for..
If not, contact your customer success manager or open a ticket if you want specific instructions on how to do that.
So? Stuart asked. I can actually pull up SC, so to access this in SC there's a couple of different things. If you're not seeing this dashboard, make sure your feed is up to date within Security Center.
So we can click on, add dashboard. And if we get in a search for LLM we're able to pull that right up.
Luke Tamagna-Darr
Go ahead.
See? There's a question. Nessus agent will automatically get this info, since it's authenticated. Yes, so, Nessus agents, as long as you're running all of the plugins and you, you haven't disabled them…
Nessus agents will get any of the browser extensions local language library detections because it's privileged on the host. It will not get the remote detections.
James Davies
So there is a community post which I will pop in chat. This is how you can check to see if your Tenable products are up to date with plugins.
So, Josh, make sure you can see that if your plugins are having issues, please open up a support ticket.
Oui.
Luke Tamagna-Darr
Oui.And as James said, if you are still having trouble, please reach out to support because they can help you get things up and running.
James Davies
Can you obtain results by just by just by adding the LLM dashboard? Or do you have to toggle the switch in the settings?
So, you may see limited information if you do not enable the toggle within Tenable vulnerability management.
I believe, Luke, we are exploring options to not use the Plugin output as the final source for those dashboards. We'll be using dedicated Plugin detections for that.
See a dashboard update. Oh, sorry! Go ahead, Luke.
Luke Tamagna-Darr
Yeah, we're exploring our options. It's looking for the best path forward. And there is, you know, further work in VM. To expose this throughout the product.
James Davies
So, while Luke answers any other questions, I will send the document link for the Plugin. Search.
I did post the documentation link for the settings to enable the Plugin output search.
So, Stuart. This is automatically enabled in Security Center and Tenable SC. You do not need to enable anything. This is already done with Security Center.
Luke Tamagna-Darr
Cody asked, is there a specific scan that needs to be run, or a specific setting and scans that need to be enabled? Was setting the one setting in general settings all that's needed for the actual scans with Nessus and web application scanning?
If you're running scans with every plugin enabled, that should be all you need to do. There is a scan policy, a targeted scan policy for the AI plugins, which simply enables the Plugins in the AI Plugin family…
Again, you would for non-agent scans. You would need to set credentials and everything. But if you're running agents, or if you're running credentialed, NASA, scans with every plugin enabled, it should work out of the box.
James Davies
As Luke mentioned, when you create a new scan template, you'll see. Find AI right here. This will enable just the AI Llm. Plugins.
I did post the documentation link for the settings to enable the Plugin output search.
So, Stuart. This is automatically enabled in Security Center and Tenable SC. You do not need to enable anything. This is already done with Security Center.
Luke Tamagna-Darr
Alright, any last questions?
C'est bien compris ?Well.
Merci.Everyone. I think that should. That concludes it.
James Davies
Thank you all.
Intervenants

Lucas Tamagna-Darr
Senior Director of Engineering, Tenable

James Davies
Senior Security Engineer, Tenable