An Introduction to “Scan Everything”
A “scan everything” approach tests and triages every asset to understand your organization’s risk and how to reduce it quickly and efficiently.
At Tenable, we often must walk clients and potential clients through their rational objections to adding everything to their inventory and then testing everything they find. The concern is understandable – the “scan everything” approach is expensive, creates duplicate workloads, potentially increases false positives, and risks introducing accidental load that could cause a denial of service. I don’t want to discount these real and rational objections; however, in this blog post, I will explain why they are worth dispelling and why scanning everything is good security practice.
Anyone familiar with my background, as well as that of Jeremiah Grossman, Tenable security strategist and Bit Discovery founder, knows we spent a lot of time in the web application security trenches. I have personally performed countless penetration tests, and those tests surfaced a great many internet oddities through trial and error. Let me guide you through a few of them.
Disparities between machines
When you have two or more machines that are load-balanced (typically through round-robin DNS for the purposes of this example), there is always the possibility that one or more of them will have issues. Just think about it practically. Let’s say you must apply a patch manually for some reason. Are you going to apply it to every machine at the exact same millisecond? Unlikely. Rather, you’re going to have a slight disparity between the machines, even if only briefly. Even if you patch in an automated fashion, a small load on one machine might delay its patch by a few seconds.
Now amplify that disparity: someone forgets to install a patch on one machine, or one machine is misconfigured. Or worse, one machine gets hacked. If you only look at one machine, you leave a massive gap between your assessment and how your architecture is actually set up. Because you don’t distinguish between these machines and instead group them together, you are very likely to miss what we like to call “flappers” entirely. If you do catch one, it will be by luck, or only after the scanner happens to be handed the right IP by round-robin DNS. The sketch below shows one way to check each backend individually instead.
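As a minimal sketch, assuming a hypothetical round-robin load-balanced hostname (www.example.com stands in here), the following Python resolves every A record behind the name and fingerprints each backend separately. Differences in status codes or Server headers between the IPs are exactly the kind of disparity a single scan of “the site” would miss.

```python
import socket
import http.client

HOSTNAME = "www.example.com"  # hypothetical load-balanced hostname

# Collect every distinct IPv4 address the name currently resolves to.
ips = sorted({info[4][0] for info in socket.getaddrinfo(HOSTNAME, 80, socket.AF_INET)})

for ip in ips:
    # Connect to each backend directly, but send the shared Host header so the
    # web server routes the request the same way the load balancer would.
    conn = http.client.HTTPConnection(ip, 80, timeout=5)
    try:
        conn.request("HEAD", "/", headers={"Host": HOSTNAME})
        resp = conn.getresponse()
        server = resp.getheader("Server", "unknown")
        print(f"{ip}: HTTP {resp.status}, Server: {server}")
    except OSError as exc:
        print(f"{ip}: unreachable ({exc})")
    finally:
        conn.close()
```

If one backend reports a different server version, or returns errors the others don’t, that is your “flapper” – and you only see it because you tested every address rather than whichever one DNS handed you.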
Overly de-duped infrastructure
We often see that customers are wary of scanning the same machine over and over for fear of a denial of service, so they spend a lot of time trying to de-duplicate their environment. Take two fictional websites, dev.whatever.com and prod.whatever.com, where the HTML is a perfect match. Are they the same website? Well, one could argue that the code is identical, or at least will be identical whenever prod catches up with dev. But dev is almost always going to be out of sync with prod. Meanwhile, dev is not behind the cloud-based web application firewall for cost reasons, so it has minimal protection, next to no logging, and hasn’t received a patch since dinosaurs roamed the earth.
But is the dev environment any less dangerous? If it sits behind a traditional firewall and uses the same database username and password (just pointed at a differently named database, dev.db.int instead of prod.db.int), none of that will do much to stop an attacker who wants to pivot within your network. And if you aren’t scanning your external presence thoroughly, you almost certainly aren’t doing a good job internally. By eliminating “duplicate” scans such as this one, you lose a lot of visibility and obscure what’s really happening in your environment. The sketch below illustrates why matching HTML alone is a poor deduplication signal.
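Here is a minimal sketch, reusing the fictional dev.whatever.com and prod.whatever.com hostnames from above, that compares two sites both by hashing their HTML and by looking at response headers that hint at the underlying stack. Two responses can be byte-for-byte identical in HTML and still come from very differently configured, and differently patched, servers, which is why matching markup is a weak reason to drop an asset from scanning.

```python
import hashlib
import urllib.request

# Fictional hostnames from the example above; swap in real ones to try it.
SITES = ["https://dev.whatever.com/", "https://prod.whatever.com/"]

def fingerprint(url):
    """Return a hash of the HTML body plus a few stack-revealing headers."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        body_hash = hashlib.sha256(resp.read()).hexdigest()
        stack = {h: resp.headers.get(h, "-") for h in ("Server", "X-Powered-By")}
    return body_hash, stack

prints = {url: fingerprint(url) for url in SITES}
dev, prod = (prints[url] for url in SITES)

print("Identical HTML:", dev[0] == prod[0])
print("Identical stack headers:", dev[1] == prod[1])
for url, (body_hash, stack) in prints.items():
    print(url, body_hash[:12], stack)
```

When the first check says “identical” and the second doesn’t, you are looking at two distinct assets that happen to share markup – exactly the case where de-duplication quietly removes the riskier of the two from your scans.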
Ignoring assets that “don’t matter”
Determining what matters is a very subjective exercise. No one knows what matters for sure until it’s too late. You may have a pretty good inkling of the things referred to as “alpha assets,” the assets that provide the bulk of your income. But if you spend all your time focused on the assets you know are important while overlooking your long-tail assets, you will likely learn the true value of those overlooked assets only after an attacker does. For example, Equifax found out its dev site existed too late to patch Apache Struts. Sands Casino found out its dev site was a conduit to production. And Verizon found out its router was publicly accessible only after the router had been released to millions of customers. Simply put: overlooking long-tail assets can increase your cyber risk.
Using application security scanners in lieu of asset management
Different scanners are good at different things. I wouldn’t use a port scanner to find web app issues, and vice versa. Likewise, if you are only scanning a small percentage of your network, you are likely missing a huge number of assets when a zero-day comes out. Sure, that zero-day exploit may only affect a small number of the machines you are testing, but how many machines that you are not testing suffer from the same flaw? How could you know unless you are looking at everything, all the time? I would never claim that asset management is a substitute for a vulnerability scanner or a penetration test, but likewise, I wouldn’t say that either of those is good at finding issues in assets that aren’t under contract.
“You cannot accurately quantify your management of corporate risk when you have no idea what that risk is.” — Jeremiah Grossman
So yes, it’s expensive to find, test, and triage all your assets, but doing so means you finally have some idea of what the risks are and how to prioritize them so you can respond appropriately. The answer, therefore, is to find less expensive ways to audit more, NOT to audit less. If you don’t test your environment thoroughly, you have no idea how secure it is. Said another way, you cannot accurately quantify your management of corporate risk when you have no idea what that risk is.
Visit the Tenable.asm product page to learn more about attack surface management.
Related Articles
- Asset Management
- Attack Surface Management