The Right Way to do Attack Surface Mapping
The key to mapping out your attack surface accurately is to scan all of your organization's assets, develop an asset inventory list and find shadow IT.
We often hear about the “wrong way” to map your attack surface, and that’s important: we need to know what to avoid. But what about how to do it the right way? Honestly, that’s easier said than done. Mapping your attack surface is a massive undertaking with many moving pieces, and it can be an overwhelming process. Let’s dig into some concrete strategies for getting started mapping out your attack surface correctly.
The answer: Start with everything
There are many costly or broken ways to perform attack surface mapping, such as crawling and passive DNS (which, unfortunately, are used in a wide variety of commercial products). Step one is to start with everything.
“Everything” sort of sounds like a joke, but it’s not. To get the maximum resolution of information, we need to start with every IP address, every hostname, every port, every website URL, every whois record, every ASN name, etc. Everything!
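To make “everything” a little more concrete, here is a minimal sketch of what a single record in such an inventory might look like. The field names and sample values are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative only: one possible shape for an "everything" asset record.
# Field names here are assumptions for the sketch, not a prescribed schema.
@dataclass
class AssetRecord:
    hostname: Optional[str] = None        # e.g. "app.example.com"
    ip_address: Optional[str] = None      # e.g. "203.0.113.10"
    open_ports: list[int] = field(default_factory=list)
    urls: list[str] = field(default_factory=list)
    whois_registrant: Optional[str] = None
    asn: Optional[int] = None
    asn_name: Optional[str] = None
    first_seen: Optional[str] = None      # ISO-8601 timestamp
    last_seen: Optional[str] = None

# A single host observed from several independent data sources:
record = AssetRecord(
    hostname="app.example.com",
    ip_address="203.0.113.10",
    open_ports=[443, 8443],
    whois_registrant="Example Corp",
    asn=64500,
    asn_name="EXAMPLE-CORP",
)
```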
An enormous undertaking
Talk about an enormous undertaking – and that is why it is rarely done. It is difficult and expensive to collect the data, difficult to parse it, difficult to correlate it, and difficult to present it in a useful way. Many companies attempt to complete the audit themselves but shortcut the “getting everything” part: they gather the assets they already appear to own and miss important assets needed to map their attack surface. Next, the metadata must be logged and compared across the entirety of the internet. Correlating the data is costly and time-consuming to perform on a one-off basis; however, if it is done continuously for every asset on the internet, the resulting data can be queried quickly. Further, comprehensive data analytics tools make it easy to correct false negatives (missing a company that was purchased yesterday) and false positives (still listing a company you sold yesterday).
What are the advantages?
The two advantages of this setup are time and accuracy. Querying a system that is constantly performing analysis across the entire internet typically takes a few seconds or minutes. In comparison, systems that run ad-hoc tests tend to be extremely slow and can take weeks or months.
Additionally, correlating data ahead of time gives you more accurate results. Rather than correlating the small slice of seed data typically found in asset inventory designs, you get to correlate all of your data. Taking a holistic approach to correlation makes it easier to whittle everything down to the things you actually own.
Finding shadow IT
Next, to bring your attack surface into sharper focus, you’ll need to take stock of the shadow IT lurking in your environment. There are many ways to find shadow IT. One way is to simply look under someone’s desk. I have seen security experts run large-scale network analyses to identify MAC addresses of machines that should not be there, or use the 802.1X network authentication protocol to identify assets that cannot connect because they lack valid certificates.
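As a rough illustration of that kind of LAN-side check, the sketch below flags MAC addresses observed on the network that do not appear in a known-asset list. The sample values and data sources named in the comments are assumptions made for the example.

```python
# Rough sketch: flag MAC addresses seen on the LAN that are not in the
# known-asset inventory. The sample values are invented; in practice the
# known list would come from your asset inventory/CMDB export and the
# observed list from ARP tables, switch CAM tables or a discovery scan.

def normalize(mac: str) -> str:
    """Lowercase and unify separators so formats compare consistently."""
    return mac.lower().replace("-", ":")

known_macs = {normalize(m) for m in ["00:1A:2B:3C:4D:5E", "00:1A:2B:3C:4D:5F"]}
observed_macs = {normalize(m) for m in ["00:1a:2b:3c:4d:5e", "de:ad:be:ef:00:01"]}

for mac in sorted(observed_macs - known_macs):
    print(f"Unknown device on the LAN: {mac}")
# -> Unknown device on the LAN: de:ad:be:ef:00:01
```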
However, if you want to find assets at scale that cross the boundary between your controlled LAN and the wild west of the public internet, an Easter egg hunt under people’s desks simply is not possible. Nor is searching the places you usually look. These are the “unknown unknowns” people talk about: if you’re always searching in the same places, you’ll find the same insights and, likewise, miss the same assets. Worse yet, more and more assets are moving outside the corporate LAN and into cloud-based SaaS, increasing the danger of shadow IT.
Taking this manual approach to finding shadow IT is difficult, extremely time-consuming and error-prone. Ultimately, because of human error and the technical limitations of pivoting, you’ll miss a lot of critical issues. Marketing teams with their own budgets, and dev teams spinning up their own infrastructure, are prime examples of why pivoting is of limited utility.
The better alternative is to create a list of all potential shadow IT assets and use it to zero in on things that might be correlated, by attaching metadata to each asset and comparing that metadata. The simplest example would be looking for two domains with the same name but different top-level domains (for instance, “example.com” and “example.net”), as sketched below. But that is only possible if you know every domain. Metadata is not uniform, though; some machines will have open ports and others will not. Some machines will have websites and some will not. You get the drift.
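Here is a minimal sketch of that simplest example: grouping candidate domains that share a base name across different top-level domains. The candidate list is invented, and a naive split stands in for proper public-suffix parsing (e.g. the tldextract library), which a real implementation would need.

```python
from collections import defaultdict

# Minimal sketch: group domains that share a base name across different TLDs.
# A naive split on the first label stands in for public-suffix-aware parsing,
# which you would want in real use; the candidate list is invented.
candidate_domains = [
    "example.com", "example.net", "example.io",   # plausibly the same owner
    "othersite.org",
]

by_base = defaultdict(list)
for domain in candidate_domains:
    base = domain.split(".")[0]
    by_base[base].append(domain)

for base, domains in by_base.items():
    if len(domains) > 1:
        print(f"Possible correlation on '{base}': {domains}")
# -> Possible correlation on 'example': ['example.com', 'example.net', 'example.io']
```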
Real-time attack surface mapping
Now, when two or more things start to look the same, it’s possible to link them together. However, you cannot build up a list of assets from scratch. You must start with everything and narrow it down from that comprehensive list to find what correlates. That is part of why asset management is so challenging if you want to do it well. It’s impossible to reliably do it yourself unless you know every piece of metadata for every asset everywhere and can build correlations based on that knowledge.
You need an up-to-date asset inventory list that can be queried in real time. The asset list should be based primarily on the domain name system (DNS) and secondarily on IP/ASN/brand/etc. You cannot find shadow IT through real-time analysis alone unless there are already other linkages that point toward that domain. You need to find and use every form of metadata you can to build up a massive data lake from which to derive those correlations.
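To illustrate why pre-built correlation makes real-time queries cheap, here is a toy sketch in which metadata such as WHOIS organization and ASN name is indexed ahead of time, so finding everything linked to a seed asset becomes a simple lookup. The sample data and index structure are assumptions for the example, not a description of any particular product.

```python
from collections import defaultdict

# Toy sketch: a pre-built inverted index from metadata values to assets.
# In practice this would be populated continuously from internet-wide
# collection; the sample data below is invented for illustration.
metadata_index = defaultdict(set)

def ingest(asset: str, metadata: dict) -> None:
    """Index every (field, value) pair pointing back to the asset."""
    for key, value in metadata.items():
        metadata_index[(key, value)].add(asset)

ingest("example.com", {"whois_org": "Example Corp", "asn_name": "EXAMPLE-CORP"})
ingest("shadow-app.net", {"whois_org": "Example Corp", "asn_name": "SOMECLOUD"})
ingest("unrelated.io", {"whois_org": "Other Inc", "asn_name": "SOMECLOUD"})

def correlated(seed_asset: str, seed_metadata: dict) -> set[str]:
    """Real-time query: every asset sharing at least one metadata value."""
    hits = set()
    for key, value in seed_metadata.items():
        hits |= metadata_index[(key, value)]
    hits.discard(seed_asset)
    return hits

print(correlated("example.com", {"whois_org": "Example Corp", "asn_name": "EXAMPLE-CORP"}))
# -> {'shadow-app.net'} : linked through the shared WHOIS organization
```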
An approach like this is an expensive proposition, and that expense is the main reason traditional open-source intelligence (OSINT) application- and domain-discovery tools are rarely as thorough as a more comprehensive (and costly) method when discovery is performed on large enterprises. Small companies may have better luck, sure, but small companies can probably inventory their assets thoroughly in 100 different ways. When the company starts growing, or when it is not your company but a vendor, partner or customer, you realize those tools simply are not the right answer if being thorough is important.
A holistic asset map cannot be cobbled together on a shoestring. It takes a company like Tenable to drive down the cost on a per-customer basis. We strive to make our technology invisible and seamless for our users. Accurate attack surface mapping comes down to reducing hidden costs and improving time to value.
Visit the Tenable.asm product page to learn more about attack surface management.