For many years, we have been using vulnerability scanners to identify security weaknesses and flaws in our internet-facing environments. A vulnerability scan is an automated process, and it is critically important for organizations to see which vulnerabilities they have and which of them attackers could use against them.
Despite this success, widespread use, and importance, vulnerability scanning has some gaps when it comes to managing an organization's attack surface.
Gaps in Vulnerability Scanning
Vulnerability scanning is an active process, so it is risky to use on sensitive assets: it can cause slowness and even interruption. For critical systems that need 24×7 operation in particular, it poses a technical risk to availability. This is the biggest hurdle to running the scanning process continuously. The other is the technique behind vulnerability scanning itself: because it is an active process, it takes too long across a wide range of networks. As a result, the scan sometimes cannot be completed in the desired time, the schedule slips, and assets go unscanned for long periods. A significant risk remains for vulnerabilities that appear in the window between two scans, and for machines newly installed in the DMZ.
For scanning, a scope is determined, and this scope is scanned periodically. This leads to Shadow IT: services that are out of scope for various reasons, or hosted at third-party institutions, remain unscanned. Shadow IT is a very big problem, especially for large organizations.
Vulnerability scanning is a process that focuses only on identifying vulnerabilities. Attackers, however, may use many other findings about the organization, such as misconfigurations, compromised accounts, and third-party connections.
What Does Attack Surface Management Do Differently?
There are several tools already on the internet that we can use to check information about our organization: passive subdomain tools, port scanners, certificate and DNS record checkers, and so on. Some of them are free and some are paid services.
These services are also used by attackers during the discovery phase before an attack, so it is very important to check the findings from these tools at frequent intervals. However, there are many such tools, and it seems impossible to check all of them every day, for every keyword of the organization. Keyword here means every term an attacker might search for in the discovery phase, including brand names, applications, web sites, VIP names, and others.
Fundamentally, Attack Surface Management is a passive scanning process that uses these free or paid tools already hosted on the internet. Passive scanning makes a continuous scanning process possible. Using the keywords described above, an Attack Surface Management tool queries all these sources continuously, and since the scan is passive, there is no risk to the availability of the systems.
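To make the idea concrete, here is a minimal sketch of one such passive source: certificate transparency logs, as exposed by the free crt.sh service, list every certificate issued for a domain and thereby reveal subdomains without touching the target. The sample payload below is made up for illustration; a real run would fetch the JSON from crt.sh instead, and the "name_value" field name follows crt.sh's JSON output format.

```python
import json

def extract_subdomains(crtsh_json: str, domain: str) -> set:
    """Deduplicate hostnames from a crt.sh-style JSON response.

    Each entry's "name_value" field may hold several newline-separated
    names; wildcard prefixes are stripped before deduplication.
    """
    names = set()
    for entry in json.loads(crtsh_json):
        for name in entry.get("name_value", "").splitlines():
            name = name.strip().lower().lstrip("*.")
            if name.endswith(domain):
                names.add(name)
    return names

# Illustrative payload only -- a real run would fetch
# https://crt.sh/?q=%25.example.com&output=json instead.
sample = json.dumps([
    {"name_value": "www.example.com\n*.app.example.com"},
    {"name_value": "mail.example.com"},
    {"name_value": "WWW.EXAMPLE.COM"},
])
print(sorted(extract_subdomains(sample, "example.com")))
```

An ASM product effectively runs dozens of collectors like this, one per public source, and merges the results into a single asset inventory.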
Third Party Connections
With developing technology and methods, every company needs to work with many other companies and maintains direct connections with them. The number of these links is increasing day by day. This creates a great opportunity for attackers: in recent years, attacks over third-party connections have increased, and they mostly succeed. Risk management teams therefore need to monitor third-party risks closely.
Very understandably, organizations normally do not let others scan their environments, so it is not possible to scan a third party to determine its vulnerabilities before establishing a connection with it. We are left relying on the company's own tests and some paperwork before making the third-party connection. With Attack Surface Management, identifying their risks before creating the connection can be very fast and risk-free.
What to Expect from Attack Surface Management?
To cover all internet-facing risk, security teams need to master all of the assets they have, both those hosted in their own institutions and those hosted by third parties. An Attack Surface Management tool should first show all the assets the organization has. These may include subdomains, IP addresses, DNS records, cloud environments, certificates, URIs, and so on, as well as all the technologies the organization uses in its DMZ, such as the number of web servers, applications, and operating systems.
All these findings are critical for Shadow IT: systems that are forgotten and left unpatched for a long time pose big risks to organizations.
The other important output, and maybe the most important, is the vulnerabilities of the systems. Attack Surface Management tools use services like Shodan to investigate vulnerabilities, and unlike a vulnerability scan, they may also surface misconfigurations such as forgotten default IIS or Apache web pages and default admin panels. Missing configurations, such as absent DMARC records or stale A records, can also be important seeds for attackers in their discovery phases.
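As a small illustration of the DMARC case, here is a sketch of the check a passive scan might apply. In practice the record would come from a DNS TXT lookup of _dmarc.&lt;domain&gt;; the classification labels below are my own, not a standard taxonomy.

```python
def check_dmarc(txt_record):
    """Classify a DMARC TXT record the way a passive scan might.

    A missing record, or a p=none policy, leaves the domain open to
    spoofing -- exactly the kind of seed attackers look for.
    """
    if not txt_record or not txt_record.startswith("v=DMARC1"):
        return "missing"
    # DMARC records are semicolon-separated tag=value pairs (RFC 7489).
    tags = dict(
        part.strip().split("=", 1)
        for part in txt_record.split(";")
        if "=" in part
    )
    return "weak" if tags.get("p", "none") == "none" else "enforced"

print(check_dmarc(None))                                  # missing
print(check_dmarc("v=DMARC1; p=none; rua=mailto:a@b"))    # weak
print(check_dmarc("v=DMARC1; p=reject"))                  # enforced
```

A real collector would run this across every domain in the asset inventory and raise a finding for anything not "enforced".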
As noted above, it is critical to scan all of these assets often, preferably every day. Both the ability to scan daily and the duration of each scan are very important.
The number of tools an Attack Surface Management product uses in its passive scans is also very important. Do not forget that the main purpose here is to replicate the attacker's discovery phase: a vulnerability or misconfiguration that you cannot detect in these scans, but that an attacker can find, can lead to very bad results.
For risk management processes, all tasks, including vulnerabilities and misconfigurations, should be followed up regularly. To support this, like other vulnerability management solutions, Attack Surface Management tools should integrate with the organization's ticketing systems. Via this integration, tasks can be assigned to engineers and tracked in a single system.
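Such an integration usually boils down to mapping each finding onto the ticketing system's fields. The sketch below uses hypothetical field names ("summary", "priority", and so on); a real integration would target a specific API, such as Jira's issue-creation endpoint, and post the payload over HTTP.

```python
import json

def finding_to_ticket(finding):
    """Map an ASM finding to a generic ticket payload.

    Field names here are illustrative, not any vendor's actual schema.
    """
    severity_to_priority = {"critical": "P1", "high": "P2",
                            "medium": "P3", "low": "P4"}
    return {
        "summary": "{} on {}".format(finding["type"], finding["asset"]),
        "description": finding.get("detail", ""),
        "priority": severity_to_priority.get(
            finding.get("severity", "low"), "P4"),
        "labels": ["attack-surface-management"],
    }

ticket = finding_to_ticket({
    "type": "Default IIS page exposed",
    "asset": "203.0.113.10",
    "severity": "medium",
})
print(json.dumps(ticket, indent=2))
```

Once findings flow into tickets automatically, the daily scan cadence argued for above becomes practical: engineers work from one queue instead of reading raw scan reports.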