Redline is a free tool for investigating malicious activity through memory and file analysis. It has many investigation features, but in this post we will cover only searching for IoCs on an endpoint with Redline.
In a previous post, we created an IoC to detect WinSCP.exe. Now we will search for it with Redline as an example.
We start from the "Create an IOC Search Collector" menu on Redline's main page. To do this, we browse to the folder containing the IoCs we want to search for on the PC. We have only one IoC here, but if you have more IoCs in the folder, you will see all of them on the "Indicators" tab.
Then we create a folder for the IoC Collector and, after clicking the "Next" button, select this folder. Redline creates the IoC Collector in it. We will now run the RunRedlineAudit.bat file from the command line. Once the bat file finishes running, it creates a folder called "Sessions" in the same directory and saves its output there.
Just run the RunRedlineAudit.bat file and wait for it to finish, then open the "Sessions" folder. Each IoC sweep is placed in its own folder called "AnalysisSessionX". This was our first sweep, so we open the "AnalysisSession1" folder. Our IoC report is in the "AnalysisSession1.mans" file; when we click on it, it takes some time to generate the report.
Once the IoC report is generated, we can view it in Redline on the "IOC Reports" tab. As you can see in the screenshot, our WinSCP Indicator IoC got hits. When we click on a hit, we can see why the IoC matched: here, our IoC caught the file by its MD5 hash value and its file name. Clicking the "View Details" button shows more details about the hit.
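The match logic behind such a hit can be sketched in a few lines of Python. This is only an illustration of how a name-plus-hash IoC fires, not Redline's actual implementation, and the values passed in are placeholders:

```python
import hashlib
from pathlib import Path

def md5_of(path):
    """Compute the MD5 digest of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def ioc_hit(path, ioc_name, ioc_md5):
    """Report a hit only when BOTH the file name and MD5 match the IoC,
    just as our WinSCP indicator matched on both attributes."""
    p = Path(path)
    return p.name.lower() == ioc_name.lower() and md5_of(p) == ioc_md5.lower()
```

A sweep tool walks the disk and calls a check like this for every file; a hit on both attributes is what produced the "WinSCP Indicator" entry in the report above.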
Effective threat hunting is critical because it is hard to think like an attacker and to search for the unknown in an enterprise network. This post may help organizations perform effective and successful threat hunting.
Knowledge of Topology and Environment
The purpose of threat hunting is to find anomalies and their sources on the network and endpoints. A threat hunter should therefore know what is normal, so they can recognize what is not.
From a risk management point of view, critical assets – servers, applications, data – should be known so they can be protected more effectively. With knowledge of the environment, the threat hunter knows the critical assets and hunts accordingly. If the network is segmented, it is also critical to know the network topology and the networks – or VLANs – where these critical assets reside.
It is also necessary to know which application runs on which operating system, so the threat hunter knows the weaknesses of each system and can search accordingly.
Effective Endpoint Management
The most used tools for threat hunting are EDRs. Organizations should make sure that endpoint security tools are installed on all endpoints and should detect when they are removed or stopped. Asset management is more than a CMDB; it must be managed by security teams who understand how critical it is when an endpoint lacks its security tooling.
Threat intelligence is one of the most important feeds for threat hunting. Threat hunters need the most recent intelligence and IoCs so they can hunt the latest threats. Many malicious files are produced and detected every day, which creates a lot of noise in the intel. To avoid this noise, hunters should obtain intelligence relevant to their organization's sector and geolocation, and integrate these IoCs with SIEM, EDR, and other tools.
We all know that attackers constantly create new tactics and techniques. The most important reason for this is that while security professionals in organizations have to deal with many different things, attackers can focus solely on their target. Even with valuable, experienced personnel, if they are pulled into other organizational duties while working, it will be difficult for them to think like a hacker and detect the unknown in the network. Threat hunters should focus on their mission to build their methodology and hunt. So, there should be dedicated personnel for threat hunting.
Coordination Across the Organization
Yes, threat hunters have an unusual mission – think like a hacker and search across the network – but they must not work alone. A threat hunter should have good relationships with key personnel in IT departments such as network and system admins, help desk personnel, and so on. With these relationships, they better understand the network, the systems, and, more importantly, how the company and its people do business. From the organization's perspective, when a threat hunter finds a weakness during the hunting process, they inform the relevant IT personnel for remediation. This teamwork pays off in the remediation phase of incidents and weaknesses.
Intelligence is critical for hunting known threats, but hunters should also be familiar with attackers' TTPs to counter zero-day threats, and should stay aware of updated or newly emerging TTPs. Only with this knowledge can hunters act like an attacker. TTPs sit at the top of the Pyramid of Pain (defined by David Bianco).
To uncover anomalies or malicious activity, threat hunters should use advanced tools such as EDR, NDR, SIEM, FIM, etc. These tools will help the hunter find abnormal activities if configured properly. In other posts, we tried to explain why they should be used.
Although threat hunting is nothing new, it has been a very hot topic lately. Even if you have perimeter and endpoint security devices and a SIEM collecting and correlating their logs, waiting for incidents to come to you is not a good strategy. Without threat hunting, dwell time can grow beyond 150 days, and that is no longer acceptable. While attackers work proactively and develop new techniques day by day, security teams need to be more proactive too. Threat hunting is the most proactive approach in an organization's security structure and improves its security posture.
Dwell time is the dirty metric nobody wants to talk about in cyber security. It signifies the amount of time threat actors go undetected in an environment, and the industry stats about it are staggering. (Source: ExtraHop)
In its simplest definition, threat hunting is detecting abnormal activities on endpoints and the network. But what should we look for when hunting threats?
What to Hunt?
Threat hunting is a continuous process. Hunters should check anything that could be evidence of an incident.
Processes: Processes are important components of operating systems. Adversaries may inject malicious code into hijacked processes. Therefore, hunters should check processes and their child processes regularly.
Binaries: Hunters should check binaries by their checksums, names, and other attributes.
Network: Network activity to specific destinations, and anomalies in the network, should be checked.
Registry: Hunters should check registry key additions and modifications.
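As a toy illustration of the process check above, a hunter can flag parent-child process pairs that rarely occur legitimately, such as an Office application spawning a shell. The pair list below is a small illustrative sample, not a complete detection rule set; a real hunt would build it from the organization's own baseline and intelligence:

```python
# Illustrative parent-child pairs that rarely have a benign explanation.
SUSPICIOUS_PARENT_CHILD = {
    ("winword.exe", "cmd.exe"),
    ("winword.exe", "powershell.exe"),
    ("excel.exe", "wscript.exe"),
    ("outlook.exe", "powershell.exe"),
}

def flag_suspicious(process_events):
    """process_events: iterable of (parent_name, child_name) tuples
    collected from endpoint telemetry. Returns the matching events."""
    return [
        (parent, child)
        for parent, child in process_events
        if (parent.lower(), child.lower()) in SUSPICIOUS_PARENT_CHILD
    ]
```

For example, `flag_suspicious([("WinWord.exe", "cmd.exe")])` flags the event, while an `explorer.exe` launching a browser passes quietly.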
For continuous hunting, organizations need threat hunters in their CSIRT. The difference between analysts and threat hunters is the proactive approach, as mentioned before. In smaller organizations, SOC analysts may also work on threat hunting, but a threat hunter actually needs a broader skill set than an analyst. In larger organizations, it is important to have a dedicated threat hunting leader and team. This team should have detailed knowledge about:
OSs: The threat hunting team should know the operating systems the organization uses. This knowledge must include process structures, files, permissions, and the registry, depending on the OS. This is important because malicious files and attackers make their changes there. A threat hunter needs to understand what is normal and what is not; something abnormal could be a sign of an intrusion. To build this knowledge, baselines should be created for all critical systems. These baselines help distinguish the normal from the anomalous.
Apps: Threat hunters should know the applications used in the organization, including the perimeter and endpoint security devices and applications the organization relies on.
Business: The threat hunting team should understand the organization's business, so they can follow adversaries targeting the organization's sector and geographical location. It is also important to know the third-party companies the organization works with and the communication channels used with them.
Network: In a big and segmented network structure, it is important to know where the critical assets are.
TTPs: IoCs are important components for hunting, but they only detect "known knowns". TTPs sit at the top of the Pyramid of Pain (defined by David Bianco), and hunters should especially know the tactics and techniques of the adversaries threatening the organization's sector and location.
Threat Hunting Tools: The CSIRT plan should specify which tools and techniques can be used for threat hunting. Threat hunters should know these tools and techniques well.
IR&H Plan: Threat hunting is only one step of a proactive approach. If threat hunters find an intrusion or anomaly in a system, they need to know the next step: Who should they inform? What should be done?
Threat Intelligence: Threat intelligence is one of the most important feeds for threat hunting. Threat hunters need the most recent intelligence and IoCs so they can hunt the latest threats.
EDR: Threat hunters need IoCs, but they also need to know how to use them. After gathering the most recent IoCs from TI platforms, an IoC sweep must be run on the endpoints.
NDR: Just like endpoints, network traffic needs to be checked with the latest IoCs. To do this, the CSIRT needs to collect all east-west and north-south network traffic. NDR devices with AI capabilities can also detect anomalies in the network.
SIEM: Depending on the hunt's scope, the threat hunter may need to check IPS/IDS, proxy, DNS, firewall, or other tools' logs. Because logs come from different sources, the CSIRT needs to collect and correlate them in a SIEM; fed with the latest IoCs, these logs become much more meaningful.
FIM: We said that baselines must be created for critical systems. FIM solutions help the CSIRT create baselines for operating systems and alert analysts when an unauthorized change is made.
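The baseline idea that FIM tools automate can be sketched in Python: record a hash for every file on a critical system, then diff a later snapshot against it. This is a minimal illustration under simplified assumptions (it hashes whole files in memory and ignores permissions and ownership), not a substitute for a real FIM product:

```python
import hashlib
from pathlib import Path

def build_baseline(root):
    """Map every file under root to its SHA-256 digest."""
    baseline = {}
    for path in Path(root).rglob("*"):
        if path.is_file():
            baseline[str(path.relative_to(root))] = hashlib.sha256(
                path.read_bytes()
            ).hexdigest()
    return baseline

def diff_baseline(old, new):
    """Return files that were added, removed, or modified since old."""
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    modified = sorted(f for f in old.keys() & new.keys() if old[f] != new[f])
    return {"added": added, "removed": removed, "modified": modified}
```

Running `diff_baseline` on a fresh snapshot of, say, a server's system directory immediately surfaces the unauthorized changes that would otherwise require a manual hunt.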
Last week, a friend called me with some bad news about a company. The company was looking for help: they had become a victim of Egregor ransomware and were trying to figure out what to do, since the attacker had taken all their data, encrypted it, and given them three days to pay 500k dollars. The attacker threatened to publish their data publicly within three days. The problem was not only that all their data would be published, but also that they had lost all of it. How could this happen?
Ransomware has been the biggest problem of the cyber world for some years now. We have heard about it, worked on it, and seen far too many bitcoins paid over these years. There are tens (or maybe hundreds) of webinars, talks, and articles trying to help people stay safe from ransomware. While the weakest link is human, it is always possible to be hit by ransomware, but it is not too difficult to confine it to a small area.
The company I mentioned above was a chemical company and of course held a lot of private data, such as formulas. When I say they lost all their data, I mean they lost their backups too: since they had not isolated their backup network, their backups were also encrypted. They did have some backup tapes, but could not use them because they had never tested whether the tapes worked, and of course they did not work when the company needed them.
There are some basic prevention steps against ransomware. Briefly: user awareness and regular phishing tests; not only an anti-spam product but also a sandbox or similar technology against malicious emails; EDR to respond faster to malicious behavior; NDR to spot anomalies in the network; backing up data and testing those backups regularly; isolating the backup network so infiltrating attackers cannot harm backups; isolating private data and applying need-to-know; limiting users' internet access; and more. The list seems long, but most of it does not require much expenditure. If you do not invest in professionals and in appropriate technology, you are simply squandering your money, and you can never recover lost reputation, secret formulas, or data.
All these measures can cost a lot, and I can understand if a company cannot invest in all of them. But as I said above, this company's backup network was not isolated and could be accessed from all other networks. And, as I learned, they used only an antivirus, which was not up to date, and I am sure they did not track whether all PCs and servers even had it. Most of these measures are not expensive. At minimum, every company needs to invest in talented security professionals, precisely in order to save money. I think none of these measures costs more than 500k dollars plus lost reputation plus publicly published private data. Investing in security is not wasting money; it is directly saving money. Everyone needs to understand this without living through it.
With the growth of remote work during Covid-19, clients are now more independent, with laptops and mobile phones used for both personal and organizational purposes. Not only workers, but also computers and data are now outside the organization and outside most protection layers such as firewalls and IPS. Vulnerabilities and attacks continue to surface, but remote users' connections and VPNs also make it harder for IT teams to patch and protect clients.
Host-based firewalls help security professionals in many ways. First, they control incoming and outgoing traffic, making them a very important defense layer. For example, security pros may want to block all inbound connections to the client host initiated from outside. This is a very basic protection against some kinds of malware.
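The "block inbound by default" policy described above can be illustrated with a minimal first-match-wins rule evaluator. Real host firewalls (Windows Defender Firewall, iptables, and the like) are far richer; this Python sketch only shows the decision logic, and the rule fields are simplified assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    direction: str       # "in" or "out"
    port: Optional[int]  # None matches any port
    action: str          # "allow" or "deny"

def evaluate(rules, direction, port):
    """Return the action of the first matching rule.
    Inbound traffic with no matching rule is denied by default;
    outbound traffic is allowed by default."""
    for rule in rules:
        if rule.direction == direction and rule.port in (None, port):
            return rule.action
    return "deny" if direction == "in" else "allow"
```

With `rules = [Rule("in", 3389, "allow")]`, only inbound RDP is permitted; every other inbound connection falls through to the deny default, which is exactly the basic posture described above.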
Some sensitive applications, like SWIFT, should be isolated from the rest of the organization's network. But sometimes creating VLANs for a few PCs or servers is not an effective solution. In situations like that, host-based firewalls are true saviors: these sensitive PCs and servers can be isolated with host-based firewalls. Since most antivirus agents also include a host-based firewall feature, this solution becomes easier and more practical.
Meanwhile, organizations should not make this a habit. In large organizations with thousands of PCs and servers, isolation via host-based firewalls can easily turn into a nightmare, since hundreds of rules have to be written. If these rules are not maintained properly, especially with turnover in the administration team, a tangle of rules can harm PCs and the network more than it protects them. Every rule added for each small group makes the rule list more complex, and mistakes become more likely when adding new rules.
So, even though host-based firewalls are important and valuable, they should be used as needed and should not become the default answer to every isolation scenario.
Last week, I heard that an organization did not include an antivirus agent in their PC image. They format the PC with their image, then connect it to the network and wait for SCCM to install the antivirus software. Also, for remote users working in the field, contracted partners format the PCs since these users cannot come to the company; the users then join the network via VPN and keep working. Meanwhile, the IT team waits for SCCM to install the antivirus software, but over the VPN it fails most of the time, so the PC keeps working on the network for days without protection.
While I was sharing this situation with some friends in the industry, some of them said this is a normal process in many organizations. So, I wanted to write this article.
A few months ago, I shared a post about the fall of AV. It is true that AV software has not been very efficient in recent years, and many other measures are needed to protect endpoints. However, most of those measures target APT attacks. As everyone says, and as I touched on in that article, attackers' profiles and techniques have changed a lot since the times AV was popular and successful. Despite all this, nobody can say that AVs are no longer necessary. Organizations do not face only attackers using highly advanced techniques and tools; there are still many script kiddies and people trying to learn hacking, always looking for easy vulnerabilities. There is a very good chance they will find you.
Another point about AV: thanks to the signature databases it downloads, it can protect users from many malicious events even while they are offline or not connected to the office.
Moreover, most AV software is improving with behavioral and AI capabilities, so it can also detect and stop some phases of APT attacks.
I am curious about your comments, but my opinion is that AV is still indispensable for all organizations. So, I want to list some best (must) practices for using AV in an organization:
– AV software should be installed on all devices. Clients should be checked periodically to confirm AV is present. If possible, a NAC solution should be deployed and PCs without AV should be blocked.
– The AV solution should be centrally managed, so updates can be handled centrally and out-of-date clients can be tracked.
– Administrators should make sure all clients are sending logs properly. This is very important for responding quickly to a suspicious situation.
– AV software should be updated periodically. Administrators should also confirm that all clients are receiving the latest updates properly.
– AV software should be included in the standard PC and server images. When a PC is formatted and re-installed, it should have AV before connecting to the network.
– Users should not be able to disable the AV services and agent. Tamper protection and an uninstall password should be used, and the password stored in a password management system.
– Malicious files should be blocked and quarantined for analysis by the administrators.
– Audit logs should be collected properly. Administrators should log in to the software only with their own usernames; generic usernames should not be used.
– Exceptions should be kept to a minimum. When needed, exceptions should be granted only as specified by the vendors.
– If included, host-based IDS should be enabled on the AV agent.
All IT professionals know that most cyber attacks begin with an email. In fact, according to the PhishMe Defense Guide 2017, 91% of cyber attacks began with an email. This is no surprise, since we all know the human is the weakest part of cyber defense. If users lack awareness – which may of course be the IT professionals' fault – then, especially with today's carefully crafted phishing emails, they can easily download malicious content or have their credentials stolen. These phishing attacks make it easier for cyber criminals to breach an organization than scanning websites for vulnerabilities and applying complex techniques for the same gain.
As noted at the beginning, most successful attacks begin with phishing emails. Attackers may send malicious content directly via email, or a link leading to a phishing site, a malicious download, or a C2 server. Sometimes, to bypass security devices, attackers initially leave the link's destination empty, then add the malicious content later, so users can download it once the link has passed the security controls.
Traditional signature-based or reputation-based email security controls cannot stop these types of attacks. Signature-based controls cannot stop 0-day threats, and criminals use unique malware, URLs, and phishing sites to bypass these signature-based mechanisms.
Most antispam solutions work this way, bundling an antivirus engine into the product. Even though it is not enough on its own, spam is still a very big problem for organizations, since more than 90% of emails reaching an organization are spam. So, when choosing an email protection solution, the antispam feature is one of the most important capabilities to check. If you do not stop known spam, it will be very difficult to combat more sophisticated email attacks while drowning in spam messages.
Feature 1: Antispam As explained above, more than 90% of emails arriving at an organization are spam. Most spam does not contain malicious content, just information about some sales campaign. The reputation database is mostly hosted in the cloud, built from the vendor's intelligence and other customers' feedback, so both the vendor's intelligence capability and the size of its customer base matter. It is worth noting that some vendors use additional block lists for extra protection.
Also, the antispam engine must be tested carefully, especially if the organization's primary language is not English, since an engine's capability may differ across languages.
Although spam emails are not very dangerous, they are annoying due to their volume and the content of some of them. A good antispam engine and reputation capability, by stopping spam at the edge, reduces the number of remaining emails and allows better analysis of them.
Feature 2: Antivirus Like the antispam feature, a signature-based antivirus can stop most known malicious content sent to the organization. Different email security vendors embed different antivirus engines in their solutions, so even if you do not rely solely on the antivirus feature, it is important that a well-known vendor's engine is used here.
Feature 3: Sandbox With today's evolving attack types and more aggressive, focused attackers, sandboxes have become mandatory for organizations. I will not detail sandbox features here, but today a sandbox that analyzes emails is very important. In traditional antispam solutions, the antivirus engine can only stop known malware; organizations need a sandbox to analyze both unknown files and URLs. For suspicious URLs, a URL-rewriting (masking) feature can also be used, so users' direct access to a suspicious URL can be blocked.
The sandbox for email protection can come from a completely different vendor than the antispam solution; it can be positioned after the antispam product to analyze the remaining email, or it can be a cloud solution if the organization has no regulations against using the cloud.
Feature 4: Quick Response Organizations receive thousands of alerts every day. Most organizations do not have enough analysts to determine whether all these alerts are true attacks or false positives. Even worse, most email security solutions do not give enough information to triage an alert. For responding quickly, choose a solution that provides detailed analysis of the content.
Feature 5: End User Quarantine One of the worst aspects of email security gateways is the false positive rate. Since attackers craft ever more realistic emails to trick users, stricter rules may be required, and stricter rules mean more false positives: emails required for users' work also begin to be blocked. Of course, this leaves IT professionals in trouble; they have to drop their important tasks and spend time releasing emails from quarantine. So, an end user quarantine feature that lets users manage their own quarantine and release the emails they believe are clean is as important as the false positive rate itself.
One bad thing about end user quarantine: users can release genuinely suspicious emails to themselves. So, this feature should be used very carefully. Workload or security? One more trade-off for decision makers to weigh.
Feature 6: Scalability With evolving business models and growing organizations, scalability is always important. Not only email protection but all security products should be scalable, and this should be discussed especially during PoCs. Again, for organizations without regulations against cloud usage, scalability is easier with cloud-native solutions.
Questions To Ask When choosing an email security gateway product, it is better to ask the vendors these questions:
1- Does the solution use multiple technologies, including AI?
2- Does the solution provide intelligible reports on suspicious or malicious activities for responding quickly?
3- Which technologies does the solution have for identifying 0-day attacks?
4- What is the false positive rate of the solution?
5- Is the solution fed by any intelligence source?
6- What is the quality of those intelligence sources?
7- Can it be updated quickly against new threats?
8- What is its success rate in blocking suspicious URLs?
9- Can the solution share threat information with other security tools deployed in the organization?
10- What is the scalability capacity of the solution?
All IT security experts have surely faced situations where people outside of security know only "AV" when it comes to computer security. AVs were the heroes of our security for a long time.
Legendary Times AVs began their adventure as signature-based protection against known viruses and worms. As threats developed – first script kiddies, then financially motivated hacker groups – it was enough to update signatures weekly or every few days. Today, things work a little differently; actually, much more differently. As mentioned in the "A Guide to Choose EDR" post, with the explosion of connectivity between PCs and mobile devices and growing cloud usage, threats have also changed. Attackers now have the ability to bypass signature-based detection and protection technologies. To deal with this, heuristic detection was added to AVs, and machine learning and behavior monitoring were added to detect and block suspicious behaviors. AV vendors also added host-based IDS/IPS, host-based firewall, and device control capabilities, and these features became very useful for admins: all of them in one agent that is already deployed on all PCs.
Fall of the Hero Despite all these new features, research conducted after 2018 says that AV products miss more than 50% of attacks. Besides, false positives caused by constant updates create difficult situations for IT professionals. Everyone accepts that no solution provides 100% security. With this in mind, speed of response and visibility become the key capabilities against threats. This is why SIEMs must be used to complement AV. Yet there are also warnings that this is not enough, and that advanced tools like endpoint detection and response (EDR) solutions must be implemented alongside AV. That must be true; at least we see AV vendors now developing such solutions alongside their AV products. You can access a more detailed review of EDR solutions – what they must include and how to choose them – here.
What is next?
Now, AV vendors that also develop EDR solutions suggest customers implement them alongside AV. Meanwhile, vendors developing only EDR, or vendors that entered the endpoint field with EDR, say customers can replace their AV with EDR with peace of mind. But is it so easy to replace AV with EDR? Or, simply, is it easy to replace any AV with something else? As mentioned before, companies now use their AV agents for device control, host IDS/IPS, host firewall, application control, and whitelisting. To replace the AV, the new product must support these features, even if it is very successful at detecting and responding. And even when the solution has these features, there are too many policies, rules, and exceptions to migrate. I am sure all IT professionals will be wary of this replacement because of these policies. Until these problems are overcome, it seems better to use EDR alongside the AV solution. For now, it is also important to pick a vendor with enough experience working alongside commonly used AVs.
As repeated at every security event in the last decade, attackers' purposes and methods have changed greatly and become more complex. As if that were not enough, the increase in mobile devices used in organizations and the move of some (or most) services to the cloud have made endpoint protection more difficult. With expanded cloud usage and the development of mobile technologies, more users come to the office less often, which makes managing, monitoring, and protecting endpoints harder.
As said at the beginning, with attackers' new advanced techniques (advanced malware, fileless malware, exploits), it is very difficult to protect endpoints with only traditional endpoint security solutions. Nevertheless, and this is a subject for another discussion, that does not mean signature-based antivirus, host IPS, host firewall, and other conventional endpoint security solutions are meaningless now. While talking about traditional security solutions, we also have to touch on what an endpoint is, because the attack surface of today's organizations has grown: IoT and OT devices are now endpoints too. Endpoint security must now cover mobile phones, POS devices, wearables, sensors, cameras, HVAC systems, and cars, since they can access both the internet – even if only through the cloud – and the organization's network, wherever and whenever they want.
Compared to these traditional endpoint security solutions, EDR solutions add malicious activity detection, endpoint containment, incident investigation, and remediation capabilities for the endpoint. With these capabilities, they reduce the impact of an incident and provide intelligence for responding faster. EDR systems use an agent on each endpoint. Vendors feed these agents with their intelligence services, global customer data, firewalls, network and/or email-based APT devices, etc. With this intelligence, the agent provides deep, real-time monitoring, discovery, and response on the endpoint. An EDR system should use at least a few monitoring methods, such as:
IOC Detection: the agent compares system changes with its Indicators of Compromise. These IOCs can be fed from other devices of the same vendor on the network, or from global customers and intelligence services.
Anomaly Detection: checking the system for abnormal states.
Behavior Detection: checking the system for bad or malicious behaviors.
Machine Learning and AI: the ability to determine malicious activities without being explicitly programmed.
For an effective fight against threats, time is the most important thing. Your EDR solution should help you detect, investigate, and respond as quickly as possible. First, it should detect the threat as soon as possible; here, the power of the methods above shows its importance. The strength of the vendor's intelligence services determines the strength of the IOCs: a vendor should feed customers' EDRs with fast, effective intelligence, and, as community data, vendors can feed their customers with other customers' known-bad data. A bigger community helps you more.
Integration with other security tools in the network is also a key point. If an EDR solution can be fed by the other network tools, endpoints can be prepared for threats seen elsewhere, such as in network traffic or in an email. Considering that malicious software typically reaches the endpoint via email or the network, this feature is very important. With integration and advanced search capabilities, a threat seen anywhere on the network can be caught quickly on the endpoint. It follows that an EDR solution must include an advanced search feature that can query endpoints by many different criteria; these options help admins sweep their clients for possible threats.
An EDR solution must also provide clear and meaningful explanations about threats. Merely detecting a threat is not enough for admins; the solution must help them respond. To respond quickly and correctly, admins must understand the content of the threat.
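The IoC sweep such an advanced search performs can be sketched as a simple match of collected endpoint telemetry against known-bad values. This is a conceptual sketch, not any vendor's API; the field names and indicator values below are illustrative:

```python
def sweep(events, iocs):
    """events: collected endpoint telemetry, one dict per observation
    (e.g. {"host": ..., "sha256": ..., "dest_ip": ...}).
    iocs: mapping of telemetry field -> set of known-bad values.
    Returns every event that matches at least one IoC."""
    hits = []
    for event in events:
        for field, bad_values in iocs.items():
            if event.get(field) in bad_values:
                hits.append(event)
                break  # one match is enough to report the event
    return hits
```

Feeding such a sweep with indicators first seen in email or network traffic is exactly the integration benefit described above: the endpoint search immediately reuses what another tool has already observed.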
Containment is another important EDR feature and a time-saver for admins while they work on a threat. An endpoint with malicious content should be contained during the analysis, so the malicious content is prevented from spreading to other endpoints.
Also, for investigation, the EDR solution should provide a full snapshot of the endpoint's state for the time window in which the malicious activity happened: a full or targeted memory dump, the state of services, etc. Automatic collection of this information is critical during investigations. From experience, I know that the endpoint is the most boring and difficult part of security: deployment problems, machines slowing down after deployment, user complaints, and so on. Most security admins do not like dealing with endpoint security. But as most studies show, thousands of threats are produced every day, and protecting only the network and email channels is not enough against them. We have to give endpoints the importance they deserve. With all these endpoint challenges, choosing the right vendor becomes the most important thing. An endpoint security solution must not obstruct end users' work; even for security, the business must go on. At this point, the vendor's experience is very important, and getting quick support during a problem should also be evaluated during the EDR selection process.