Last week, a friend called me with bad news about a company. The company was looking for help: it had fallen victim to Egregor ransomware, and the attackers had taken all of its data, encrypted it, and demanded 500,000 dollars within three days, threatening to publish the data publicly otherwise. But public exposure was not the only problem; the company had also lost access to all of its private data. How was that possible?
Ransomware has been one of the biggest problems in the cyber world for years now. We have heard about it, worked on it, and watched far too many bitcoins paid out over that time. There are tens (maybe hundreds) of webinars, talks, and articles trying to help organizations stay safe against ransomware. As long as the weakest link is human, it is always possible to be exposed to ransomware, but it is not that difficult to confine an infection to a small area.
The company I mentioned above is a chemical company and, of course, holds a lot of private data, such as formulas. When I say they lost all their data, I mean they lost their backups too: since they had not isolated their backup network, their backups were encrypted as well. They do have some backup tapes, but they cannot use them, because they had never tested whether the tapes actually worked, and of course the tapes did not work when the company needed them.
There are some basic prevention steps against ransomware. Briefly: user awareness, regular phishing tests, not only an anti-spam product but also a sandbox or similar technology against malicious emails, EDR to respond faster to malicious behavior, NDR to detect anomalies in the network, backing up data and testing those backups regularly, isolating the backup network so that an infiltrated attacker cannot harm the backups, isolating private data and applying need-to-know, limiting users' internet access, and more. The list looks long, but most of the items do not require much expenditure. Yet if you do not invest in professionals and in any of these technologies, you are simply squandering your money, and you can never put a price on lost reputation, secret formulas, and data.
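Of these steps, verifying backups is among the cheapest to automate; the company above would have discovered its dead tapes years earlier with even a trivial scheduled check. A minimal sketch of such a check, assuming gzip-compressed tar archives with a stored SHA-256 digest (the paths and file names here are hypothetical, not from any real product):

```python
import hashlib
import tarfile
from pathlib import Path


def backup_is_readable(archive: Path) -> bool:
    """Confirm the archive opens and contains at least one member."""
    try:
        with tarfile.open(archive, "r:gz") as tar:
            return len(tar.getnames()) > 0
    except (tarfile.TarError, OSError):
        return False


def checksum_matches(archive: Path, checksum_file: Path) -> bool:
    """Compare the archive's SHA-256 digest against the stored value."""
    expected = checksum_file.read_text().split()[0]
    actual = hashlib.sha256(archive.read_bytes()).hexdigest()
    return actual == expected


# Hypothetical locations; point these at the real backup store and
# run the checks on a schedule, alerting when either returns False.
BACKUP = Path("backups/daily-backup.tar.gz")
CHECKSUM_FILE = Path("backups/daily-backup.sha256")
```

A real restore test (actually unpacking the archive onto a spare machine) is still needed periodically, but even this level of automation catches silently corrupted or encrypted backup files.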
All these measures can add up, and I can understand that a company may not be able to invest in all of them. But as I said above, this company's backup network was not isolated and could be reached from every other network. And, as I learned, their only protection was an antivirus product that was not even up to date, and I am fairly sure they did not track whether every PC and server actually had it installed. Most of these measures are not expensive. At a minimum, every company needs to invest in talented security professionals, and doing so saves money: I doubt all of these measures together cost more than 500k dollars plus lost reputation plus publicly published private data. Investing in security is not wasting money; it is directly saving money. Everyone needs to understand this without living through it first.
DLP is a technology we have been using for more than a decade. Its starting point was protecting the intellectual property (IP) of organizations, and it became very popular across many sectors. Organizations have spent, and are still spending, millions of dollars on DLP solutions to protect their private data. However, Gartner says of DLP solutions: "They become an annoying or toothless technical control rather than a component in a powerful information risk management process." But why?
According to some surveys, the biggest challenge professionals face is the difficulty of keeping policies up to date at the rate of business. Other challenges are the inhibition of employee productivity caused by these policies, and limited data visibility. Too many false positives are also a very big problem for IT professionals.
Taking these one by one: the need for policies is really one of the biggest problems of DLP solutions, regardless of vendor. Before anything else, organizations have to know which data they must protect, and for that they have to know which data is sensitive to them. Most organizations started their DLP project without that knowledge. It is clearly impossible to know what the sensitive data is without data classification, yet most organizations learned this only after implementing DLP, and started a data classification project perhaps years later. And of course, merely starting or implementing a classification project is not enough to classify the data: it is a broad, continuous process that needs wide awareness among users.
So, because of this obscurity about their own data, organizations derived their policies from others' experiences instead of their own needs; industry experience became very important at this step. They created and ran the policies hoping those policies would protect their data.
At the same time, knowing what to protect is not enough; you must also know how to protect it. If you do not know which channels people can use to leak data, protecting it is likewise impossible. These channels, too, were added to the policies according to industry experience. Even though security risk management professionals know what it means to miss a required policy, they run what they have with the thought of protecting as much as they can. Everybody knew this was not enough to protect all the data, so the slogan became: "DLP prevents the user from doing the wrong thing; it does not prevent data leakage by malicious users."
Another weakness of DLP is its focus on content to identify data. Even with recent features such as AI, it identifies a file by its content, using pattern matching (such as regular expressions) or exact matching; very little context is examined. So DLP is again ineffective against malicious users, since the content can be changed very easily before leaking it. Moreover, in a living organization the content of sensitive data inevitably changes, which means the policies must be constantly updated. But as I said before, new policies mean a greater chance of inhibiting employee productivity, more time spent optimizing rules, and more exclusions, and more exclusions mean weaker data protection. More focus on context is needed to protect the data.
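To see how fragile pure content inspection is, consider a toy rule of the kind DLP products commonly ship, here a regular expression for US Social Security Numbers (this is an illustrative sketch, not any vendor's actual rule):

```python
import re

# A toy content-based DLP rule: flag text containing something that
# looks like a US Social Security Number (123-45-6789).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")


def is_flagged(text: str) -> bool:
    """Return True if the content-inspection rule matches the text."""
    return bool(SSN_PATTERN.search(text))


original = "Employee SSN: 123-45-6789"
# A malicious user can trivially reshape the content to slip past the
# rule, e.g. by padding the separators with spaces. The information is
# identical, but the pattern no longer matches.
evaded = "Employee SSN: 123 - 45 - 6789"

print(is_flagged(original))  # True
print(is_flagged(evaded))    # False
```

Every such evasion forces the policy author to broaden the pattern, which in turn raises false positives on legitimate traffic: exactly the policy-maintenance treadmill described above.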
In big organizations, false positives can be the biggest problem, given the number of employees, sensitive data items, and policies. The large number of incidents produced every day requires more time, and certainly more staff, to review. If you surveyed the teams who review DLP incidents, they would tell you that hundreds of incidents can safely be ignored. Actually, I believe it is a good outcome if an organization catches one or two real incidents in a year. The organization hopes the captured incidents yield an acceptable ROI; meanwhile, it can never be sure that nobody leaked any data.
Every IT professional who has used DLP knows there are many other annoying aspects of it. For example, if you do not want someone leaking data through an endpoint channel such as a printer or USB drive, every PC needs an agent installed, and of course those agents must work as intended. This is a very big challenge for the IT personnel managing endpoint solutions: it means chasing very strange situations, sometimes spending far too much time on a single PC when a problem occurs, and continuously testing the agent. Not only incident analysis but also the management of the DLP solution itself requires a great many resources.
One last thing I want to mention: DLP inspects only at the point of egress. On the endpoint, that means the printer or USB port; in the network layer, internet access; and in the email channel, messages sent outside the organization. Data protection must also cover the inside of the network, such as file servers. Since, as we have seen, protection at the egress point is difficult and still leaves ways to leak data (because of policy gaps, a malfunctioning agent, altered content, and so on), this point becomes very important.
In conclusion, DLP is not as efficient a solution as expected. It must be a continuous process, not a standalone project. Despite all this, I do not believe DLP will die. In many countries and industries there are regulations that make DLP compulsory, requiring a solution that covers the endpoint, network, and email channels, and we still have no more efficient solution on its own. But organizations should consider supporting their DLP with other solutions such as UEBA or DaBA. DaBA solutions in particular can provide complete visibility of the movement of sensitive data across the whole network. Even when users do not try to leak data outside the organization (so there is nothing for DLP to catch), it is very important to know who is using that data inside the organization, so the data can be governed with a need-to-know approach: if someone does not need a piece of data for their job, they should not be able to reach it. UEBA and DaBA solutions can provide this visibility and add a new layer to the data protection mechanism.