Category Archives: Architectural

CSA Announced Their 50 Trusted Providers

The Cloud Security Alliance (CSA) announced the selection of a first round of “trusted providers” for cloud security. CSA, an organization dedicated to defining best practices for cloud security, expects that these trustmarks, which will be displayed on each organization’s Security, Trust, Assurance and Risk (STAR) registry entry, will help customers identify cloud providers that demonstrate their commitment to security.

There are several criteria that companies must meet to become a CSA Trusted Cloud Provider.

CSA’s co-founder and CEO, Jim Reavis, said: “This new CSA Trusted Cloud Provider program builds upon CSA cloud provider certification to also quantify the credentialing of provider personnel and their contributions to industry projects. This is intended to offer transparent B2B marketplace intelligence so businesses can better evaluate the security commitment and accomplishments of cloud providers.”

It is noteworthy that many major security vendors already working in cloud security are missing from the list. We will see how these trustmarks change the game in the infosec world.

Email Security – A Buyer’s Guide

All IT professionals know that most cyber attacks begin with an email. In fact, according to statistics in the PhishMe Defense Guide 2017, 91% of cyber attacks began with an email. This is no surprise, since we all know that humans are the weakest link in cyber defense. If users lack awareness (which may, of course, be the IT professionals’ fault), then with today’s carefully crafted phishing emails they can easily download malicious content or have their credentials stolen. Phishing attacks against users make it easier for cyber criminals to breach an organization than scanning websites for vulnerabilities and applying complex techniques for the same gain.

As we said at the beginning, most successful attacks begin with phishing emails. Attackers may send malicious content directly via email, or send a link pointing to a phishing site, to a download for the malicious content, or to a C&C server. Sometimes, to bypass security devices, attackers leave the link’s destination empty at first and add the malicious content later, so users can download it once the link has passed the security controls.

Traditional signature-based or reputation-based email security controls cannot stop these types of attacks. Signature-based controls cannot stop 0-day threats, and criminals use unique malware, URLs, or phishing sites to bypass these signature-based security mechanisms.
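To make concrete why unique malware defeats signature matching, here is a minimal sketch in Python; the sample bytes and the hash database are invented for illustration and stand in for a vendor’s real signature feed:

```python
import hashlib

def sha256(data: bytes) -> str:
    """Hex digest used as the 'signature' of a file."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical signature database built from previously seen samples.
known_sample = b"...previously captured malicious payload..."
signature_db = {sha256(known_sample)}

def is_known_malware(payload: bytes) -> bool:
    """Flag the payload only if it exactly matches a stored signature."""
    return sha256(payload) in signature_db

# The exact sample is caught...
assert is_known_malware(known_sample)
# ...but appending a single byte produces a new hash that no signature matches.
assert not is_known_malware(known_sample + b"\x00")
```

This is why a criminal who recompiles or slightly repacks a malware sample gets a fresh "unique" file that signature-based controls have never seen.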

Most antispam solutions work this way, bundling an antivirus engine into the product. As explained, even though it is not enough on its own, spam is still a very big problem for organizations, since more than 90% of emails reaching an organization are spam. So, when choosing an email protection solution, the antispam feature is one of the most important capabilities to check. If you do not stop known spam, it will be very difficult to combat more sophisticated email attacks while trying to manage too many spam messages.

Feature 1: Antispam
As explained above, more than 90% of the emails an organization receives are spam. Most of these spam emails do not contain malicious content and just advertise a sales campaign. The reputation database is mostly hosted in the cloud and built from the vendor’s intelligence and other customers’ feedback, so both the vendor’s intelligence capability and the size of its customer base become important. It is worth noting that some vendors use multiple blacklists for better protection.
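The multiple-blacklist idea can be sketched as a simple lookup that counts how many independent lists flag a sender; the list names and IPs below are made up (the addresses are from the RFC 5737 documentation ranges):

```python
# Hypothetical reputation feeds: each maps a source to a set of flagged IPs.
BLOCKLISTS = {
    "vendor_feed":       {"203.0.113.7", "198.51.100.22"},
    "customer_feedback": {"203.0.113.7"},
}

def reputation_score(sender_ip: str) -> int:
    """Count how many independent blocklists flag the sender."""
    return sum(sender_ip in bl for bl in BLOCKLISTS.values())

def is_spam_source(sender_ip: str, threshold: int = 1) -> bool:
    """Reject mail from senders flagged by at least `threshold` lists."""
    return reputation_score(sender_ip) >= threshold
```

A real product would query live DNS blocklists and the vendor’s cloud, but the trade-off is the same: more independent, well-maintained feeds mean better coverage, which is why the vendor’s intelligence and customer base matter.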

Also, the antispam engine must be tested carefully, especially if the organization’s primary language is not English; the engine’s effectiveness may differ between languages.

Although spam emails are not very dangerous, they are annoying due to their volume and, in some cases, their content. A good antispam engine and reputation capability not only stop these spam emails but also allow better analysis of the remaining emails, by reducing their number at the edge.

Feature 2: Antivirus
Like the antispam feature, a signature-based antivirus feature can stop most known malicious content sent to the organization. Different email security vendors embed different antivirus engines in their products, so even if you do not rely on the antivirus feature alone, it is important that a well-known vendor’s engine is used here.

Feature 3: Sandbox
With today’s evolving attack types and more aggressive, focused attackers, sandboxes have become mandatory for organizations. I will not explain sandbox features in detail here, but today a sandbox that analyzes emails has become very important. In traditional antispam solutions, the antivirus engine can only stop known malware; organizations need a sandbox to analyze both unknown files and URLs. For suspicious URLs, a masking (URL rewriting) feature can also be used, so users’ direct access to the suspicious URL can be blocked.
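The URL masking idea can be sketched in a few lines: every link in the message body is rewritten to pass through a scanning gateway, so the destination is re-checked at click time. The gateway hostname below is a made-up placeholder, not a real service:

```python
import re
from urllib.parse import quote

# Hypothetical click-time scanning gateway (placeholder URL).
GATEWAY = "https://url-gateway.example.com/scan?target="

URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def mask_urls(email_body: str) -> str:
    """Rewrite every link so clicks resolve through the scanning gateway.

    Because the gateway re-evaluates the destination on each click, the trick
    of weaponizing a link only after it has passed the initial scan no longer
    works: the user's click is blocked once the page turns malicious.
    """
    return URL_RE.sub(lambda m: GATEWAY + quote(m.group(0), safe=""), email_body)

body = "Quarterly report: http://example.org/report.pdf"
masked = mask_urls(body)
# The original destination is now an encoded parameter of the gateway URL.
```

Commercial gateways do this with tracking tokens and per-recipient identifiers, but the rewriting step itself is this simple.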

A sandbox for email protection can come from a completely different vendor than the antispam solution; it can be positioned after the antispam product to analyze the remaining emails, or it can be a cloud service if the organization has no regulation against using cloud solutions.

Feature 4: Quick Response
Organizations receive thousands of alerts every day. Most organizations do not have enough analysts to determine whether all of these alerts are true attacks or false positives. Even worse, most email security solutions do not provide enough information to triage an alert. To respond quickly, choose a solution that provides detailed analysis of the content.

Feature 5: End User Quarantine
One of the worst aspects of email security gateway solutions is the false positive rate. Since attackers craft ever more realistic emails to deceive users, stricter rules may be required, and stricter rules mean more false positives. Emails users need for their work start to be blocked as well. Of course, this leaves IT professionals in trouble: they must drop their important tasks and spend time releasing emails from quarantine. So an end user quarantine feature, which allows users to manage their own quarantine and release the emails they believe are clean, is as important as the false positive rate itself.

The downside of end user quarantine is that users can release genuinely suspicious emails to themselves, so this feature should be used very carefully. Workload or security? One more trade-off for decision makers to weigh.

Feature 6: Scalability
With evolving business models and growing organizations, scalability is always an important point; not only email protection but all security products should be scalable. This should be discussed explicitly during PoCs. Again, for organizations without a regulation against cloud usage, scalability is easier with cloud-native solutions.

Questions To Ask
While choosing an email security gateway product, it is better to ask vendors these questions:
1. Does the solution use multiple technologies, including AI?
2. Does the solution provide intelligible reports on suspicious or malicious activities to enable quick response?
3. Which technologies does the solution use to identify 0-day attacks?
4. What is the false positive rate of the solution?
5. Is the solution fed by any threat intelligence source?
6. What is the quality of these intelligence sources?
7. Can it be updated quickly against new threats?
8. What is its success rate in blocking suspicious URLs?
9. Can the solution share threat information with other security tools deployed in the organization?
10. What is the scalability capacity of the solution?

Is DLP Dead?

DLP is a technology we have used for more than a decade. Its starting point was protecting organizations’ intellectual property (IP), and it became very popular across many sectors. Organizations have spent, and are still spending, millions of dollars on DLP solutions to protect their private data. However, Gartner says of DLP: “They become an annoying or toothless technical control rather than a component in a powerful information risk management process.” But why?

According to some surveys, professionals’ biggest challenge is keeping policies up to the rate of business change. Others are the inhibition of employee productivity by these policies and limited data visibility. Too many false positives are also a very big problem for IT professionals.

To take these one by one: the need for policies is really one of the biggest problems of DLP solutions, regardless of vendor. Before anything else, organizations have to know what data they must protect, which means knowing which data is sensitive for the organization. Most organizations started their DLP project without knowing their sensitive data. It is very clear that without data classification it is impossible to know what the sensitive data is. Again, most organizations learned this only after implementing DLP, and started a data classification project perhaps years later. And of course, merely starting or implementing a classification project is not enough to classify the data; it is a very broad and continuous process that requires wide awareness among users.

So, because of this obscurity about their own data, organizations took their policies from others’ experiences instead of their own needs; industry experience became very important at this step. They created and ran the policies hoping these would protect their data.

At the same time, just knowing what to protect is not enough; you must also know how to protect that data. If you do not know which channels people can use to leak data, it is impossible to protect it. These channels were also added to policies based on industry experience. Even though security risk management professionals know what happens if they miss a required policy, they run these policies with the thought of protecting as much as they can. Everybody knew this was not enough to protect all the data, so the slogan became: “DLP prevents users from doing the wrong thing; it does not prevent data leakage by malicious users.”

Another weakness of DLP is its focus on content to identify data. Even with recent features like AI, it classifies a file by its content, using pattern matching (like regex) or exact matching; very limited context examination is used. So DLP is again not effective against malicious users, since content can be changed very easily before leaking it. Also, in a living organization the content of sensitive data inevitably changes, which requires policies to be constantly updated. But as I said before, new policies mean more potential inhibition of employee productivity, more time spent optimizing rules, and more exclusions. More exclusions mean more gaps in data protection. More focus on context is needed to protect the data.
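A minimal illustration of this content-focused matching and its fragility, using a made-up regex rule for 16-digit payment card numbers (a common DLP pattern):

```python
import re

# Hypothetical DLP rule: four groups of four digits, optionally separated
# by a space or hyphen. Real products ship libraries of such patterns.
CARD_RE = re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b")

def dlp_blocks(text: str) -> bool:
    """Return True if the outbound text matches the card-number pattern."""
    return CARD_RE.search(text) is not None

# The rule fires on the expected format...
assert dlp_blocks("card: 4111 1111 1111 1111")
# ...but a trivially reformatted copy of the same number slips through.
assert not dlp_blocks("card: 4111.1111.1111.1111")
```

A motivated insider needs only this level of effort (re-separating digits, encoding the file, pasting a screenshot) to step outside the pattern, which is exactly why content-only inspection fails against malicious users.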

In big organizations, false positives can be the biggest problem, given the number of employees, the amount of sensitive data, and the number of policies. The large volume of incidents produced every day requires more time, and certainly more staff, to review. If you survey the teams reviewing DLP incidents, they will tell you hundreds of incidents could be ignored. Actually, I believe it would be a good outcome if the organization caught one or two real incidents in a year. The organization hopes that the captured incidents yield an acceptable ROI. Meanwhile, the organization can never be sure that nobody leaked any data.

Every IT professional who has used DLP knows there are many other annoying aspects. For example, if you want to stop someone leaking data through an endpoint channel like a printer or USB, every PC needs an agent installed, and of course these agents must work as they should. This is a very big challenge for IT personnel managing endpoint solutions: it requires investigating very strange situations, sometimes spending too much time on a single PC when a problem occurs, and continuously testing the agent. Not only incident analysis but also the management of the DLP solution requires substantial resources.

One last thing I want to mention: DLP inspects only at the point of egress. On the endpoint, that means the printer or USB; in the network layer, internet access; and in the email channel, emails sent outside the organization. Data protection must also cover the inside of the network, such as file servers. Since, as we have seen, protection at the egress point is difficult and data can still leak (because of policy gaps, a faulty agent, changed content, etc.), this point becomes very important.

As a result, DLP is not as efficient a solution as expected. It must be a continuous process, not a standalone project. Despite all this, I do not believe DLP will die. At least in many countries there are regulations in different industries that make DLP compulsory, and these regulations require DLP solutions covering endpoint, network, and email channels. And we still do not have a more efficient standalone alternative. But organizations should consider supporting their DLP solutions with other tools such as UEBA or DaBA. DaBA solutions in particular can provide complete visibility into the movement of sensitive data across the whole network. Even if users do not try to leak data outside the organization (in which case there is nothing to catch at egress), it is very important to know who is using this data inside the organization, so the data can be tracked with a need-to-know approach: if someone does not need certain data for their job, they should not be able to reach it. UEBA and DaBA solutions can provide this visibility and add a new layer to the data protection mechanism.

Fall of A Hero – Rise and Fall of AV

All IT security experts have surely faced situations where people with no connection to security know only about AV when it comes to computer security. AVs were the heroes of our security for a long time.

Legendary Times
AVs began their adventure as signature-based protection against known viruses and worms. In the early stages of threat development, first with script kiddies and then with financially motivated hacker groups, it was enough to update signatures weekly or every few days. Today, things work a little differently. Actually, much more differently. As mentioned in the earlier blog post “A Guide to Choose EDR”, with the explosion of connectivity between PCs and mobile devices and the growing use of the cloud, threats have also changed. Attackers now have the ability to bypass signature-based detection and protection technologies. To deal with this, heuristic detection was added to AVs, and machine learning and behavior monitoring were added to detect and block suspicious behaviors. AV vendors also added host-based IDS/IPS, host-based firewall, and device control capabilities, and these features became very useful for admins, who can use all of them within one agent that is already deployed on all PCs.

Fall of the Hero
Despite all these new features, research conducted after 2018 says that AV products miss more than 50% of attacks. Besides, false positives caused by constant updates create difficult situations for IT professionals.
Everyone accepts that no solution provides 100% security. With this approach, speed of response and visibility become the key capabilities against threats. This is the reason SIEMs must be used to complement AV. Yet there are also caveats that this is not enough, and advanced tools like endpoint detection and response (EDR) solutions must be implemented alongside AV. That must be true; at least we see that AV vendors are now also developing such solutions beside their AV products. You can find a more detailed review of EDR solutions, what they must include, and how to choose them here.

What is next?

Now, AV vendors that are also developing EDR solutions suggest that customers implement these solutions alongside AV. Meanwhile, vendors developing only EDR, or vendors that entered the endpoint field with EDR, say customers can replace their AV with EDR solutions with peace of mind. But is it so easy to replace AV with EDR? Or simply, is it easy to replace any AV with something else?
As mentioned before, companies now use their AV agents for device control, host IDS/IPS, host firewall, application control, and whitelisting. To replace the AV, the new product must support these features, even if it is very successful at detection and response. And even if the solution has these features, there are too many existing policies, rules, and exceptions for all of them. I am sure all IT professionals will be afraid of this replacement because of those policies. Until these problems are overcome, it seems better to use EDR alongside the AV solution. For now, it is also important to choose a vendor that has enough working experience with commonly used AVs.