Incident Handling and Response to Insider Threats

Because an insider is a trusted employee with access to a variety of data, insider threats are a major risk for organizations. Organizations invest in protecting the perimeter against external threats but focus less on internal ones, which makes insider threats even riskier.

Attacks may come from different types of employees: system admins or managers with authorized access to critical data, unhappy or terminated employees, users who lose a device containing sensitive data or send e-mail to the wrong recipients, or personnel untrained in security policies and best practices who fall victim to social engineering attacks.

All types of incidents require similar response steps. Here, we will explain the stages that incident responders, and in fact the whole organization, must carry out against an insider attack.

IH&R Steps for Insider Threats

EFFECTIVENESS OF INSIDER THREAT

Insider threat is a major risk because these attacks are very effective. They are difficult to detect and can go unnoticed for years. Attacking from the inside is easy: users already have authorization to some data and systems, and they can cover their actions by reaching the logs and deleting or modifying them, which makes these attacks even harder to detect. Organizations need to monitor user behavior to detect and respond quickly.

As with all types of attacks, organizations need well-planned and regularly tested incident response plans to contain and eradicate insider attacks.

PREPARATION

The organization must always be ready for an insider attack. The preparation stage is key to detecting and responding to these attacks.

  • Conduct security awareness trainings regularly to inform users about social engineering techniques. Insider attacks are not carried out only by malicious employees; regular awareness trainings will prevent your users with access to sensitive data from being exploited by malicious people.
  • Train users on how to report any policy violation.
  • Classify the organization's data, identify the critical items, and apply a need-to-know approach to data access.
  • Make sure all necessary logs are collected in the SIEM.
  • Use privileged access management tools to store passwords for all accounts that reach critical data or the production environment.
  • Make sure that terminated employees' access rights are immediately removed, both for logical and physical systems.
  • Deploy data loss prevention tools, but never assume that DLP will fully protect you. It is important to know the gaps of DLP tools to protect data better. Make sure you read our post about DLP 🙂
  • Install NDR to detect abnormal user behavior. You can read our article explaining the importance of NDR against insider threats.
  • Install honeypots and honeytokens to lure attackers.
  • Segregate the backup network from production and test networks, and implement secure access methods to backup files.
  • Apply device control across all of the organization's systems. Users should not be allowed to use external storage.
  • Have employees sign a confidentiality and non-disclosure agreement through the Human Resources department.
  • Regular and objective interviews and feedback sessions will help the organization keep employees more content.
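The honeytoken idea above can be sketched in a few lines. This is a minimal illustration, not a product: it drops a bait file with fake credentials (the filename and contents are hypothetical) and polls its access time. On filesystems mounted with `noatime` this check never fires, so real deployments should rely on audit logging (auditd, Windows object access auditing) instead.

```python
import os
import time

# Hypothetical decoy path; in practice, place it somewhere an attacker
# would browse, e.g. a share named "Finance" or "Backups".
DECOY = "passwords_backup.txt"

def create_decoy(path):
    """Drop a bait file with fake credentials and record its access time."""
    with open(path, "w") as f:
        f.write("admin:Sup3rS3cret!\n")  # bait only; no real account uses this
    return os.stat(path).st_atime

def decoy_was_read(path, baseline_atime):
    """True if the file's access time moved past the recorded baseline."""
    return os.stat(path).st_atime > baseline_atime

def watch(path, baseline_atime, interval=5.0):
    """Poll the decoy forever and alert on any read."""
    while True:
        if decoy_was_read(path, baseline_atime):
            print(f"ALERT: decoy {path} was accessed")
            baseline_atime = os.stat(path).st_atime
        time.sleep(interval)
```

Since no legitimate process ever touches the decoy, any access is a high-confidence signal, which is what makes honeytokens cheap and low-noise.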

DETECT AND ANALYZE

Indicators of insider threats are mostly abnormal user behaviors. So NDR with artificial intelligence technologies to detect anomalies in the network, UEBA, and honeypot tools are critical to detecting these types of attacks. Changes in network usage patterns may be an indicator of an insider threat.

It is important to collect logs in a SIEM, but in real life we have often seen that the sheer volume of log data causes malicious activity to be missed. Collecting valuable logs and correlating them matters more than collecting everything. Also, missing or modified logs may themselves be an indicator of an insider threat, so all log sources must be checked regularly.

Accessing resources at unusual times and from unusual locations may indicate an insider threat. Multiple failed login attempts can be combined with this time and location information to uncover unauthorized access attempts.
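A minimal sketch of the time-and-location idea: build a per-user baseline of login hours and source countries from historical authentication logs, then flag logins outside that baseline. The `(user, hour, country)` event format is an assumption for illustration; a real SIEM rule would parse your own log schema.

```python
from collections import defaultdict

def build_baseline(events):
    """events: iterable of (user, hour, country) from historical auth logs."""
    hours = defaultdict(set)
    countries = defaultdict(set)
    for user, hour, country in events:
        hours[user].add(hour)
        countries[user].add(country)
    return hours, countries

def flag_logins(events, hours, countries):
    """Flag logins at an hour, or from a country, never seen for that user."""
    alerts = []
    for user, hour, country in events:
        reasons = []
        if hour not in hours.get(user, set()):
            reasons.append("unusual hour")
        if country not in countries.get(user, set()):
            reasons.append("unusual country")
        if reasons:
            alerts.append((user, hour, country, reasons))
    return alerts
```

A production UEBA tool scores deviations statistically instead of using hard set membership, but the baseline-then-compare structure is the same.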

Users' social media activity should be monitored. Unhappy and unmotivated users may post inappropriate information about the organization.

After a suspicious activity has been reported, incident responders must analyze logs from different sources, including IDS/IPS, proxy, NDR, EDR, DLP, and e-mail logs. They should check for suspicious network connections and data transfers out of the network.

CONTAINMENT

For all types of attacks, containment is an indispensable stage for incident responders. It is vitally important to contain the source in question to stop the bad actor's actions, both lateral and outbound; containment minimizes the damage. Advanced EDR tools allow containment of such sources without having to be physically present near them, and incident handlers can keep analyzing these sources while the threat is unable to spread.

After detecting and containing the malicious insider, all privileges and credentials of this actor should be blocked, including e-mail and domain accounts and physical access cards.

ERADICATION

The organization should have an incident response plan and procedures so it can move fast after an incident occurs. Eradication is also an important stage of incident handling, and incident handlers should know in advance what to do in case of an insider attack by checking the policies and procedures. However, eradication is not just the CSIRT's job; these are processes all departments and employees must be involved in. The malicious actor's behavior should be traced step by step, and the missing preventive or detective controls that allowed it must be corrected. New security controls should be added and the preparation stage reviewed again.

RECOVERY

The recovery stage must begin immediately after detecting, containing, and eradicating the insider threat incident. If data has been stolen and exfiltrated, incident responders should contact the threat actor immediately, before the data is sold or disclosed publicly.

Incident responders must be sure to gather enough evidence for legal proceedings. This evidence will also help with insurance processes.

In case the attacker placed malware or a backdoor inside the network, all systems should be checked carefully and all outbound connections examined for C&C communication. A threat hunting activity may be required.

If the stolen data includes user credentials, passwords should be changed across the whole organization.

POST-INCIDENT ACTIVITIES

This is one of the most important steps in incident response. The CSIRT should create a lessons-learned document after every incident, and this goes for insider threat incidents too. These documents will help the organization prepare more effectively for possible future incidents. At this stage, the confusion caused by the incident will have passed, and the teams and responsible parties can identify what needs to be done for future readiness. Policies and procedures should also be reviewed and changed if needed after the lessons-learned work.

Also, all incidents and evidence should be documented properly for future use.

Network Detection and Response

As organizations and security teams, we have purchased many security devices to provide both network and endpoint security. However, attacks continue at the same pace, and in fact we faced bigger attacks last year, and they are getting more sophisticated. So, what is the next step for organizations?

Gartner shared its NDR market guide last year. The idea is that NDR uses (or must use) artificial intelligence to detect malicious behaviors, both from external threat actors and from insider threats. This means we will no longer just store huge amounts of network traffic via full packet capture products, but also detect anomalies in the captured traffic. NDR will thus play a major role in helping security teams respond quicker.

Unfortunately, more sophisticated attacks were not the only obstacle for teams last year; the "new normal" also made security teams' job hard. Now more users are reaching the organization's resources, getting e-mails, and downloading files outside the office. Control over users' behavior is shrinking, and the need for more sophisticated technologies is increasing accordingly.

MACHINE LEARNING

Even if we implement many security technologies in our infrastructure, attacks still go on. It is now as necessary to detect whether anyone is already inside as it is to protect the border. Honeypot technologies and EDRs have been used for this for years, but they are not enough to decrease the dwell time. If you fail to prevent and detect an attacker inside your network, or an insider threat, it is always difficult to prevent data exfiltration or your files from being encrypted.

Machine learning is the key here. The main idea is anomaly detection inside the network. The first step is to profile the entire organization's network and its users' and computers' traffic. After such profiling, it is easier to detect anomalies inside the network. Anomalies can take different forms: data exfiltration to rare destinations, uploading files to IP addresses without hostnames, login attempts from strange locations (for cloud or VPN), or copying a large number of files from an SMB share. We expect the NDR to catch all of that, and of course more.
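The profiling step can be illustrated with the simplest possible model: record each host's daily outbound byte counts over a training window, then flag a day that sits several standard deviations above the mean. Real NDR products use far richer features and models; this is only a sketch of the idea.

```python
import statistics

def baseline(daily_bytes):
    """daily_bytes: a host's outbound byte counts over a training window."""
    mean = statistics.mean(daily_bytes)
    stdev = statistics.stdev(daily_bytes)
    return mean, stdev

def is_anomalous(today_bytes, mean, stdev, threshold=3.0):
    """Flag if today's volume is more than `threshold` standard
    deviations above the historical mean (a simple z-score test)."""
    if stdev == 0:
        return today_bytes > mean
    return (today_bytes - mean) / stdev > threshold
```

The same structure (learn a baseline, score deviations) applies to any of the anomalies listed above: destination rarity, upload volume, file-copy counts, and so on.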

CLOUD

More organizations are using cloud infrastructure every day. Public, private, or hybrid, cloud infrastructures are now a part of our lives. Critical files are stored and applications run there, and the customers remain responsible for the data's security.

Think of a scenario like this: your users store files in the cloud and work with a few of them day to day. A user with permission to reach these files downloads most of them in a very short time, then resigns. Or this user's credentials are compromised, and someone connects to your cloud from a country none of your users normally connect from and behaves anomalously. NDR must also cover the cloud and detect these incidents. Where a UEBA solution is hard to implement, NDR can be deployed to detect insider threats.
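The mass-download scenario maps to a simple sliding-window rule over cloud audit logs: alert when one user pulls an unusually large number of distinct files in a short window. The `(timestamp, user, filename)` event format and the thresholds here are assumptions for illustration.

```python
from collections import deque

def detect_mass_download(events, max_files=50, window_seconds=600):
    """events: (timestamp, user, filename) tuples from cloud audit logs,
    sorted by timestamp. Return the set of users who downloaded more
    than `max_files` distinct files within any `window_seconds` window."""
    per_user = {}
    alerts = set()
    for ts, user, filename in events:
        q = per_user.setdefault(user, deque())
        q.append((ts, filename))
        # Drop events that fell out of the sliding window.
        while q and ts - q[0][0] > window_seconds:
            q.popleft()
        if len({f for _, f in q}) > max_files:
            alerts.add(user)
    return alerts
```

Cloud providers expose the raw material for this (e.g. file-access audit logs); the hard part in practice is picking thresholds that separate a departing employee from a busy one.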

THE NEW NORMAL

Most organizations were caught unprepared by the Covid situation. Users had to work from home and connect to the organization's network or cloud from there, meaning they reach the internet with less control. An NDR with endpoint capabilities will also cover users at home, correlate user behavior with your network traffic, and detect threats.

A Quick Guide for Ransomware Protection

Unfortunately, the ransomware problem is growing every day, despite the many cases we hear about and the tens of articles and webinars published about it. In this post, I will try to explain the protection processes against ransomware. Then, in further posts, I will try to explain each step in more depth.

If you have already been exposed and your files are encrypted, there is not much left to do. So, it may be important to read these measures.

  1. Asset Management: You must know all the assets in your organization, especially those connecting to the internet, and you must know immediately when a new device connects to your network. Devices using Outlook are also important: a device may access the internet under restricted policies but still receive e-mail from outside the organization. Restriction policies on proxies and firewalls cannot work perfectly and always have problems with uncategorized or new websites. So, an asset management tool and a NAC solution are very important for managing devices.
  2. Do not use RDP: Remote Desktop Protocol is a common way for attackers to remotely connect to systems, move laterally, and deploy malware. Protocols like Telnet, SSH, SMB, and RDP should not be open to the internet. You should continuously scan your public IP addresses to check whether any of these protocols is open to the internet. If you still need to open one, pay attention to these:
    1. Keep local admin accounts safe with a PAM solution
    2. Change the default RDP port
    3. Implement IP restriction if possible
    4. Allow remote connections only through session recording systems
    5. Implement multi-factor authentication
    6. Activate Network Level Authentication (NLA) on devices. NLA provides a pre-authentication step and also protects the system against brute-force attacks
    7. Implement security policies via Group Policy, and deny local changes
  3. Disable administrative and hidden shares on clients:
  4. Block some file types in incoming e-mails: Block e-mails containing executable files. If there are file types you cannot block because of the business, you should use measures like a sandbox for incoming e-mails.
  5. Backups and regular backup tests: If you lose all your sensitive data, it is very important to have usable backups. First, separate and isolate your backup network from all the others, so that in a compromise the backup network stays safe. If you lose your backup data along with everything else, there is no way out other than paying the ransom.

Separating and isolating the backup network is a good start, but it is not enough. You must regularly test your backup data and be sure it works. If your backup is unusable when you need it, it only means you spent hundreds of gigabytes for nothing.

There is no system that protects 100% against ransomware, so backups become all the more critical.
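A small part of backup testing can even be automated: comparing checksums of source files against their copies. This sketch only proves the copies are intact and complete; a real test must also restore them onto working systems.

```python
import hashlib
import os

def sha256_file(path, chunk=1 << 20):
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify_backup(source_dir, backup_dir):
    """Compare every file under source_dir against its copy in backup_dir.
    Returns the sorted list of relative paths that are missing or differ."""
    bad = []
    for root, _, files in os.walk(source_dir):
        for name in files:
            src = os.path.join(root, name)
            rel = os.path.relpath(src, source_dir)
            dst = os.path.join(backup_dir, rel)
            if not os.path.exists(dst) or sha256_file(src) != sha256_file(dst):
                bad.append(rel)
    return sorted(bad)
```

Running a check like this on a schedule, from a host inside the isolated backup network, catches silently failed or corrupted backups long before you need them.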

  6. Patch your systems regularly: In particular, systems that are open to the internet should be patched quickly. For this, you should have test environments for all your critical systems, patch the test systems first, then act quickly on the production systems.
  7. Awareness: 91% of attacks begin with an e-mail. For an attacker, it is much easier to deceive a user than to find weaknesses and exploit them. Even if you have hundreds of measures against cyber attacks, if one of your users accepts and clicks a malicious e-mail, you can still be exposed.
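The continuous scan suggested in the "Do not use RDP" item can be sketched as a plain TCP connect check against your public IP addresses. Only run this against addresses you own; a real program would schedule it and alert on changes rather than print.

```python
import socket

# Ports that should never answer from the internet.
RISKY_PORTS = {22: "SSH", 23: "Telnet", 445: "SMB", 3389: "RDP"}

def exposed_ports(host, ports=RISKY_PORTS, timeout=2.0):
    """Try a TCP connect to each risky port on a host; any port that
    accepts the connection is reachable and needs to be closed or
    restricted. Returns a list of (port, protocol_name) tuples."""
    open_ports = []
    for port, name in ports.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append((port, name))
        except OSError:
            pass  # closed, filtered, or timed out
    return open_ports
```

A connect scan like this sees exactly what an internet attacker sees, which is why running it continuously against your own public ranges is such a cheap control.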

Third Party Connections’ Security

Do you want your partners to trust you blindly? Well, do you trust your third-party partners blindly? When adversaries get in, they always look for ways to reach more places. So, if one of your trusted third-party connections gets hacked, it is only a short time before the attackers find your connection and get inside, if you did not secure it.

Since 2018, we have seen attacks through third-party connections increase. Most of them happened through small organizations that provide support of some kind to larger ones. Because of these small organizations' low security budgets, securing their networks and PCs is very difficult. Most do not have a domain structure, network security devices, or even endpoint protection tools. What I saw while working with them is that their users are local admins on their laptops, protected only by an antivirus agent on a rarely patched machine. These laptops are used to connect to other organizations, and sometimes to store sensitive data about those organizations. They are very likely to get hacked, but you must not take on this risk while working with them.

Third Party Connection Management in Organizations

Especially in large organizations, when policies are not working properly, or there is no policy for third-party connections at all, staff turnover and sudden, fast-moving projects lead teams to create third-party connections however is easiest at the time. This creates an unmanaged third-party connection structure, and it gets worse day by day.

I remember spending at least four months fixing the third-party connections in a large organization: dozens of leased lines reaching directly into different internal networks, hundreds of S2S VPNs established years ago, certificates with low key sizes, and so on. The lack of even a basic third-party connections policy causes a huge waste of time and effort to fix later.

What to Do?

Whoever you are connecting to, or whoever is connecting to you, you should minimize the threats. All organizations are targets for hackers, and all of them can be hacked. You should not trust anybody else on security. Understand what security controls they apply in their organization; if they have weaknesses in detecting attacks made against themselves, that puts you at risk.

Create a 3rd party DMZ network. This is important because third-party PCs should not connect to your network directly through just any zone in your firewall; they are something you cannot trust directly. So, at a minimum, a 3rd party DMZ should be created to receive and control these types of connections. If there is no 3rd party zone and policy, then over a long period, through the activities described at the beginning, you will see many different third-party organizations connecting to your network from many different zones, and it becomes more unmanageable day by day. To begin with, I suggest creating a separate zone on the internet-facing firewall for leased line connections and controlling their policies there. A separate firewall should also be implemented for S2S VPN connections; receiving these connections on a different firewall and controlling them there is important.

You should use a vendor management program. It helps you reduce risk by collecting more and more information about your third-party connections, and ensures they comply with standards and regulations.

You should know what security controls, endpoint security (antivirus, EDR, encryption, etc.), and data leakage prevention methods your third party applies to its users. If you do not hand out laptops to the users who will connect to your network, the third-party organization's staff will mostly use their own or that company's PCs. That means these computers will be connected to your network most of the time and will contain some of your sensitive information. So, it is important to know whether they are protecting these PCs while working with you.

Screen recording is also a very useful tool. It is impossible to watch every consultant's actions on your network directly. Most of the time they work on your test servers in test zones, but unfortunately, sometimes they work directly in the production zone, or can reach it because of missing controls. A screen recording tool will be an important deterrent for you.

MFA is a must. Multi-factor authentication should be required to connect to your network; I mostly suggest time-based MFA tools. Whenever a security incident occurs in the third party's network, MFA will be important for keeping you secure.
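The time-based MFA mentioned above is usually TOTP (RFC 6238): both sides derive a short code from a shared secret and the current 30-second time step, so a stolen password alone is not enough. A compact sketch of the SHA-1 variant used by common authenticator apps:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant).
    `secret_b32` is the base32-encoded shared secret; `at` is a Unix
    timestamp (defaults to now)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((at if at is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code depends on the current time step, it expires within seconds, which is exactly why time-based tokens help when a third party's credentials leak.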

“MUST” Practices for AntiVirus

Last week, I heard that an organization did not include an antivirus agent in their PC image. They format the PC with their image, connect it to the network, and wait for SCCM to install the antivirus software. Also, for remote users working in the field, some contracted partners format the PCs, since these users cannot come to the company; the users then join the network via VPN after formatting and keep working. Meanwhile, the IT team waits for SCCM to install the antivirus software, but because of the VPN network, this fails most of the time, and the PC keeps working on the network for days.

While I was sharing this situation with some friends in the industry, some of them said it is a normal process for organizations. So, I wanted to write this article.

A few months ago, I shared a post about the decline of AV. It is true that AV software has not been very efficient in recent years, and there are many other measures that need to be taken to protect endpoints. However, most of those measures target APT attacks. As everyone says, and as I also touch on in that article, attackers' profiles and techniques have changed a lot since the times AV was popular and successful. Despite all this, nobody can say that AV is no longer necessary. Organizations do not only face attackers using highly advanced techniques and tools; there are still many script kiddies and people trying to learn hacking, always looking for easy vulnerabilities. There is a very good chance they will find you.

Another point about AV: thanks to the hash databases they download, AV agents can protect users from many malicious events even while offline, or while not connected to the office.

Moreover, most AV software is improving with behavioral and AI capabilities, so it can also detect and stop some phases of APT attacks.

I am also curious about your comments, but my opinion is that AV is still indispensable for all organizations. So, I want to list some best ("must") practices for using AV in an organization:

– AV software should be installed on all devices. Clients should be checked periodically for whether AV is present. If possible, a NAC solution should be deployed and PCs that do not have AV should be blocked.

– The AV solution should be centrally managed, so updates can be handled centrally and out-of-date clients can be tracked.

– Administrators should make sure all clients are sending logs properly. This is very important for responding to a suspicious situation quickly.

– AV software should be updated periodically. Administrators should also make sure that all clients are receiving the latest updates properly.

– AV software should be included in the regular PC and server images. When a PC is formatted and re-installed, it should include AV before connecting to the network.

– Users should not be able to disable the AV services and agent. Tamper protection and an uninstall password should be used, and the password should be stored in a password management system.

– Malicious files should be blocked and quarantined so administrators can analyze them.

– Audit logs should be collected properly. Administrators should log in to the software only with their own usernames; generic usernames should not be used.

– Too many exceptions should not be granted. If needed, exceptions should be given only as stated by the vendor.

– If included, host-based IDS should be enabled on the AV agent.
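The first practice, finding clients without AV, reduces to a diff between the asset inventory and the AV console's client list. Assuming you can export both as hostname lists (formats vary by product), a sketch looks like:

```python
def clients_without_av(inventory, av_console):
    """inventory: hostnames known to the asset-management/NAC system;
    av_console: hostnames reporting to the central AV console.
    Comparison is case-insensitive, since tools often disagree on case.
    Returns the sorted list of unprotected hosts."""
    known = {h.lower() for h in inventory}
    managed = {h.lower() for h in av_console}
    return sorted(known - managed)
```

Run on a schedule, a report like this closes exactly the gap described at the top of the article: freshly imaged PCs that joined the network before SCCM ever installed the agent.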

Is DLP Dead?

DLP is a technology we have used for more than a decade. Its starting point was protecting the intellectual property (IP) of organizations, and it became very popular across many sectors. Organizations spent, and are still spending, millions of dollars on DLP solutions to protect their private data. However, Gartner says of it: "They become an annoying or toothless technical control rather than a component in a powerful information risk management process." But why?

According to some surveys, professionals' biggest challenge is the difficulty of keeping policies up to date at the rate of business. Others are the inhibition of employee productivity caused by these policies, and limited data visibility. Too many false positives are also a very big problem for IT professionals.

Taking these step by step: the need for policies really is one of the biggest problems of DLP solutions, regardless of manufacturer. Before anything else, organizations have to know what data they must protect, which means knowing which data is sensitive for them. Most organizations started their DLP project without knowledge of their sensitive data. It is impossible to know what the sensitive data is without data classification. Again, most organizations learned this after implementing DLP, and started a data classification project maybe years later. And of course, merely starting or implementing a classification project is not enough to classify the data; it is a very broad and continuous process that needs wide awareness among users.

So, because of this obscurity about their own data, organizations took their policies from others' experiences instead of their own needs; industry experience became very important at this step. They created and ran the policies hoping they would protect their data.

At the same time, knowing what to protect is not enough; you must also know how to protect that data. If you do not know which channels people can use to leak data, protecting it is impossible. These channels, too, were added to policies based on industry experience. Even knowing that a required policy might be missing, security risk management professionals ran what they had, with the thought of protecting as much as they could. Everybody knew this was not enough to protect all the data, so the slogan became: "DLP prevents the user from doing wrong things; it does not prevent data leakage by malicious users."

Another weakness of DLP is its focus on content to identify data. Even with recent features like AI, it identifies a file by its content, using pattern matching (like regex) or exact matching; very limited context examination is used. So, DLP is again not effective against malicious users, since the content can be changed very easily before leaking it. Also, in a living organization, the content of sensitive data inevitably changes, which means policies must be constantly updated. But as I said before, new policies mean a greater chance of inhibiting employee productivity, more time spent optimizing rules, and more exclusions, and more exclusions mean weaker data protection. More focus on context is needed to protect the data.
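To make the content-matching point concrete, here is roughly how a DLP credit-card rule works: a regex to find candidate numbers, plus a Luhn checksum to cut false positives. Note how trivially it is defeated once the content is encoded or reshaped, which is exactly the weakness described above.

```python
import re

# 13-16 digits, optionally separated by spaces or hyphens.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(number):
    """Luhn checksum: filters out most random digit strings."""
    digits = [int(d) for d in number][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text):
    """Return digit strings that look like valid card numbers."""
    hits = []
    for m in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(digits)
    return hits
```

Base64-encode the text, split the number across cells, or screenshot it, and the rule sees nothing; that is why content-only inspection fails against a determined insider.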

In big organizations, false positives can be the biggest problem, given the number of employees, sensitive data items, and policies. The large number of incidents produced every day requires more time, and certainly more employees, to review. If you survey the teams reviewing DLP incidents, they will tell you hundreds of incidents get ignored. Actually, I believe it is a good outcome if the organization catches one or two real incidents in a year. The organization hopes the captured incidents yield an acceptable ROI; meanwhile, it can never be sure that nobody leaked any data.

Every IT professional who has used DLP knows there are many other annoying aspects. For example, if you do not want someone to leak data via an endpoint channel like a printer or USB, every PC needs an agent installed, and of course these agents must work as they should. This is a very big challenge for IT personnel managing endpoint solutions: it requires focusing on very strange situations, sometimes spending too much time on one PC when there is a problem, and continuously testing the agent. Not only incident analysis, but also managing the DLP solution itself requires a lot of resources.

One last thing I want to mention: DLP inspects only at the point of egress. On the endpoint, that is the printer or USB; in the network layer, internet access; and in the e-mail channel, e-mails sent outside the organization. Data protection must also cover the inside of the network, such as file servers. Since we have seen that protection at the egress point is difficult and still leaves ways to leak data (because of policies, an agent with a problem, changing the content of the data, etc.), this point becomes very important.

As a result, DLP is not as efficient a solution as expected. It must be a continuous process, not a single project on its own. Despite all this, I do not believe DLP will die. At the very least, in many countries and industries there are regulations that make DLP compulsory, requiring a DLP solution that covers the endpoint, network, and e-mail channels, and we still have no more efficient solution on its own. But organizations should consider supporting their DLP solutions with others such as UEBA or DaBA. DaBA solutions in particular can provide complete visibility of the movement of sensitive data across the whole network. Even if users do not try to leak data outside the organization (in which case egress inspection can never catch it), it is very important to know who is using this data inside the organization, so the data can be tracked with a need-to-know approach: if someone does not need certain data for their job, they should not be able to reach it. UEBA and DaBA solutions can provide this visibility and add a new layer to the data protection mechanism.