A threat actor calling herself ‘0xFF’ advertised a new RAT on HackForums.
According to the actor, the RAT supports Windows (amd64, i386, arm, arm64), Linux (amd64, i386, arm, arm64), Darwin (macOS) (amd64 (Intel), arm64 (M1)) and Android (bin) (amd64, i386, arm, arm64).
This multi-OS RAT advertises the features below:
– No need to lower AV settings to keep it running
– Everything is compiled automatically for you
– Remote non-interactive shell
– No need to remember the differences between OSes when doing simple tasks
– Downloading files from an external server to the host
– Uploading files from the computer to the tool’s panel
– Taking screenshots, either automatically (every x seconds) or manually
– Custom scripts that execute arbitrary code on demand on the targeted devices
– Notifications when devices go online/offline, when a new device connects, or when a command finishes executing
– Custom installer
– Commands that run on boot and on each new connection
The actor also mentioned that they can create postloads for customers.
The RAT appears to have several licensing options, none of them expensive. Tools like this make it easier for people without technical knowledge or software skills to carry out attacks on their own, which puts institutions under more pressure every day.
Decryption keys for Egregor, Sekhmet and Maze were shared by someone claiming to be the developer of all three malware families.
The keys were published on the BleepingComputer forum. According to the forum post, this was a planned leak and is not related to the recent law enforcement actions against attackers. The post also claims that none of the team members will ever return to ransomware attacks and that the source code of the malware has been destroyed.
The post contained a link to download a 7zip file with four archives containing the Maze, Egregor and Sekhmet decryption keys, as well as the source code of the M0yv malware used by the operators. Because the content is malicious, the link was removed from the post; it may still be possible to obtain the files by contacting the poster.
Meanwhile, some experts verified that the decryption keys work.
AT&T Alien Labs announced last week that the source code of the BotenaGo malware has been published on GitHub. BotenaGo was discovered and named by Alien Labs in November 2021, and according to their post, the source code had been published on October 16th, 2021.
Only a few AV vendors (3/60) were already detecting this malware, and it is now even more dangerous: with the source code public, it is easy to modify the code and create new variants that bypass detection.
The post also includes a source code analysis, IoCs of the malware and recommended actions.
Modules typically run directly in PowerShell. The “Get-Module” command can be used to see the imported modules.
The “Get-Module -ListAvailable” command shows the modules that are available.
To use additional modules, we first have to import them. Once a module is imported, all of its commands become available. We will add PowerSploit as an example. The PowerSploit project is no longer supported, but we may sometimes still want to use its capabilities. To import it, we first download the package from here. After downloading the module, we need to copy it to one of the module folders on the PC. There are different module locations, and we can list them with the “$Env:PSModulePath” command.
We create a folder called PowerSploit and copy all files from the downloaded package into it.
The “Import-Module PowerSploit” command will import the module, and all of its commands will become available for us to use.
The “Get-Command -Module PowerSploit” command lists all commands of this module.
The “Get-Help <command>” command shows the usage of a command.
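Putting the steps above together, a typical session looks roughly like this. The source and destination paths in the copy step are examples; use any folder listed in $Env:PSModulePath, and substitute the command name in the last line with whichever PowerSploit command you want to study.

```powershell
# List the modules already imported into the session
Get-Module

# List all modules available on the system
Get-Module -ListAvailable

# Show the folders PowerShell searches for modules
$Env:PSModulePath

# Copy the downloaded PowerSploit files into a module folder
# (the paths here are examples; adjust them to your environment)
Copy-Item -Recurse .\PowerSploit-master\* "$HOME\Documents\WindowsPowerShell\Modules\PowerSploit"

# Import the module and explore its commands
Import-Module PowerSploit
Get-Command -Module PowerSploit
Get-Help Get-GPPPassword   # usage of one of the module's commands
```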
Redline is a free tool for investigating malicious activity through memory and file analysis. It has many features, but in this post we will only cover searching for IoCs on an endpoint with Redline.
In the previous post, we created an IoC to detect WinSCP.exe. Now we will search for it with Redline as an example.
We start with the “Create an IOC Search Collector” option on Redline’s main page. We browse to the folder containing the IoCs we want to search for on the PC. We have only one IoC here, but if there are more IoCs in the folder, all of them will appear in the “Indicators” tab.
Then we create a folder for the IoC Collector and, after clicking the “Next” button, select this folder. Redline creates the IoC Collector there. We will now run the RunRedlineAudit.bat file from the command line. Once the bat file finishes, it creates a folder called “Sessions” in the same directory and saves its output there.
Run the “RunRedlineAudit.bat” file and wait for it to finish, then open the “Sessions” folder. Each IoC sweep is placed in its own folder called “AnalysisSessionX”. This was our first sweep, so we open the “AnalysisSession1” folder. Our IoC report is in the “AnalysisSession1.mans” file. We click on this file, and it takes some time to generate the report.
When the IoC report is generated, we can see it in Redline’s “IOC Reports” tab. As you can see in the screenshot, our WinSCP Indicator IoC got hits. Clicking on it shows why the IoC matched: here, our IoC caught the file by its MD5 hash value and file name. Clicking the “View Details” button shows more details about the hit.
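The kind of check Redline performs here — matching a file against an indicator by MD5 hash and file name — can be sketched in a few lines of Python. The indicator values below are placeholders for illustration, not the real WinSCP hashes:

```python
import hashlib
from pathlib import Path

# Hypothetical indicator: a file name plus MD5 hash (placeholder values,
# not a real WinSCP indicator)
IOC = {"file_name": "WinSCP.exe", "md5": "d41d8cd98f00b204e9800998ecf8427e"}

def md5_of(path: Path) -> str:
    """Return the MD5 hex digest of a file, read in chunks."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_ioc(path: Path, ioc: dict) -> bool:
    """A file is a hit only if both the name and the MD5 hash match."""
    return path.name == ioc["file_name"] and md5_of(path) == ioc["md5"]
```

Requiring both conditions mirrors what the IoC editor from the previous post does when two terms are combined with AND.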
Effective threat hunting is critical because it is hard to think like an attacker and to search for the unknown in an enterprise network. This post may help organizations hunt threats effectively and successfully.
Knowledge of Topology and Environment
The purpose of threat hunting is to find anomalies and their sources on the network and the endpoints. A threat hunter should therefore know what is normal in order to recognize what is not.
From a risk management point of view, the critical assets – servers, applications, data – should be known so they can be protected more effectively. Knowing the environment, the threat hunter knows the critical assets and hunts accordingly. If the network is segmented, it is also critical to know the network topology and the networks – or VLANs – of these critical assets.
It is also necessary to know which application runs on which operating system, so the threat hunter knows the weaknesses of each system and can search accordingly.
Effective Endpoint Management
EDRs are the most widely used tools for threat hunting. Organizations should make sure that endpoint security tools are installed on all endpoints and should detect when they are removed or stopped. Asset management is more than a CMDB: it must be managed by security teams who understand how critical it is when an endpoint lacks its security tooling.
Threat intelligence is one of the most important feeds for threat hunting. Threat hunters need the most recent intelligence and IoCs so they can hunt the latest threats. Many malicious files are produced and detected every day, which creates a lot of noise in the intel. To avoid this noise, hunters should focus on intelligence relevant to their organization’s sector and geolocation, and integrate those IoCs with the SIEM, EDR, etc.
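The noise-reduction idea above can be sketched as a simple filter over a feed before it is pushed to the SIEM or EDR. The feed structure, tag names and values here are assumptions made up for illustration; real feeds (STIX/TAXII, vendor APIs) have richer schemas:

```python
# Illustrative sector/region values for "our" organization (assumed)
SECTOR = "finance"
REGION = "EU"

# Hypothetical raw feed entries; real feeds carry far more metadata
feed = [
    {"ioc": "bad1.example.com", "type": "domain",
     "sectors": ["finance"], "regions": ["EU"]},
    {"ioc": "bad2.example.com", "type": "domain",
     "sectors": ["energy"], "regions": ["US"]},
    {"ioc": "00000000000000000000000000000000", "type": "md5",
     "sectors": ["finance", "retail"], "regions": ["EU", "US"]},
]

def relevant(entry: dict, sector: str, region: str) -> bool:
    """Keep an IoC only if it is tagged with our sector and our region."""
    return sector in entry["sectors"] and region in entry["regions"]

# Only the filtered list is forwarded to SIEM/EDR integrations
filtered = [e["ioc"] for e in feed if relevant(e, SECTOR, REGION)]
print(filtered)
```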
We all know that attackers keep creating new tactics and techniques. The main reason is that while security professionals in organizations have to deal with many different things, attackers can focus solely on their target. Even with valuable, experienced personnel, if they are juggling different organizational missions, it will be difficult for them to think like a hacker and detect the unknown in the network. Threat hunters should focus on their mission, build their methodology and hunt. So there should be dedicated personnel for threat hunting.
Coordination Across the Organization
Yes, threat hunters have an unusual mission – think like a hacker and search across the network – but they must not work alone. A threat hunter should have good relationships with key personnel in IT departments, such as network and system admins, help desk personnel and so on. Through these relationships they better understand the network, the systems and, more importantly, how the company and its people do business. From the organization’s perspective, when a threat hunter finds a weakness during the hunt, they inform the relevant IT personnel for remediation. This teamwork leads to success in the remediation phase of incidents and weaknesses.
Intelligence is critical for hunting known threats, but against zero-day threats hunters must rely on familiarity with attackers’ TTPs. Threat hunters should also stay aware of updated or newly observed TTPs. Only with this knowledge can hunters act like an attacker. TTPs are at the top of the Pyramid of Pain (defined by David Bianco).
To uncover anomalies or malicious activity, threat hunters should use advanced tools like EDR, NDR, SIEM, FIM, etc. Properly configured, these tools help the hunter find abnormal activities. In other posts, we tried to explain why they should be used.
Although threat hunting is nothing new, it has been a very hot topic lately. Even with perimeter and endpoint security devices and a SIEM collecting and correlating their logs, waiting for incidents to come to you is not a good strategy. Without threat hunting, dwell time can exceed 150 days, and that is no longer acceptable. While attackers work proactively and develop new techniques day by day, security teams need to be more proactive too. Threat hunting is the most proactive element of an organization’s security structure and improves its security posture.
Dwell time is the dirty metric nobody wants to talk about in cyber security. It signifies the amount of time threat actors go undetected in an environment, and the industry stats about it are staggering. Source: Extrahop
In its simplest definition, threat hunting is detecting abnormal activities on endpoints and the network. But what should we look for when hunting threats?
What to Hunt?
Threat hunting is a continuous process. Hunters should check anything that could be evidence of an incident.
Processes: Processes are important components of operating systems. Adversaries may inject malicious code into hijacked processes, so hunters should check processes and their child processes regularly.
Binaries: Hunters should check binaries by their checksums, names and other characteristics.
Network: Network activity toward specific destinations and anomalies in the network should be checked.
Registry: Hunters should check registry key additions and modifications.
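The binaries check above – comparing files on disk against known-bad checksums – can be sketched as a simple sweep. The hash set below is a placeholder standing in for hashes gathered from threat intelligence:

```python
import hashlib
from pathlib import Path

# Known-bad SHA-256 hashes, e.g. from a threat-intel feed (placeholder value)
KNOWN_BAD = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def sweep(root: Path) -> list[Path]:
    """Return every file under root whose SHA-256 matches a known-bad hash."""
    return [p for p in root.rglob("*")
            if p.is_file() and sha256_of(p) in KNOWN_BAD]
```

In practice an EDR performs this sweep across the whole fleet, but the logic is the same: hash, compare, report.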
For continuous hunting, organizations need threat hunters in their CSIRT. As mentioned before, the proactive approach is what separates threat hunters from analysts. In smaller organizations SOC analysts may also work on threat hunting, but a threat hunter needs more specialized skills than an analyst. In larger organizations it is important to have a dedicated threat hunting leader and team. This team should have detailed knowledge about:
OSs: The threat hunting team should know the OSs the organization uses, including process structures, files, permissions and the registry, depending on the OS. This matters because malicious files and attackers make their changes there. A threat hunter needs to understand what is normal and what is not; something abnormal could be a sign of an intrusion. To build this knowledge, baselines can be created for all critical systems. These baselines help distinguish the normal from the anomalous.
Apps: Threat hunters should know the applications used in the organization, as well as the perimeter and endpoint security devices and applications the organization uses.
Business: The threat hunting team should know the organization’s business, so they can follow adversaries targeting the organization’s sector and geographical location. It is also important to know the third-party companies the organization works with and the communication channels with them.
Network: In a large, segmented network, it is important to know where the critical assets are.
TTPs: IoCs are important components of hunting, but they only detect the “known knowns”. TTPs are at the top of the Pyramid of Pain (defined by David Bianco), and the techniques and tactics of the adversaries threatening the organization’s sector and location especially should be known.
Threat Hunting Tools: The CSIRT plan needs to state which tools and techniques can be used for threat hunting, and threat hunters should know these tools and techniques.
IR&H Plan: Threat hunting is only one step of the proactive approach. If threat hunters find an intrusion or anomaly in the systems, they need to know the next step: who should they inform, and what should be done?
Threat Intelligence: Threat intelligence is one of the most important feeds of threat hunting. Threat hunters need the most recent intelligence and IoCs so they can hunt the latest threats.
EDR: Threat hunters need IoCs, but they also need to know how to use them. After gathering the most recent IoCs from TI platforms, an IoC sweep must be run on the endpoints.
NDR: Just like endpoints, network traffic also needs to be checked against the latest IoCs. To do this, the CSIRT needs to collect all east-west and north-south network traffic. NDR devices with AI capabilities can also detect anomalies in the network.
SIEM: Depending on the hunt’s scope, the threat hunter may need to check IPS/IDS, proxy, DNS, firewall or other tools’ logs. Because these logs come from different sources, the CSIRT needs to collect and correlate them in a SIEM; fed with the latest IoCs, the logs become more meaningful.
FIM: We said that baselines must be created for critical systems. FIM solutions help the CSIRT create baselines for OSs and alert analysts when an unauthorized change is made.
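The core of what a FIM solution does – record a baseline of file hashes, then compare the current state against it – can be sketched in a few lines of Python (directory layout and file names below are illustrative):

```python
import hashlib
from pathlib import Path

def file_hash(path: Path) -> str:
    """SHA-256 of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_baseline(root: Path) -> dict[str, str]:
    """Record a hash for every file under root – the 'known good' state."""
    return {str(p.relative_to(root)): file_hash(p)
            for p in sorted(root.rglob("*")) if p.is_file()}

def compare(root: Path, baseline: dict[str, str]) -> dict[str, list[str]]:
    """Report files added, removed, or modified since the baseline."""
    current = build_baseline(root)
    return {
        "added": sorted(set(current) - set(baseline)),
        "removed": sorted(set(baseline) - set(current)),
        "modified": sorted(k for k in current.keys() & baseline.keys()
                           if current[k] != baseline[k]),
    }
```

A real FIM product adds authorization context (which changes were approved), watches in real time, and covers registry keys and permissions as well, but the baseline-and-diff idea is the same.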
Last week, a friend called me with some bad news about a company. The company was looking for help: it had become a victim of Egregor ransomware and was trying to figure out what to do, since the attacker had taken all of its data, encrypted it, and given the company three days to pay 500k dollars, threatening to publish the data publicly otherwise. And the publication threat was not the only problem – the company had also lost all of its private data. But how is that possible?
Ransomware has been the biggest problem of the cyber world for some years. We have heard about it, worked on it, and seen far too many bitcoins paid over these years. There are tens (or maybe hundreds) of webinars, talks and articles trying to help people stay safe against ransomware. As long as the weakest link is human, exposure to ransomware is always possible, but it is not too difficult to confine it to a small area.
The company I mentioned above was a chemical company and naturally holds a lot of private data, such as formulas. When I say they lost all their data, I mean they lost their backups too: since the backup network was not isolated, the backups were also encrypted. They did have some backup tapes, but could not use them because they had never tested whether the tapes worked – and, of course, they did not work when the company needed them.
There are some basic prevention steps against ransomware. Briefly: user awareness; regular phishing tests; not only an anti-spam product but also a sandbox or similar technology against malicious emails; EDR to respond faster to malicious behavior; NDR to spot anomalies in the network; backing up data and testing those backups regularly; isolating the backup network so infiltrated attackers cannot harm the backups; isolating private data and applying need-to-know; limiting users’ internet access; and more. The list looks long, but most items do not require much expenditure. Yet if you do not invest in professionals and in technology, you are simply squandering your money – and you can never put a price on lost reputation, secret formulas and data.
All these measures can cost a lot, and I can understand if a company cannot invest in all of them. But as I said above, this company’s backup network was not isolated and could be reached from every other network. And, as I learned, they only used antivirus software that was not up to date, and I am sure they did not track whether every PC and server had it installed. Most of these measures are not expensive, and at a minimum every company needs to invest in talented security professionals in order to save money. None of these measures costs more than 500k$ plus lost reputation plus publicly published private data. Investing in security is not wasting money; it directly saves money. Everyone needs to understand this without living through it.
With the growth of remote work during Covid-19, clients are now more independent, with laptops and mobile phones used for personal as well as organizational purposes. Not only the workers but also the computers and the data are now outside the organization and beyond most protection layers, such as the firewall and IPS. Vulnerabilities and attacks keep surfacing, but remote users’ connections and VPNs make it hard for IT teams to patch and protect the clients.
Host-based firewalls help security professionals in many ways. First, they control incoming and outgoing traffic, making them a very critical defense layer. For example, security pros may want to block all inbound connections to a client host initiated from outside – a very basic protection against some kinds of malware.
Some sensitive applications, like SWIFT, should be isolated from the rest of the organization’s network. But creating VLANs for a few PCs or servers is not always an effective solution. In situations like that, host-based firewalls are true saviors: the sensitive PCs and servers can be isolated with host-based firewalls. Since most antivirus agents also include a host-based firewall feature, this solution becomes easier and more logical.
However, organizations should not make a habit of it. In large organizations with thousands of PCs and servers, isolation via host-based firewalls can easily turn into a nightmare, since hundreds of rules have to be written. If these rules are not maintained properly – also considering turnover in the administration team – a tangle of rules can harm the PCs and the network more than it protects them. Every rule for each small group inevitably makes the rule list more complex, and mistakes become more likely when adding new rules.
So even if host-based firewalls are important and valuable solutions, they should be used only as needed, and should not become the default answer to every isolation need.
Last week, I heard that an organization does not include an antivirus agent in their PC image. They format a PC with the image, then connect it to the network and wait for SCCM to install the antivirus software. For remote users working in the field who cannot come to the office, contracted partners format the PCs; the users then join the network via VPN and keep working. Meanwhile, the IT team waits for SCCM to install the antivirus, but over the VPN it fails most of the time, and the PC keeps working on the network for days.
When I shared this situation with some friends in the industry, some of them said it is a normal process in organizations. So I wanted to write this article.
A few months ago, I shared a post about the decline of AV. It is true that AV software has not been very effective in recent years, and many other measures are needed to protect the endpoints. However, most of those measures target APT attacks. As everyone says – and as I noted in that article – attackers’ profiles and techniques have changed a lot since the times when AV was popular and successful. But despite all this, nobody can say that AVs are no longer necessary. Organizations do not only face attackers using highly advanced techniques and tools; there are still many script kiddies and people trying to learn hacking, always looking for easy vulnerabilities to exploit. It is very likely they will find you.
Another point about AV: thanks to the hash databases they download, they can protect users from many malicious events even while offline or disconnected from the office.
Moreover, most AV software is improving with behavioral and AI capabilities, so it can also detect and stop some phases of APT attacks.
I am curious about your comments, but my opinion is that an AV is still indispensable for every organization. So here are some best (must) practices for using AV in an organization:
– AV software should be installed on all devices, and clients should be checked periodically for whether they have it. If possible, a NAC solution should be deployed and PCs without AV should be blocked.
– The AV solution should be centrally managed, so updates can be controlled centrally and out-of-date clients can be tracked.
– Administrators should make sure all clients send their logs properly; this is very important for responding quickly to a suspicious situation.
– AV software should be updated periodically, and administrators should also be sure that all clients receive the latest updates properly.
– AV software should be included in the standard PC and server images. When a PC is formatted and re-installed, it should have AV before connecting to the network.
– Users should not be able to disable the AV services and agent. Tamper protection and an uninstall password should be used, and the password should be stored in a password management system.
– Malicious files should be blocked and quarantined so administrators can analyze them.
– Audit logs should be collected properly. Administrators should log into the software only with their own usernames; generic usernames should not be used.
– Exceptions should be kept to a minimum. If needed, they should be granted only as stated by the vendors.
– If included, the host-based IDS should be enabled on the AV agent.
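The first recommendation – periodically checking which clients are missing the AV agent – boils down to reconciling the asset inventory against the hosts reporting to the AV console. A minimal sketch, assuming both lists can be exported as host-name sets:

```python
def missing_av(inventory: set[str], av_reporting: set[str]) -> set[str]:
    """Hosts in the asset inventory that never reported to the AV console."""
    return inventory - av_reporting

# Illustrative host lists; in practice these come from a CMDB export and
# the AV management console's reporting API
inventory = {"pc-001", "pc-002", "srv-01", "srv-02"}
av_reporting = {"pc-001", "srv-01"}

print(sorted(missing_av(inventory, av_reporting)))  # hosts to remediate
```

Run on a schedule and fed into ticketing or NAC enforcement, a check like this turns the "periodically followed" requirement into an automated control.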