Threat Intelligence Best-Practice Tips - Unite.AI


Threat Intelligence Best-Practice Tips




A lot of people say threat intelligence (TI) tastes good, but few understand how to cook it. Even fewer know which processes to engage for TI to work and deliver value. And only a negligible number know how to choose a feed provider, where to check an indicator for false positives, and whether it’s worthwhile to block a domain that a colleague has sent over WhatsApp.

We had two commercial APT subscriptions, ten information exchanges, about a dozen free feeds, and an extensive list of Tor exit nodes. We also had a couple of powerful reverse engineers, masterful PowerShell scripts, the Loki scanner, and a paid VirusTotal subscription. Not that a security incident response center won’t work without all of these, but if you are up to catching complex attacks, you have to go the whole hog.

What particularly concerned me was the potential automation of checking for indicators of compromise (IOCs). There is arguably nothing as immoral as artificial intelligence replacing a human in an activity that requires thinking. However, I realized that my company would face that challenge sooner or later as its customer base grew.

Over several years of continuous TI work, I have made plenty of missteps, and I’d like to share some tips that will help newbies avoid common mistakes.

Tip 1. Don’t pin your hopes on catching malware by hashes: most malware is polymorphic these days

Threat intelligence data comes in different formats and manifestations. It may include IP addresses of botnet Command and Control centers, email addresses involved in phishing campaigns, and articles on evasion techniques that APT groups are about to start leveraging. Long story short, these can be different things.

In order to sort this whole mess out, David Bianco suggested using what’s called the Pyramid of Pain. It describes a correlation between different indicators that you use to detect an attacker and the amount of “pain” you will cause the attacker if you identify a specific IOC.

For instance, if you know the MD5 hash of a malicious file, the file can be detected easily and accurately. However, this won’t cause the attacker much pain, because changing a single bit of the file completely changes its hash.
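To see how fragile hash-based indicators are, here is a minimal Python sketch (the payload bytes are made up for illustration): appending a single byte to a file produces an entirely unrelated MD5 digest.

```python
import hashlib

def md5_hex(data: bytes) -> str:
    """Return the hex MD5 digest of a byte string."""
    return hashlib.md5(data).hexdigest()

# A made-up "malware" body and a copy with one extra byte appended.
original = b"MZ\x90\x00fake-malware-body"
modified = original + b"\x00"

# The two digests share no meaningful relationship: the tiniest change
# to the file invalidates a hash-based IOC completely.
```

This is exactly why polymorphic malware, which mutates itself on every infection, slips past hash-only detection.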

Tip 2. Try using the indicators that the attacker will find technically complicated or expensive to change

Anticipating the question of how to find out whether a file with a given hash exists in our enterprise network, I’ll say the following: there are different ways. One of the easiest methods is to use a solution that maintains the database of MD5 hashes of all executable files within the enterprise.
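As an illustration of that idea, here is a hedged Python sketch of such an inventory; the file patterns and function names are my own, not any particular product’s API.

```python
import hashlib
from pathlib import Path

def hash_executables(root: str, patterns=("*.exe", "*.dll")) -> dict[str, str]:
    """Walk a directory tree and map each executable's path to its MD5 hash."""
    inventory = {}
    for pattern in patterns:
        for path in Path(root).rglob(pattern):
            if path.is_file():
                inventory[str(path)] = hashlib.md5(path.read_bytes()).hexdigest()
    return inventory

def find_ioc(inventory: dict[str, str], ioc_md5: str) -> list[str]:
    """Return all paths whose hash matches a known-bad MD5."""
    return [p for p, h in inventory.items() if h == ioc_md5.lower()]
```

A commercial solution does the same thing continuously and at enterprise scale, but the lookup logic is no more complicated than this.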

Let’s go back to the Pyramid of Pain. As opposed to detection by a hash value, it’s more productive to identify the attacker’s TTPs (tactics, techniques, and procedures). This is harder to do and requires more effort, but you will inflict more pain on the adversary.

For example, if you know that an APT crew targeting your sector of the economy sends phishing emails with *.HTA files attached, then a detection rule that looks for such email attachments will hit the attacker below the belt. They will have to change their spamming tactics and perhaps even shell out for 0-day or 1-day exploits, which aren’t cheap.
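In a real mail gateway this would be expressed in the gateway’s own rule language; purely as an illustration of the logic, here is a Python sketch (the extension list is an assumption to be tailored to your threat model):

```python
# Extensions associated with common phishing tradecraft -- an assumed
# starting list, not an authoritative one.
SUSPICIOUS_EXTENSIONS = {".hta", ".js", ".vbs"}

def is_suspicious_attachment(filename: str) -> bool:
    """Flag attachments whose extension matches the watch list.
    Double extensions like 'invoice.pdf.hta' and trailing dots are
    normalized before checking."""
    name = filename.lower().rstrip(". ")
    return any(name.endswith(ext) for ext in SUSPICIOUS_EXTENSIONS)
```

Note the normalization step: attackers routinely disguise payloads behind double extensions, so matching on the final extension after lowercasing is the minimum viable check.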

Tip 3. Don’t rely too heavily on detection rules created by someone else: you still have to check them for false positives and fine-tune them

As you get down to creating detection rules, there is always a temptation to use readily available ones. Sigma is one example: a free repository of detection rules written in a SIEM-independent format, with converters that translate rules from the Sigma language into ElasticSearch queries as well as Splunk or ArcSight rules. The repository includes hundreds of rules. It seems like a great thing, but the devil, as always, is in the details.

Let’s have a look at one of the mimikatz detection rules. This rule detects processes that tried to read the memory of the lsass.exe process. Mimikatz does this when trying to obtain NTLM hashes, and the rule will identify the malware.

However, it is critical for us – experts who don’t only detect but also respond to incidents – to make sure it is actually a malicious actor. Unfortunately, there are numerous legitimate processes that read lsass.exe memory (e.g., some antivirus tools). Therefore, in a real-world scenario, a rule like that will cause more false positives than benefits.

I’m not willing to accuse anyone in this regard – all solutions generate false positives; it’s normal. Nevertheless, threat intelligence specialists need to understand that double-checking and fine-tuning the rules obtained from both open and closed sources is still necessary.
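The fine-tuning described above often boils down to suppressing known-good readers of lsass.exe memory. Here is a hedged sketch of that triage step; the event field name and the whitelist entries are assumptions for illustration, not a vetted baseline for any real network.

```python
# Processes legitimately reading lsass.exe memory -- examples only; build
# your own baseline from your environment, don't copy this list.
LEGITIMATE_LSASS_READERS = {
    r"c:\program files\windows defender\msmpeng.exe",
    r"c:\windows\system32\taskmgr.exe",
}

def triage_lsass_access(event: dict) -> str:
    """Classify a 'process read lsass.exe memory' event after whitelisting."""
    source = event.get("SourceImage", "").lower()
    if source in LEGITIMATE_LSASS_READERS:
        return "suppressed"   # known-good reader, don't page the analyst
    return "alert"            # everything else still fires
```

The borrowed rule stays intact; the whitelist sits in front of it and absorbs the legitimate noise, which is precisely the double-checking work the rule’s author cannot do for you.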

Tip 4. Check domain names and IP addresses for malicious behavior not only at the proxy server and the firewall but also in DNS server logs – and pay attention to both successful and failed resolution attempts

Malicious domains and IP addresses are the optimal indicators in terms of detection simplicity and the amount of pain you inflict on the attacker. However, they only appear easy to handle at first sight. At the very least, you should ask yourself where to collect the domain logs.

If you restrict yourself to checking proxy server logs only, you can miss malicious code that queries the network directly or requests a non-existent domain name generated with a DGA (domain generation algorithm), not to mention DNS tunneling – none of these will show up in the logs of a corporate proxy server. Criminals can also use VPN services with advanced features or build custom tunnels.
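One practical payoff of collecting failed resolutions: DGA-based malware typically burns through many non-existent domains before finding a live one, so bursts of NXDOMAIN answers from a single host are a useful signal. A minimal sketch, assuming a hypothetical log record schema with `client` and `rcode` fields:

```python
from collections import Counter

def nxdomain_burst_hosts(dns_log: list[dict], threshold: int = 5) -> set[str]:
    """Return hosts with an unusually high number of failed (NXDOMAIN)
    lookups -- a common side effect of DGA malware cycling through
    generated domains. The log field names are an assumed schema."""
    failures = Counter(
        rec["client"] for rec in dns_log if rec.get("rcode") == "NXDOMAIN"
    )
    return {host for host, count in failures.items() if count >= threshold}
```

The threshold is a tuning knob: set it above the background NXDOMAIN rate your users generate through typos and stale bookmarks.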

Tip 5. Monitor or block – decide which one to choose only after finding out what kind of indicator you discovered and acknowledging the possible consequences of blocking

Every IT security expert has faced a nontrivial dilemma: to block a threat or monitor its behavior and start investigating once it triggers alerts. Some instructions unambiguously encourage you to choose blocking, but sometimes doing so is a mistake.

If the indicator of compromise is a domain name used by an APT group, don’t block it – start monitoring it instead. Present-day targeted attacks typically presuppose an additional covert communication channel that can only be discovered through in-depth analysis. Automatic blocking will prevent you from finding that channel; furthermore, the adversaries will quickly realize that you have noticed their shenanigans.

On the other hand, if the IOC is a domain used by crypto-ransomware, it should be blocked immediately. But don’t forget to monitor all failed attempts to query the blocked domains – the ransomware’s configuration may include several Command and Control server URLs. Some of them may not be in the feeds and therefore won’t be blocked. Sooner or later, the infection will reach one of them to obtain the encryption key, which will instantly be used to encrypt the host. The only reliable way to make sure you have blocked all the C&Cs is to reverse-engineer the sample.
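Monitoring those failed attempts can be as simple as joining the DNS query log against the blocklist and grouping by host. A sketch under the same assumed log schema as above (`client` and `query` fields are hypothetical names):

```python
def hosts_probing_blocked(dns_log: list[dict], blocklist: set[str]) -> dict[str, set[str]]:
    """Map each internal host to the blocked C&C domains it tried to reach.
    Repeated failed lookups of a blocked ransomware domain suggest an
    infected host still hunting for a live C&C. Assumed log schema."""
    probes: dict[str, set[str]] = {}
    for rec in dns_log:
        query = rec.get("query", "").lower()
        if query in blocklist:
            probes.setdefault(rec["client"], set()).add(query)
    return probes
```

Every host this report surfaces is a candidate for isolation and forensics, regardless of whether the block itself held.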

Tip 6. Check all new indicators for relevance before monitoring or blocking them

Keep in mind that threat data is generated by humans who are prone to error, or by machine learning algorithms that aren’t error-proof either. I have witnessed different providers of paid reports on APT groups’ activity accidentally adding legit samples to the lists of malicious MD5 hashes. Given that even paid threat reports contain low-quality IOCs, those obtained via open-source intelligence should definitely be vetted for relevance. TI analysts don’t always check their indicators for false positives, which means the customer has to do the checking job for them.

For instance, if you have obtained an IP address used by a new iteration of TrickBot, then before feeding it to your detection systems you should ascertain that it doesn’t belong to a shared hosting service or to your own address ranges. Otherwise, you will have a hard time dealing with numerous false positives whenever users visit completely benign web pages residing on that hosting platform.
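A basic relevance gate for IP indicators can be expressed with the standard library alone. The ranges below are placeholders (RFC 1918 internal space plus a documentation range standing in for a known shared-hosting block); substitute your own inventory.

```python
import ipaddress

# Ranges that should never be blocked outright: your own infrastructure
# plus shared-hosting/CDN ranges you know about. Example values only.
OWN_AND_SHARED_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),       # internal address space
    ipaddress.ip_network("203.0.113.0/24"),   # stand-in shared-hosting range
]

def ioc_ip_is_actionable(ioc: str) -> bool:
    """Reject indicator IPs that fall inside your own or shared ranges."""
    addr = ipaddress.ip_address(ioc)
    return not any(addr in net for net in OWN_AND_SHARED_RANGES)
```

Indicators that fail this gate aren’t necessarily wrong, but they need a human decision rather than automatic blocking.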

Tip 7. Automate all threat data workflows as much as possible. Start by fully automating false-positive checks via a warning list, while instructing the SIEM to monitor the IOCs that don’t trigger false positives

To avoid a flood of false positives from threat intelligence obtained from open sources, you can run a preliminary search for these indicators in warning lists. To create such lists, you can use the top 1,000 websites by traffic, addresses of internal subnets, and the domains of major service providers such as Google, Amazon AWS, and MS Azure. It’s also a great idea to implement a solution that dynamically rebuilds warning lists from the top domains and IP addresses that company employees have accessed during the past week or month.
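The split itself is trivial once the warning list exists; this sketch (function name and return shape are my own) separates incoming domain IOCs into those safe to hand to the SIEM and those needing an analyst’s review:

```python
def split_by_warning_list(iocs: set[str], warning_list: set[str]) -> tuple[set[str], set[str]]:
    """Split incoming domain IOCs into (safe_to_monitor, needs_review).
    A hit on the warning list -- top-traffic sites, big cloud providers,
    internal domains -- goes to manual review instead of the SIEM."""
    needs_review = {ioc for ioc in iocs if ioc.lower() in warning_list}
    return iocs - needs_review, needs_review
```

The hard part, as the next paragraph notes, is keeping the warning list itself current, not applying it.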

Creating these warning lists can be problematic for a medium-sized SOC, so it makes sense to consider adopting so-called threat intelligence platforms.

Tip 8. Scan the entire enterprise for host indicators, not only the hosts connected to SIEM

As a rule, not all hosts in an enterprise are plugged into SIEM. Therefore, it’s impossible to check them for a malicious file with a specific name or path by only using the standard SIEM functionality. You can take care of this issue in the following ways:

  1. Use IOC scanners such as Loki. You can use SCCM to launch it on all enterprise hosts and then forward the results to a shared network folder.
  2. Use vulnerability scanners. Some of them have compliance modes allowing you to check the network for a specific file in a specific path.
  3. Write a PowerShell script and run it via WinRM.
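On Windows hosts the natural vehicle for option 3 is PowerShell; the core logic, shown here as a Python sketch for illustration, is just hashing the suspect paths and comparing against the known-bad list:

```python
import hashlib
from pathlib import Path

def check_host_for_iocs(paths: list[str], bad_md5s: set[str]) -> list[str]:
    """Check a list of suspect file paths on the local host and report
    those whose MD5 matches a known-bad hash. Missing files are skipped.
    In practice a script like this is pushed to every host (e.g. via
    SCCM or WinRM) and the results collected centrally."""
    hits = []
    for p in paths:
        path = Path(p)
        if path.is_file():
            digest = hashlib.md5(path.read_bytes()).hexdigest()
            if digest in bad_md5s:
                hits.append(p)
    return hits
```

Whichever delivery mechanism you pick, the point of the tip stands: the check must reach every host, not only the ones already feeding the SIEM.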

This article isn’t intended to be a comprehensive knowledge base on how to do threat intelligence right. Judging from our experience, though, following these simple rules will allow newbies to avoid critical mistakes while handling different indicators of compromise.

Alex is a cybersecurity researcher with over 20 years of experience in malware analysis. He has strong malware removal skills, and he writes for numerous security-related publications to share his security experience.