Reliable Endpoint Monitoring Is Not Possible With Narrow Indicators Of Compromise – Chuck Leaver

Presented By Chuck Leaver And Written By Dr Al Hartmann Of Ziften Inc.

 

The Breadth Of The Indicator – Broad Versus Narrow

A thorough report of a cyber attack will normally supply details of indicators of compromise (IoCs). Frequently these are narrow in scope, referencing a specific attack group as observed in a specific attack on one organization over a limited period of time. Typically these narrow indicators are specific artifacts of an observed attack that on their own constitute convincing evidence of compromise. For that particular attack they offer high specificity, but often at the cost of low sensitivity to similar attacks that use different artifacts.

Essentially, narrow indicators offer very limited scope, which is why they exist by the billions in ever-growing databases of malware signatures, suspicious network addresses, malicious registry keys, file and packet content snippets, filepaths, intrusion detection rules, and so on. Ziften's continuous endpoint monitoring system aggregates a number of these third-party databases and threat feeds into the Ziften Knowledge Cloud to take advantage of known-artifact detection. These detection elements can be applied in real time as well as retrospectively. Retrospective application is essential given the short-lived nature of these artifacts: attackers continually obscure the observable details of their attacks to frustrate narrow IoC detection. This is why a continuous monitoring service must archive monitoring results over a long period (long relative to industry-reported hacker dwell times) to provide a sufficient lookback horizon.
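The retrospective use of an archive can be sketched in a few lines. This is a minimal illustration, not Ziften's implementation; the hosts, hashes, and dates are hypothetical, and real systems match far more artifact types than file hashes:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class EndpointEvent:
    timestamp: datetime
    host: str
    sha256: str          # hash of the observed image (illustrative)

def retrospective_matches(archive, ioc_hashes, horizon_start):
    """Replay archived endpoint events against a newly received IoC feed.

    Events older than the archive's lookback horizon are gone, which is
    why the horizon must exceed typical attacker dwell time.
    """
    return [e for e in archive
            if e.timestamp >= horizon_start and e.sha256 in ioc_hashes]

# A hash published today matches activity recorded weeks earlier.
archive = [
    EndpointEvent(datetime(2015, 1, 10), "hr-laptop-07", "aa11"),
    EndpointEvent(datetime(2015, 2, 2),  "dc-server-01", "bb22"),
]
hits = retrospective_matches(archive, {"aa11"}, datetime(2015, 1, 1))
```

The point of the sketch is the lookback: the feed entry arrives long after the activity it describes, so detection only succeeds if the archive still holds the event.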

Narrow IoCs have substantial detection value, but they are largely ineffective at detecting new attacks by proficient hackers. New attack code can be pre-tested against common enterprise security products in a lab environment to confirm that no detectable artifacts are reused. Security products that operate purely as black/white classifiers, i.e. by delivering an explicit verdict of malicious or benign, suffer from this weakness and are easily evaded. The defended organization may be thoroughly compromised for months or years before any detectable artifacts for that specific attack instance are identified (after extensive investigation).

In contrast to the ease with which attack artifacts can be obscured by common hacker toolkits, the methods and strategies used by attackers, their modus operandi, have persisted over many decades. Common techniques such as weaponized websites and documents, new service installation, vulnerability exploitation, module injection, modification of sensitive folders and registry areas, newly installed tasks, memory and drive corruption, credential compromise, malicious scripting, and many others are broadly shared. Proper system logging and monitoring, combined with security analytics that focus attention on the highest-risk observations, can detect a great deal of this characteristic attack activity. This removes the attacker's opportunity to pre-test the evasiveness of malicious code, since the risk quantification is not black and white but nuanced shades of gray. In particular, all endpoint risk is relative and varies across every network/user environment and period of time, and that environment (and its temporal dynamics) cannot be replicated in any lab. The standard hacker concealment methodology is foiled.
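The "shades of gray" idea can be made concrete with a toy scoring rule. The indicator names and weights below are illustrative assumptions, not Ziften's actual analytics; the point is that each additional generic indicator raises a graded score rather than flipping a binary verdict:

```python
# Illustrative generic behavioral indicators and made-up weights.
WEIGHTS = {
    "new_service_installed": 0.3,
    "write_to_sensitive_path": 0.4,
    "module_injection": 0.5,
    "credential_dump_pattern": 0.6,
}

def risk_score(observed_indicators):
    """Combine per-indicator weights into a 0..1 score.

    Uses the complement-product rule, so every extra indicator pushes
    the score higher without ever producing a hard yes/no verdict.
    """
    score = 1.0
    for name in observed_indicators:
        score *= 1.0 - WEIGHTS.get(name, 0.0)
    return 1.0 - score

low  = risk_score(["new_service_installed"])                      # ~0.3
high = risk_score(["new_service_installed", "module_injection"])  # ~0.65
```

Because the output is a ranking rather than a verdict, an attacker cannot pre-test code in a lab for a guaranteed "benign" result; where a given score lands relative to the rest of the environment depends on that environment.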

In future posts we will examine Ziften endpoint risk analysis in more detail, along with the important relationship between endpoint security and endpoint management. “You can’t protect what you do not manage, you can’t manage what you do not measure, you can’t measure what you do not track.” Organizations get breached because they have less oversight and control of their endpoint environment than their attackers do. Look out for future posts…

Chuck Leaver – Continuous Endpoint Monitoring And The Carbanak Case Study Part 3

Presented By Chuck Leaver And Written By Dr Al Hartmann

 

Part 3 in a 3 part series

 

Below are excerpts of indicators of compromise (IoCs) from the technical reports on the Anunak/Carbanak APT attacks, with a discussion of how each would be discovered by the Ziften continuous endpoint monitoring service. The Ziften service concentrates on generic indicators of compromise that have stayed consistent over decades of hacker attacks and cyber security experience; such IoCs can be identified for any operating system, including Linux, OS X, and Windows. Specific indicators of compromise also exist that reveal C2 infrastructure or particular attack code instances, but these are short-lived and not generally reused in fresh attacks; billions of such artifacts exist in the security world, with thousands added every day. The generic IoCs are embedded in the Ziften security analytics for each supported operating system, while the specific IoCs are applied via the Ziften Knowledge Cloud through subscriptions to a variety of industry threat feeds and watchlists that aggregate them. Both have value and help triangulate attack activity.

1. Exposed Vulnerabilities

Excerpt: All observed cases used spear phishing emails with Microsoft Word 97–2003 (.doc) files or CPL files attached. The .doc files exploit both Microsoft Office (CVE-2012-0158 and CVE-2013-3906) and Microsoft Word (CVE-2014-1761).

Comment: While not really an IoC, critical unpatched vulnerabilities are a significant hacker exploit path and a large red flag that raises the risk score (and the SIEM priority) for the endpoint, particularly if other indicators are also present. Such vulnerabilities are signs of lax patch management and vulnerability lifecycle management, which weakens the overall cyber defense posture.

2. Suspicious Locations

Excerpt: Command and Control (C2) servers located in China have been identified in this campaign.

Comment: The geolocation of endpoint network touches and per-geography scoring both contribute to the risk score that drives up the SIEM priority. There are legitimate reasons for contact with Chinese servers, and some companies may have sites located in China, so contacts should be validated with spatial and temporal anomaly checks. IP address and domain information should be attached to the resulting SIEM alarm so that SOC triage can be carried out quickly.
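One simple way such a geography signal can feed a risk score is to weight each contacted country by how rarely the endpoint's peer group has touched it. This is a hypothetical sketch with made-up baseline counts, not a description of Ziften's scoring:

```python
from collections import Counter

def geo_rarity(contact_countries, peer_baseline):
    """Score how unusual each contacted country is for a peer group.

    peer_baseline: Counter of country -> prior contact count across
    peer endpoints. rarity = 1 / (1 + prior sightings): a never-seen
    country scores 1.0, a routine destination approaches 0.
    """
    return {c: 1.0 / (1 + peer_baseline.get(c, 0))
            for c in contact_countries}

baseline = Counter({"US": 900, "DE": 80})      # illustrative history
scores = geo_rarity({"US", "CN"}, baseline)    # CN stands out sharply
```

The rarity value would be one input among several; a first-ever contact with an unusual geography raises the endpoint's score and the SIEM priority rather than triggering an outright block.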

3. New Binaries

Excerpt: Once the remote code execution vulnerability is successfully exploited, it installs Carbanak on the victim’s system.

Comment: Any new binary is inherently suspicious, but not all of them should generate alerts. Image metadata should be examined for a recognizable pattern, for example a new app, or a new version of an existing app from a known vendor on a plausible file path for that vendor. Hackers will try to spoof whitelisted apps, so signing data can be compared, along with file size and filepath, to filter out the obvious cases.
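The vetting described above can be sketched as a profile check. The vendor name, signer, and paths below are invented for illustration; a real check would also compare signing certificates, hashes, and file sizes:

```python
# Hypothetical known-vendor profile (all values are made up).
KNOWN_VENDOR_PROFILE = {
    "signer": "Example Software Inc.",
    "path_prefix": "C:\\Program Files\\Example\\",
}

def looks_like_known_vendor(binary, profile):
    """True only when signer AND install path both match the profile;
    a spoofed binary usually fails at least one of the checks."""
    return (binary["signer"] == profile["signer"]
            and binary["path"].startswith(profile["path_prefix"]))

legit = {"signer": "Example Software Inc.",
         "path": r"C:\Program Files\Example\app.exe"}
spoof = {"signer": "Example Software Inc.",
         "path": r"C:\Windows\System32\com\app.exe"}  # wrong location
```

A binary that matches on every axis can be filtered as routine; a partial match, like a familiar name in an unfamiliar location, is exactly the case that deserves an analyst's attention.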

4. Unusual Or Sensitive Filepaths

Excerpt: Carbanak copies itself into “%system32%\com” with the name “svchost.exe” and the file attributes system, hidden, and read-only.

Comment: Any write into the System32 filepath is suspicious, since it is a sensitive system directory, so it is subjected to immediate anomaly checking. A classic anomaly is svchost.exe, a critical system process image, appearing in the unusual location of the com subdirectory.
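This particular anomaly generalizes to a simple rule: a well-known system image name written anywhere other than its canonical directory. A minimal sketch (the canonical table would be much larger in practice):

```python
from pathlib import PureWindowsPath

# Canonical home of well-known system images (one entry for brevity).
CANONICAL = {"svchost.exe": PureWindowsPath(r"C:\Windows\System32")}

def misplaced_system_image(written_path):
    """Flag a known system image written outside its canonical
    directory -- e.g. svchost.exe landing in System32\com."""
    p = PureWindowsPath(written_path)
    canon = CANONICAL.get(p.name.lower())
    return canon is not None and p.parent != canon

alert = misplaced_system_image(r"C:\Windows\System32\com\svchost.exe")
ok    = misplaced_system_image(r"C:\Windows\System32\svchost.exe")
```

Note the rule keys on the image name and its location, not on any Carbanak-specific artifact, so it survives the attacker's renaming of everything else.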

5. New Autostarts Or Services

Excerpt: To make sure that Carbanak has autorun privileges the malware creates a new service.

Comment: A new autostart or service is common with malware and is always examined by the analytics. Anything of low prevalence is suspicious, and if checking the image hash against industry watchlists shows it is unknown to most antivirus engines, suspicion rises further.

6. Low Prevalence File In High Prevalence Folder

Excerpt: Carbanak creates a file with a random name and a .bin extension in %COMMON_APPDATA%\Mozilla where it stores commands to be executed.

Comment: This is a classic example of “one of these things is not like the others” that is easy for the security analytics to check in a continuously monitored environment. And this IoC is entirely generic; it has nothing to do with which filename or which directory is created. Even though the technical security report lists it as a specific IoC, it trivially generalizes beyond Carbanak to future attacks.
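The generic form of the check is: across many endpoints that all carry the same common folder, which files appear on almost none of them? A sketch with invented filenames and a 10% prevalence threshold:

```python
from collections import Counter

def low_prevalence_files(folder_listings, threshold=0.1):
    """folder_listings: {host: set of filenames in one common folder}.
    Return files present on fewer than `threshold` of the hosts --
    the 'one of these things is not like the others' check."""
    n_hosts = len(folder_listings)
    counts = Counter(f for files in folder_listings.values() for f in files)
    return {f for f, c in counts.items() if c / n_hosts < threshold}

# 20 hosts share the folder; one host holds an extra random .bin file.
listings = {f"host{i}": {"prefs.js", "cache.db"} for i in range(20)}
listings["host3"] = {"prefs.js", "cache.db", "kx81.bin"}
odd = low_prevalence_files(listings)
```

The outlier is found without knowing its name in advance, which is precisely why the indicator generalizes to attacks that have not happened yet.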

7. Suspect Signer

Excerpt: In order to render the malware less suspicious, the most recent Carbanak samples are digitally signed.

Comment: Any suspect signer is treated as suspicious. In one case a signer supplied only an anonymous Gmail address, which does not inspire confidence, and the risk score rises for such an image; in other cases no email address is provided at all. Signers can be readily tallied and a Pareto analysis performed to separate the more trusted from the less trusted signers. A binary from a less trusted signer found in a more sensitive folder is highly suspicious.
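A signer Pareto analysis can be as simple as ranking signers by how many observed images they account for. The signer names and counts below are invented; the 80% coverage cutoff is an illustrative convention, not a fixed rule:

```python
from collections import Counter

def pareto_trusted_signers(image_signers, coverage=0.8):
    """Rank signers by image count; signers that together cover
    `coverage` of all signed images form the high-trust 'head',
    and everything in the long tail earns extra scrutiny."""
    counts = Counter(image_signers)
    total = sum(counts.values())
    head, running = set(), 0
    for signer, n in counts.most_common():
        if running / total >= coverage:
            break
        head.add(signer)
        running += n
    return head

signers = (["Microsoft"] * 70 + ["Adobe"] * 20
           + ["Oracle"] * 9 + ["gmail-anon"] * 1)   # made-up tally
trusted = pareto_trusted_signers(signers)
```

A tail signer is not automatically malicious, but a tail signer's binary on a sensitive filepath is the combination worth alerting on.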

8. Remote Administration Tools

Excerpt: There appears to be a preference for the Ammyy Admin remote administration tool for remote control. It is believed that the hackers used this tool because it is frequently whitelisted in victims’ environments, as a result of being used regularly by administrators.

Comment: Remote administration tools (RATs) always raise suspicion, even when whitelisted by the organization. Anomaly checks should determine whether each new RAT instance is temporally and spatially consistent with normal use. RATs are subject to abuse: hackers prefer to use an organization’s own RATs precisely to avoid detection, so they should not be given a pass simply because they are whitelisted.

9. Remote Login Patterns

Excerpt: Logs for these tools indicate that they were accessed from two different IPs, probably used by the hackers, and located in Ukraine and France.

Comment: Remote logins are always suspect, because all hackers are presumed to be remote. They also figure heavily in insider attacks, since the insider does not want to be identified at his own system. Remote addresses and login-time patterns should be checked for anomalies, which would reveal low-prevalence usage (relative to peer systems) plus any suspect geography.
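Two of those anomaly axes, off-hours timing and unfamiliar geography, can be sketched together. The business-hours window, baseline country set, and login records are all illustrative assumptions:

```python
def login_anomalies(logins, usual_hours=range(8, 19),
                    usual_countries=frozenset({"US"})):
    """Flag remote logins outside the account's usual hours or from a
    country not previously seen for it (illustrative thresholds)."""
    flagged = []
    for login in logins:
        odd_time = login["hour"] not in usual_hours
        odd_geo = login["country"] not in usual_countries
        if odd_time or odd_geo:
            flagged.append((login, odd_time, odd_geo))
    return flagged

logins = [
    {"user": "ops1", "hour": 10, "country": "US"},
    {"user": "ops1", "hour": 3,  "country": "UA"},  # off-hours + new geo
]
alerts = login_anomalies(logins)
```

A login that trips both checks at once, as in the second record, earns a much higher score than either signal alone.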

10. Atypical IT Tools

Excerpt: We have also discovered traces of various tools used by the hackers inside the victim’s network to gain control of additional systems, such as Metasploit, PsExec or Mimikatz.

Comment: As sensitive apps, IT tools should always be examined for anomalies, since many hackers subvert them for malicious purposes. Metasploit might legitimately be run by a penetration tester or vulnerability researcher, but such instances should be rare. This is a prime example where an unusual-observation report, vetted by security staff, would lead to corrective action. It also highlights how blanket whitelisting fails to help identify suspicious activity.

 

Part Two Of The Carbanak Case Study Reveals Why Continuous Monitoring Of Endpoints Is So Effective – Chuck Leaver

Presented By Chuck Leaver And Written By Dr Al Hartmann

 

Part 2 in a 3 part series

 

Continuous Endpoint Monitoring Is Really Effective

 

Capturing and blocking malicious code before it can compromise an endpoint is great. But this technique is largely inadequate against attacks that have been pre-tested to evade exactly this style of defense. The real issue is that these covert attacks are conducted by skilled human hackers, while conventional endpoint defense is an automated process relying largely on standard antivirus technology. Human intelligence is more innovative and adaptable than machine intelligence and will outmatch automated machine defenses. This echoes the lesson of the Turing test: automated defenses are straining to rise to the intellectual level of a skilled human adversary. Artificial intelligence and machine learning are not yet sophisticated enough to fully automate cyber defense, so the human hacker wins while the infiltrated are left counting their losses. We do not live in a science fiction world where machines out-think people, so do not assume a security software suite will automatically take care of all your problems and prevent every attack and data loss.

The only real way to stop a determined human hacker is with an equally determined human cyber defender. To engage your IT Security Operations Center (SOC) personnel this way, they need full visibility into network and endpoint operations. That visibility will not come from standard endpoint antivirus solutions, which are designed to stay quiet unless capturing and quarantining malware. This conventional approach renders the endpoints opaque to security personnel, and hackers exploit that opacity to conceal their attacks. The opacity extends backwards and forwards in time: your security personnel have no idea what was running across your endpoint population in the past, or right now, or what to expect in the future. If persistent security staff find clues that demand a forensic look back to uncover attacker activity, your antivirus suite cannot help; it took no action at the time, so no events were recorded.

Continuous endpoint monitoring, by contrast, is always working: supplying real-time visibility into endpoint operations, providing forensic look-backs so that newly emerging evidence of attack can be acted upon and earlier signs discovered, and establishing a baseline of normal operating patterns so it knows what to expect and can flag irregularities in the future. Beyond raw visibility, continuous endpoint monitoring provides informed visibility, applying behavioral analytics to detect operations that appear abnormal. Anomalies are continually analyzed and aggregated by the analytics and reported to SOC staff through the organization’s security information and event management (SIEM) system, flagging the most worrying suspicious anomalies for security personnel’s attention and action. Continuous endpoint monitoring amplifies and scales human intelligence; it does not replace it. It is a bit like the old Sesame Street game, “One of these things is not like the others.”

A child can play this game. It is easy because most items (high prevalence) look like each other, while one or a few (low prevalence) differ and stand out. The anomalous actions taken by cyber criminals have remained quite consistent across decades of hacking; the Carbanak technical reports that listed the indicators of compromise are good examples of this and are discussed below. When continuous endpoint monitoring security analytics surface these patterns, it is simple to recognize something suspicious or unusual. Security personnel can perform rapid triage on the abnormal patterns, quickly reaching a yes/no/maybe determination that separates unusual-but-known-good activity from malicious activity, or from activity that needs additional tracking and deeper forensic investigation to settle.
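The yes/no/maybe triage outcome can be pictured as thresholds on a graded anomaly score. The cutoffs below are illustrative, not prescribed values; in practice an analyst's judgment, not a threshold, makes the final call:

```python
def triage(anomaly_score, benign_below=0.2, malicious_above=0.8):
    """Map a graded 0..1 anomaly score to the SOC's three-way triage
    decision (thresholds are illustrative assumptions)."""
    if anomaly_score < benign_below:
        return "no"       # unusual but recognized as good
    if anomaly_score > malicious_above:
        return "yes"      # treat as malicious, respond now
    return "maybe"        # keep tracking, run deeper forensics

decisions = [triage(s) for s in (0.1, 0.5, 0.9)]
```

The wide "maybe" band is the important design choice: rather than forcing a binary verdict, it routes ambiguous activity into continued monitoring and forensics.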

A hacker cannot pre-test attacks against this kind of defense. Continuous endpoint monitoring security has a non-deterministic risk analytics component (which alerts on suspect activity) as well as a non-deterministic human component (which performs alert triage). Depending on current activity, the endpoint population mix, and the experience of the security personnel, developing attack activity may or may not be discovered. That is the nature of cyber warfare, and there are no guarantees. But if your cyber defenders are equipped with continuous endpoint monitoring analytics and visibility, they will have an unfair advantage.

The Case For Continuous Endpoint Monitoring Part One Of The Carbanak Case Study – Chuck Leaver

Presented By Chuck Leaver And Written By Dr Al Hartmann

 

Part 1 in a 3 part series

 

Carbanak APT Background Particulars

A billion-dollar bank raid, targeting more than a hundred banks throughout the world and carried out by a group of unknown cyber criminals, has been in the news. The attacks on the banks began in early 2014 and have been expanding around the world. Most of the victims suffered devastating infiltrations lasting months and spanning numerous endpoints before experiencing financial loss. Most had implemented security measures, including network and endpoint security software, yet these provided little warning or defense against the attacks.

Several security companies have produced technical reports about the attacks, codenamed either Carbanak or Anunak, and these reports list the indicators of compromise that were observed. The companies include:

Fox-IT of Holland
Group-IB from Russia
Kaspersky Lab of Russia

This post will serve as a case study of the cyber attacks and address:

1. Why standard endpoint and network security was unable to detect and defend against the attacks.
2. Why continuous endpoint monitoring (as provided by the Ziften solution) would have warned early of the endpoint attacks and triggered a response to prevent data loss.

Traditional Endpoint Security And Network Security Is Inadequate

Built on a legacy security model that relies excessively on blocking and prevention, standard endpoint and network security does not offer a balanced strategy of blocking, prevention, detection, and response. It would not be difficult for any cyber criminal to pre-test attacks against the limited number of traditional endpoint and network security products in use, to be sure an attack would go undetected. Many of the hackers researched the security products in place at the victim organizations and became adept at breaking through undetected. The criminals knew that most of these products react only at the moment of an event and otherwise do nothing. This means normal endpoint operation remains mostly opaque to IT security personnel, so malicious activity (already pre-tested by the hackers to evade detection) stays masked. After an initial breach, the malicious software can spread to reach users with higher privileges and the more sensitive endpoints. This can easily be accomplished through credential theft, where no malware is required and traditional IT tools (whitelisted by the victim organization) are driven by attacker-created scripts. Detectable malware on the endpoints is therefore unnecessary, and no red flags are raised. Standard endpoint security software is simply too reliant on looking for malware.

Conventional network security can be manipulated in a similar way. Hackers test their network activities first to avoid being spotted by widely distributed IDS/IPS rules, and they carefully observe normal endpoint operation (on endpoints already compromised) to hide their network activity within typical transaction periods and typical traffic patterns. New command and control infrastructure is created that appears on no network address blacklist, at either the IP or domain level. There is very little here to give the hackers away. However, more astute network behavioral assessment, especially when correlated with endpoint context (discussed later in this series), can be far more effective.

But this is no time to give up hope. Would continuous endpoint monitoring (as offered by Ziften) have supplied an early warning of the endpoint hacking, begun the process of stopping the attacks, and avoided data loss? Find out in part two.