Second Part Of Why Edit Distance Is Important – Chuck Leaver

Written By Jesse Sampson And Presented By Ziften CEO Chuck Leaver


In the first post on edit distance, we looked at hunting for malicious executables with edit distance (i.e., the number of single-character changes it takes to make two text strings match). Now let's look at how we can use edit distance to hunt for malicious domains, and how we can build edit distance features that can be combined with other domain name features to pinpoint suspicious activity.
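
For example, "zyften.com" is a single character substitution away from "ziften.com". Here is a quick illustration, assuming Vertica's built-in EDIT_DISTANCE function (equivalents for other databases are noted in the code section at the end of this post):

    -- One substitution (y -> i) separates the strings, so the distance is 1.
    SELECT EDIT_DISTANCE('zyften.com', 'ziften.com');

    -- A length-normalized variant (one common choice; Part 1's exact formula
    -- may differ) scales the distance into the range [0, 1].
    SELECT EDIT_DISTANCE('zyften.com', 'ziften.com')::FLOAT
           / GREATEST(LENGTH('zyften.com'), LENGTH('ziften.com'));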

Case Study Background

What are bad actors trying to do with malicious domains? It may be merely registering a near-miss spelling of a common domain name to fool careless users into viewing ads or picking up adware. Legitimate sites are increasingly catching on to this tactic, sometimes called typosquatting.

Other malicious domains are the product of domain generation algorithms (DGAs), which can be used to do all kinds of nefarious things, like evading countermeasures that block known compromised sites or overwhelming domain name servers in a distributed denial-of-service (DDoS) attack. Older versions use randomly generated strings, while more advanced ones add tricks like injecting common words to further confuse defenders.

Edit distance can help with both use cases; here is how. First, we exclude common domains, since these are usually safe, and a list of popular domain names also provides a baseline for finding anomalies. One excellent source is Quantcast. For this discussion, we will stick to domain names and ignore subdomains (e.g., ziften.com, not www.ziften.com), as sketched below.
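
As a rough sketch of that cleanup step, a regular expression can keep just the last two labels of each observed hostname. The hostnames table here is hypothetical, and this naive approach mishandles multi-part TLDs such as co.uk, which require a public-suffix list:

    -- Reduce each hostname to its registered domain (www.ziften.com -> ziften.com).
    SELECT DISTINCT
           REGEXP_SUBSTR(LOWER(hostname), '[^.]+\.[^.]+$') AS domain
    FROM hostnames;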

After data cleaning, we compare each candidate domain name (input data observed in the wild by Ziften) to its potential neighbors in the same top-level domain (the last part of a domain name – classically .com, .org, and so on, though now it can be almost anything). The fundamental task is to find the nearest neighbor in terms of edit distance. By finding domains that are one edit away from their nearest neighbor, we can quickly spot typo-ed domains. By finding domains far from their nearest neighbor (the normalized edit distance we introduced in Part 1 is useful here), we can likewise find domains that are anomalous in edit distance space.

What Were The Results

Let's take a look at how these results appear in real life. Be careful browsing to these domains, since they might host malicious content!

Here are a few likely typos. Typosquatters target well-known domain names, since there are more chances someone will visit them. Several of these are flagged as suspicious by our threat feed partners, but there are some false positives as well, with charming names like "wikipedal".

[Figure: candidate typo-squatting domains one edit away from popular domains, including "wikipedal"]

Here are some strange-looking domain names that are far from their nearest neighbors.

[Figure: anomalous domain names far from their nearest neighbors in edit distance]

So now we have produced two useful edit distance metrics for hunting. Not only that, we have three features to potentially add to a machine-learning model: rank of nearest neighbor, distance from nearest neighbor, and whether the domain is edit distance 1 from its nearest neighbor, indicating a risk of typo trickery. Other features that pair well with these include lexical features such as word and n-gram distributions, entropy, and string length – and network features like the number of failed DNS requests.

Simplified Code You Can Play With

Here is a simplified version of the code to play with! It was developed on HP Vertica, but this SQL should work on most modern databases. Note that Vertica's EDIT_DISTANCE function may go by other names elsewhere (e.g., levenshtein in Postgres or UTL_MATCH.EDIT_DISTANCE in Oracle).

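Here is a minimal sketch of the idea, assuming a hypothetical candidate_domains table holding each cleaned domain and its top-level domain; the production query surely differs:

    -- Hypothetical input: candidate_domains(domain, tld), cleaned as described
    -- above, with the popular baseline domains included for comparison.
    WITH pairs AS (
        SELECT a.domain AS domain,
               b.domain AS neighbor,
               EDIT_DISTANCE(a.domain, b.domain) AS dist
        FROM candidate_domains a
        JOIN candidate_domains b
          ON a.tld = b.tld              -- compare only within the same TLD
         AND a.domain <> b.domain
    ),
    ranked AS (
        SELECT domain,
               neighbor,
               dist,
               -- One common normalization; Part 1's exact formula may differ.
               dist::FLOAT / GREATEST(LENGTH(domain), LENGTH(neighbor)) AS norm_dist,
               ROW_NUMBER() OVER (PARTITION BY domain ORDER BY dist) AS rnk
        FROM pairs
    )
    SELECT domain,
           neighbor,
           dist,
           norm_dist,
           CASE WHEN dist = 1 THEN 1 ELSE 0 END AS possible_typo
    FROM ranked
    WHERE rnk = 1                       -- keep each domain's nearest neighbor
    ORDER BY norm_dist DESC;            -- distant outliers float to the top

Note that the all-pairs self-join is quadratic in the number of domains, so in practice you would put a trimmed baseline list (e.g., the Quantcast domains) on one side of the join.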

Environments That Are Inadequately Managed Cannot Be Totally Secure – Chuck Leaver

Written by Chuck Leaver, Ziften CEO


If your enterprise computing environment is not properly managed, there is no way it can be truly secure. And you cannot effectively manage those complex enterprise systems unless you can be confident that they are secure.

Some might call this a chicken-and-egg situation, where you don't know where to begin. Should you start with security? Or should you begin with systems management? That's the wrong way to think about it. Consider it instead like Reese's Peanut Butter Cups: It's not chocolate first. It's not peanut butter first. Instead, both are blended together – and treated as a single delicious treat.

Many companies, I would argue too many, are structured with an IT management department reporting to a CIO, and a security management group reporting to a CISO. The CIO team and the CISO team barely know each other, talk to each other only when absolutely necessary, have separate budgets, certainly have different goals, read different reports, and use different management platforms. On a day-to-day basis, what constitutes a task, a concern, or an alert for one team flies completely under the other team's radar.

That's not good, because it forces both the IT and security teams to make assumptions. The IT team assumes that all assets are secure unless someone tells them otherwise; for instance, they assume that devices and applications have not been compromised, that users have not escalated their privileges, and so on. Likewise, the security team assumes that the servers, desktops, and mobile devices are working properly, that operating systems and apps are up to date, that patches have been applied, etc.

Since the CIO and CISO teams aren't talking to each other, don't understand each other's roles and goals, and aren't using the same tools, those assumptions may not be correct.

And again: you can't have a secure environment unless that environment is properly managed – and you can't manage that environment unless it's secure. To put it another way: an environment that is not secure makes everything the IT organization does suspect and irrelevant, and means you cannot know whether the information you are seeing is accurate or manipulated. It might all be fake news.

How to Bridge the IT / Security Gap

How to bridge that gap? It sounds simple, but it can be challenging: ensure that there is an umbrella covering both the IT and security teams. Somewhere, both IT and security report to the same person or organization. It might be the CIO, it might be the CFO, it might be the CEO. For the sake of argument, let's say it's the CFO.

If the company does not have a secure environment and there's a breach, the value of the brand and the company may drop to nothing. Likewise, if the users, devices, infrastructure, applications, and data aren't managed well, the company cannot work effectively, and its value drops. As we've discussed, if it's not properly managed, it cannot be secured, and if it's not secured, it cannot be well managed.

The fiduciary responsibility of senior executives (like the CFO) is to protect the value of company assets, which means making certain that IT and security talk to each other, understand each other's priorities, and, where possible, see the same reports and data – filtered and displayed to be meaningful to their specific areas of responsibility.

That's the thinking we adopted in the design of our Zenith platform. It's not a security management tool with IT capabilities, and it's not an IT management tool with security capabilities. No, it's a Peanut Butter Cup, built equally around chocolate and peanut butter. To be less confectionery about it: Zenith is an umbrella that gives IT teams what they need to do their jobs, and gives security teams what they need as well – without coverage gaps that could undermine assumptions about the state of enterprise security and IT management.

We have to make sure that our business's IT infrastructure is built on a secure foundation – and also that our security is implemented on a well-managed base of hardware, infrastructure, software, and users. Otherwise, we can't operate at peak efficiency or with full fiduciary responsibility.

Endpoint Devices Being Offline Doesn’t Mean They Shouldn’t Be Tracked – Chuck Leaver

Written By Roark Pollock And Presented By Ziften CEO Chuck Leaver


A survey recently completed by Gallup found that 43% of employed US citizens worked remotely for at least some of their time in 2016. Gallup, which has been surveying telecommuting trends in the USA for nearly a decade, continues to see more workers working outside of traditional offices, and more of them doing so for more days of the week. And, of course, the number of connected devices the typical employee uses has grown as well, which helps drive the convenience of, and preference for, working away from the office.

This mobility undoubtedly makes for happier, and one hopes more productive, employees, but the complications these trends create for both security and systems operations teams should not be overlooked. IT asset discovery, IT systems management, and threat detection and response functions all benefit from real-time and historical visibility into user, device, application, and network connection activity. And to be truly effective, endpoint visibility and monitoring must work regardless of where the user and device are operating: on the network (local), off the network but connected (remote), or disconnected (offline). Current remote working patterns increasingly leave security and operations teams blind to potential issues and threats.

The mainstreaming of these trends makes it even harder for IT and security teams to restrict what was once deemed higher-risk user behavior, such as working from a coffee shop. But that ship has sailed, and today systems management and security teams need the ability to thoroughly monitor users, devices, applications, and network activity; detect anomalies and inappropriate actions; and apply appropriate fixes regardless of whether an endpoint is locally connected, remotely connected, or disconnected.

Furthermore, the fact that many employees now routinely access cloud-based applications and assets, and have backup USB or network-attached storage (NAS) drives at home, further amplifies the need for endpoint visibility. Endpoint controls often provide the only record of remote activity, which no longer necessarily terminates in the corporate network. Offline activity presents the starkest example of the need for continuous endpoint monitoring. Clearly, network controls and network monitoring are of negligible use when a device is operating offline. Installing a suitable endpoint agent is critical to ensure the capture of all important security and system data.

As an example of the kinds of offline activity that can be caught, a customer was recently able to monitor, flag, and report unusual behavior on a corporate laptop: a high-level executive moved large quantities of endpoint data to an unapproved USB stick while the device was offline. Because the endpoint agent collected this behavioral data throughout the offline period, the customer was able to see this unusual action and follow up appropriately. Continuing to monitor device, application, and user behavior even while the endpoint was disconnected gave the customer visibility they never had before.

Does your organization have continuous monitoring and visibility when employee endpoints are not connected? If so, how do you achieve it?

The Consequences You Can Expect From Machine Learning – Chuck Leaver

Written By Roark Pollock And Presented By Ziften CEO Chuck Leaver


If you are a student of history, you will notice many examples of serious unintended consequences when new technology is introduced. It often surprises people that new technologies can be put to nefarious uses as well as the positive uses for which they are brought to market, but it happens all the time.

For instance, train robbers using dynamite ("You think you used enough dynamite there, Butch?") or spammers using email. More recently, the use of SSL to hide malware from security controls has become more common as the legitimate use of SSL has made the technique more effective.

Because new technology is so often appropriated by bad actors, we have no reason to think this won't be true of the new generation of machine learning tools that have reached the market.

To what degree will these tools be misused? There are likely several ways attackers could use machine learning to their advantage. At a minimum, malware writers will test their new malware against the new class of advanced threat protection products, tweaking their code so that it is less likely to be flagged as malicious. The effectiveness of protective security controls always has a half-life because of adversarial learning. An understanding of machine learning defenses will help attackers become more proactive in degrading the effectiveness of machine learning based defenses. An example would be an attacker flooding a network with fake traffic in the hope of "poisoning" the machine learning model being built from that traffic. The attacker's goal would be to trick the defender's machine learning tool into misclassifying traffic, or to generate so many false positives that the defenders dial back the fidelity of the alerts.

Machine learning will also likely be used as an attack tool. For instance, some researchers predict that attackers will use machine learning techniques to hone their social engineering attacks (e.g., spear phishing). Automating the effort it takes to tailor a social engineering attack is especially troubling given the effectiveness of spear phishing. The ability to automate mass personalization of these attacks is a powerful economic incentive for attackers to adopt the techniques.

Expect breaches of this type that deliver ransomware payloads to rise sharply in 2017.

The need to automate tasks is a major driver of investment decisions for both attackers and defenders. Machine learning promises to automate detection and response and to increase operational tempo. While the technology will increasingly become a standard component of defense-in-depth strategies, it is not a magic bullet. It must be understood that attackers are actively working on evasion techniques against machine learning based detection products while also using machine learning for their own attack purposes. This arms race will require defenders to increasingly perform incident response at machine speed, further sharpening the need for automated incident response capabilities.