UK Parliament Make Your System Secure Instead Of Blaming Others – Chuck Leaver

Written By Dr Al Hartmann And Presented By Ziften CEO Chuck Leaver

 

In cyberspace the sheep get shorn, chumps get chewed, dupes get duped, and pawns get pwned. The recent attack on the UK Parliament email system is yet another prime example.

Rather than admit to an email system that was insecure by design, the official statement read:

Parliament has strong procedures in place to secure all our accounts and systems.

Of course you do. The one protective procedure we did see in action was blame deflection – pin it on the Russians, that always works, while faulting the victims for their policy violations. While details of the attack are scarce, combing various sources does help piece together at least the gross outlines. If these accounts are reasonably accurate, the UK Parliament email system failings are shocking.

What failed in this scenario?

Rely on single-factor authentication

“Password security” is an oxymoron – anything protected by a password alone is insecure, period, no matter the strength of the password. And please, no 2FA here – it might actually restrain attacks.

Do not enforce any limit on unsuccessful login attempts

Aided by single-factor authentication, this enables simple brute-force attacks – no skill required. But when breached, blame elite foreign hackers – no one can verify it.

Do not implement brute-force attack detection

Allow adversaries to carry out (otherwise trivially detectable) brute-force attacks for extended periods (12 hours against the UK Parliament system), maximizing the scope of account compromise.
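
The missing control is simple enough to sketch. Below is a minimal, illustrative sliding-window detector in Python: it flags a source once its failed logins within a time window cross a threshold. The class name, threshold, and window are assumptions for illustration, not any real system's configuration.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # illustrative: 5-minute sliding window
MAX_FAILURES = 10      # illustrative: alert at 10 failures in the window

class BruteForceDetector:
    """Hypothetical sketch of the detection Parliament lacked."""

    def __init__(self, window=WINDOW_SECONDS, max_failures=MAX_FAILURES):
        self.window = window
        self.max_failures = max_failures
        self.failures = defaultdict(deque)  # source -> timestamps of failed logins

    def record_failure(self, source, timestamp):
        """Record a failed login; return True if the source is now over threshold."""
        q = self.failures[source]
        q.append(timestamp)
        # Drop failures that have aged out of the sliding window
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) >= self.max_failures

detector = BruteForceDetector()
# A password spray compressed into a burst: the alert fires on the tenth failure,
# not twelve hours later.
alerts = [detector.record_failure("203.0.113.7", t) for t in range(0, 20, 2)]
```

Even this naive version would have surfaced a twelve-hour brute-force run within minutes.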

Do not enforce policy, treat it as merely a suggestion

Combined with single-factor authentication, no limit on failed logins, and no brute-force attack detection, do not impose any password strength validation. Offer attackers the lowest of low-hanging fruit.
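
Server-side password validation is equally cheap to enforce. A minimal sketch, with illustrative rules (the real policy would come from the organization's standards, not this list):

```python
import re

# Each rule: (predicate, description of the requirement). Rules are examples only.
POLICY = [
    (lambda p: len(p) >= 12,           "at least 12 characters"),
    (lambda p: re.search(r"[A-Z]", p), "an uppercase letter"),
    (lambda p: re.search(r"[a-z]", p), "a lowercase letter"),
    (lambda p: re.search(r"\d", p),    "a digit"),
    (lambda p: p.lower() not in {"password", "letmein", "parliament123"},
     "not a commonly used password"),
]

def validate_password(candidate):
    """Return the list of unmet requirements; an empty list means accepted."""
    return [msg for check, msg in POLICY if not check(candidate)]
```

Rejecting weak passwords at the server removes exactly the low-hanging fruit the brute-force attack harvested.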

Rely on anonymous, unencrypted email for sensitive communications

If attackers do succeed in compromising email accounts or sniffing your network traffic, give them ample opportunity to score high-value message content entirely unobstructed. This also conditions constituents to trust easily spoofable email from Parliament, creating an ideal constituent-phishing environment.

Lessons learned

In addition to adding “Common Sense for Dummies” to their summer reading lists, the UK Parliament email system admins might wish to take further steps. Strengthening weak authentication practices, enforcing policies, improving network and endpoint visibility with continuous monitoring and anomaly detection, and thoroughly reassessing secure messaging are recommended actions. Penetration testing would have uncovered these fundamental weaknesses while staying out of the news headlines.

Even a few smart high schoolers with a free weekend could have replicated this breach. And finally, stop blaming Russia for your own security failings. Assume that any weaknesses in your security architecture and policy framework will be probed and exploited by some hacker somewhere across the global internet. All the more incentive to find and fix those weaknesses before the adversaries do, so act now. And if your defenders cannot see the attacks in progress, upgrade your monitoring and analytics.

Want To Bring Security And IT Together? Use SysSecOps – Chuck Leaver

Written By Chuck Leaver Ziften CEO

 

Scott Raynovich nailed it. Having worked with numerous organizations, he realized that one of the biggest challenges is that security and operations are two distinct departments – with drastically different objectives, different tools, and different management structures.

Scott and his analyst firm, Futuriom, recently completed a research study, “Endpoint Security and SysSecOps: The Growing Trend to Build a More Secure Enterprise”, in which one of the key findings was that conflicting IT and security goals prevent professionals – on both teams – from achieving their objectives.

That’s exactly what we believe at Ziften, and the term Scott coined to describe the convergence of IT and security in this domain – SysSecOps – captures perfectly what we’ve been talking about. Security teams and IT teams must get on the same page. That means sharing the same objectives, and in many cases, sharing the same tools.

Think about the tools IT people use. They are designed to ensure the infrastructure and end devices are working properly, and when something goes wrong, to help fix it. On the endpoint side, those tools help ensure that devices allowed onto the network are configured properly, run software that is authorized and properly updated/patched, and have not registered any faults.

Think about the tools security folks use. They work to enforce security policies on devices, infrastructure, and security apparatus (like firewalls). This might include actively monitoring incidents, scanning for anomalous behavior, examining files to ensure they don’t contain malware, ingesting the latest threat intelligence, matching against newly discovered zero-days, and performing analysis on log files.

Finding fires, fighting fires

Those are two different worlds. The security teams are the fire spotters: they can see that something bad is happening, can work rapidly to isolate the problem, and can determine whether damage occurred (like data exfiltration). The IT teams are the on-the-ground firefighters: they jump into action when an incident strikes to ensure that systems are made safe and restored to operation.

Sounds good, doesn’t it? Regrettably, all too often they don’t talk to each other – it’s like the fire spotters and the firefighters using different radios, different lingo, and different city maps. Worse, the teams can’t share the same data directly.

Our approach to SysSecOps is to give both the IT and security teams the same resources – which means the same reports, presented in ways appropriate to each specialist. It’s not dumbing down; it’s working smarter.

It’s ridiculous to work any other way. Take the WannaCry outbreak, for example. Microsoft released a patch back in March 2017 that addressed the underlying SMB flaw. IT operations teams didn’t install the patch, because they didn’t think it was a big deal and didn’t talk to security. Security teams didn’t know whether the patch was installed, because they don’t talk to operations. SysSecOps would have had everyone on the same page – and could potentially have avoided the whole issue.
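
The "same page" idea can be made concrete with a toy example: take security's list of hosts flagged vulnerable to the SMB flaw, join it with IT's patch-management records, and the result is a single work queue both teams can act on. Hostnames here are invented for illustration.

```python
# Security's view: hosts flagged vulnerable to the SMB flaw (illustrative data)
vulnerable_hosts = {"ws-0142", "ws-0387", "srv-file01", "srv-print02"}

# IT operations' view: hosts confirmed patched, from patch-management records
patched_hosts = {"ws-0142", "srv-print02"}

def unpatched_exposure(vulnerable, patched):
    """Hosts security flagged as vulnerable that IT has not yet patched."""
    return sorted(vulnerable - patched)

# The shared work queue neither team could produce alone
work_queue = unpatched_exposure(vulnerable_hosts, patched_hosts)
```

Trivial as a set difference, but only possible when both teams feed the same data store.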

Missing out on data means waste and risk

The inefficient gap between IT operations and security exposes organizations to threats. Preventable threats. Unnecessary threats. It’s simply unacceptable!

If your organization’s IT and security teams aren’t on the same page, you are incurring risks and costs you shouldn’t have to. It’s waste. Organizational waste. It’s wasteful because you have many tools supplying partial data with gaps, and each of your teams sees only part of the picture.

As Scott concluded in his report, “Coordinated SysSecOps visibility has already proven its worth in helping organizations assess, analyze, and prevent significant threats to IT systems and endpoints. If these goals are pursued, the security and management risks to an IT system can be significantly reduced.”

If your teams are working together in a SysSecOps fashion – if they can see the same data at the same time – you not only get better security and more efficient operations, but also lower risk and lower costs. Our Zenith software can help you achieve that efficiency, not only working with your existing IT and security tools, but also filling in the gaps to make sure everyone has the right data at the right time.

With Ziften And Splunk You Can Detect And Respond To WannaCry – Chuck Leaver

Written by Joel Ebrahami and presented by Chuck Leaver

 

WannaCry has generated a lot of media attention. It may not have the massive infection rates of many previous worms, but in the current security world the number of systems it was able to infect in one day was still rather shocking. The goal of this blog is NOT to provide an in-depth analysis of the threat, but rather to look at how the exploit behaves on a technical level with Ziften’s Zenith platform and the integration we have with our technology partner Splunk.

Visibility of WannaCry in Ziften Zenith

My first step was to reach out to the Ziften Labs threat research team to see what details they could provide about WannaCry. Josh Harriman, VP of Cyber Security Intelligence, directs our research team, and he told me they already had samples of WannaCry running in our ‘Red Lab’ to examine the threat’s behavior and perform additional analysis. Josh sent over the details of what he found when examining the WannaCry samples in the Ziften Zenith console, which I present here.

The Red Lab has systems covering all the most common operating systems with different services and configurations. There were already systems in the lab that were purposely vulnerable to the WannaCry exploit. Our global threat intelligence feeds used in the Zenith platform are updated in real time, and had no trouble spotting the infection in our lab environment (see Figure 1).

Two lab systems were identified running the malicious WannaCry sample. While it is good to see our global threat intelligence feeds updated so quickly and recognizing the ransomware samples, we found other behaviors that would have identified the ransomware threat even without a threat signature.

Zenith agents collect a huge amount of data on what’s happening on each host. From this visibility data, we build non-signature-based detection techniques that look for typically malicious or anomalous behaviors. Figure 2 below shows the behavioral detection of the WannaCry threat.

Examining the Breadth of WannaCry Infections

Once detected, whether through signature or behavioral methods, it is very simple to see which other systems have also been infected or are displaying similar behaviors.

Detecting WannaCry with Ziften and Splunk

After reviewing this information, I decided to run the WannaCry sample in my own environment on a vulnerable system. I had one vulnerable system running the Zenith agent, and my Zenith server was already configured to integrate with Splunk, which let me look at the same data inside Splunk. Let me elaborate on the integration we have with Splunk.

We have two Splunk apps for Zenith. The first is our technology add-on (TA): its role is to consume and index ALL the raw data from the Zenith server that the Ziften agents generate. As this data arrives it is massaged into Splunk’s Common Information Model (CIM) so that it can be normalized, easily searched, and used by other apps such as the Splunk App for Enterprise Security (Splunk ES). The Ziften TA also includes Adaptive Response capabilities for taking actions from events rendered in Splunk ES. The second app is a dashboard for displaying our data with all the charts and graphs available in Splunk, making the data much easier to digest.

Since I already had the information on how the WannaCry exploit behaved in our research lab, I had the advantage of knowing exactly what to look for in Splunk using the Zenith data. In this case I was able to see a signature alert by using the VirusTotal integration with our Splunk app (see Figure 4).

Threat Hunting for WannaCry Ransomware in Ziften and Splunk

But I wanted to wear my “incident responder hat” and investigate this in Splunk using the Zenith agent data. My first thought was to search the systems in my lab for ones running SMB, since that was the initial vector for the WannaCry attack. The Zenith data is encapsulated in various message types, and I knew I would most likely find SMB data in the running-process message type; however, I used a wildcard on the Zenith sourcetype so I could search all Zenith data. The resulting search looked like ‘sourcetype=ziften:zenith:* smb’. As expected, I received one result back for the system running SMB (see Figure 5).

My next step was to use the same behavioral search we have in Zenith that looks for typical CryptoWare and see if I could get results back. Again, this was very simple to do from the Splunk search panel. I used the same wildcard sourcetype as before so I could search across all Zenith data, and this time I added the ‘delete shadows’ string to see if this behavior was ever issued at the command line. My search looked like ‘sourcetype=ziften:zenith:* "delete shadows"’. This search returned results, shown in Figure 6, that showed me in detail the process that was created and the full command line that was executed.
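
For readers unfamiliar with Splunk, here is a rough Python analogue of what those two searches do: match any event under a ziften:zenith:* sourcetype whose raw text contains the search term. The event shapes and sourcetype names below are illustrative, not the actual Zenith schema.

```python
# Toy event store standing in for indexed Zenith data (illustrative records)
events = [
    {"sourcetype": "ziften:zenith:running_process",
     "raw": "lsass.exe ... smb v1 service enabled"},
    {"sourcetype": "ziften:zenith:running_process",
     "raw": "cmd.exe /c vssadmin delete shadows /all /quiet"},
    {"sourcetype": "ziften:zenith:netflow",
     "raw": "10.0.0.5 -> 10.0.0.9 tcp/445"},
]

def zenith_search(events, term):
    """Wildcard the sourcetype (ziften:zenith:*) and filter raw text by term."""
    return [e for e in events
            if e["sourcetype"].startswith("ziften:zenith:") and term in e["raw"]]

smb_hits = zenith_search(events, "smb")               # hosts exposing SMB
shadow_hits = zenith_search(events, "delete shadows")  # CryptoWare behavior
```

The second search is the interesting one: deleting volume shadow copies is classic ransomware behavior, signature or no signature.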

Having all this information within Splunk made it very easy to determine which systems were vulnerable and which had already been compromised.

WannaCry Remediation Using Splunk and Ziften

One of the next steps in any breach is to remediate the compromise as quickly as possible to prevent further damage and to keep other systems from being compromised. Ziften is one of Splunk’s original Adaptive Response members, and there are a variety of actions (see Figure 7) that can be taken through Splunk’s Adaptive Response to mitigate these threats through extensions on Zenith.

In the case of WannaCry we could have used almost any of the Adaptive Response actions currently available through Zenith. To minimize the impact and prevent WannaCry in the first place, one action is to shut down SMB on any system running the Zenith agent where the running version of SMB is known to be vulnerable. With a single action, Splunk can pass to Zenith the agent IDs or IP addresses of all the vulnerable systems where we want to stop the SMB service, preventing the threat from ever occurring and giving the IT operations team time to patch those systems before starting the SMB service again.

Preventing Ransomware from Spreading or Exfiltrating Data

Now, in the event that we have already been compromised, it is vital to prevent further exploitation and stop the possible exfiltration of sensitive information or company intellectual property. There are really three actions we could take. The first two are similar: kill the malicious process by either PID (process ID) or by its hash. This is effective, but since malware will often just respawn under a new process, or be polymorphic and have a different hash, we can apply an action that is guaranteed to prevent any inbound or outbound traffic from the infected systems: network quarantine. This is another example of an Adaptive Response action available through Ziften’s integration with Splunk ES.
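
That escalation logic can be sketched in a few lines. This is a minimal illustration of the reasoning above, not Zenith's actual API; the action names simply echo the post.

```python
def choose_response(respawns: bool, hash_stable: bool) -> str:
    """Pick the least disruptive response action likely to actually stick."""
    if not respawns:
        return "kill_by_pid"        # one-shot process: killing the PID is enough
    if hash_stable:
        return "kill_by_hash"       # respawns, but the binary is unchanged
    return "network_quarantine"     # polymorphic: isolate the host entirely
```

Quarantine is the fallback precisely because it works regardless of how the malware mutates on the host.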

WannaCry is already subsiding, but hopefully this technical blog shows the value of the Ziften and Splunk integration in handling ransomware threats against the endpoint.

Organizations Need To Increase Their Paranoia Over Security – Chuck Leaver

Written By Chuck Leaver Ziften CEO

 

Whatever you do, don’t underestimate cyber criminals. Even the most paranoid “normal” person would not worry about a source of data breaches being credentials stolen from its heating, ventilation and air conditioning (HVAC) contractor. Yet that’s exactly what happened at Target in November 2013. Hackers got into Target’s network using credentials issued to the contractor, probably so it could monitor the HVAC system. (For a good analysis, see Krebs on Security.) The hackers were then able to leverage the breach to inject malware into point-of-sale (POS) systems and offload payment card data.

A number of inexcusable errors were made here. Why was the HVAC contractor given access to the enterprise network? Why wasn’t the HVAC system on a separate, totally isolated network? Why wasn’t the POS system on a separate network? Et cetera, et cetera.

The point is that in a very complex network there are uncounted potential vulnerabilities that could be exploited through carelessness, unpatched software, default passwords, social engineering, spear phishing, or insider actions. You get the idea.

Whose job is it to find and fix those vulnerabilities? The security team. The CISO’s team. Security professionals aren’t “normal” people. They are hired to be paranoid. Make no mistake: whatever the particular technical vulnerability that was exploited, this was a CISO failure to anticipate the worst and prepare accordingly.

I can’t speak to the Target HVAC breach specifically, but there is one overwhelming reason that breaches like this happen: a lack of budgetary priority for cybersecurity. I’m not sure how often companies fail to fund security simply because they’re cheap and would rather do a share buy-back. Or perhaps the CISO is too timid to ask for what’s needed, or has been told he gets a 5% increase, irrespective of the need. Maybe the CEO worries that disclosures of big allocations for security will alarm investors. Maybe the CEO is simply naïve enough to believe the enterprise won’t be targeted by cyber criminals. The problem: every company is targeted by hackers.

There are big contests over budgets. The IT department wants to fund upgrades and improvements, and to attack the backlog of demand for new and improved applications. On their side, line-of-business leaders see IT projects as directly helping the bottom line. They are optimists, and they get lots of CEO attention.

By contrast, the security department frequently has to fight for crumbs. It is viewed as a cost center. Security reduces business risk in a way that matters to the CFO, the CRO (chief risk officer, if there is one), the general counsel, and other pessimists who care about compliance and reputation. These green-eyeshade people think about worst-case scenarios. That doesn’t win friends, and budget dollars are allocated reluctantly at most companies (until the business gets burned).

Call it naivety, call it entrenched hostility, but it’s a real obstacle. You can’t have IT given great tools to move the business forward while security is starved and making do with second best.

Worse, you don’t want to end up in situations where the rightfully paranoid security teams are working with tools that don’t mesh well with their IT counterparts’ tools.

If IT and security tools don’t mesh well, IT may not be able to act quickly in response to risky situations the security teams are monitoring or worried about – things like reports from threat intelligence, discoveries of unpatched vulnerabilities, nasty zero-day exploits, or user behavior that suggests risky or suspicious activity.

One idea: find tools for both departments that are designed with both IT and security in mind right from the beginning, rather than IT tools patched to provide some minimal security capability. One budget item (take it out of IT, they have more money), but two workflows: one designed for the IT professional, one for the CISO team. Everyone wins – and next time somebody wants to give the HVAC contractor access to the network, maybe security will notice what IT is doing and head that disaster off at the pass.

WannaCry Ransomware – How Ziften Can Help You – Chuck Leaver

Written By Michael Vaughn And Presented By Chuck Leaver Ziften CEO

 

Answers To Your Questions About WannaCry Ransomware

The WannaCry ransomware attack has infected more than 300,000 computers in 150 countries so far by exploiting vulnerabilities in Microsoft’s Windows operating system.
In this brief video, Chief Data Scientist Dr. Al Hartmann and I discuss the nature of the attack, as well as how Ziften can help companies protect themselves from the exploit known as “EternalBlue.”

As mentioned in the video, the problem with this Server Message Block (SMB) file-sharing service is that it ships with most Windows operating systems and is found in many environments. However, we make it easy to identify which systems in your environment have or haven’t been patched yet. Importantly, Ziften Zenith can also remotely disable the SMB file-sharing service entirely, giving organizations valuable time to ensure those machines are properly patched.
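
The patched-or-not check reduces to comparing each machine's installed hotfix list against the MS17-010 update identifiers. A hedged sketch: the KB numbers below are commonly cited identifiers for the March 2017 SMB fix on some Windows versions, but the real set varies by OS build, so treat the mapping as illustrative.

```python
# Illustrative subset of MS17-010 hotfix IDs (the full set depends on OS build)
MS17_010_KBS = {"KB4012212", "KB4012215", "KB4013429"}

def is_patched(installed_kbs):
    """True if at least one MS17-010 hotfix appears in the host's installed list."""
    return bool(MS17_010_KBS & set(installed_kbs))

def triage(hosts):
    """Split a {hostname: [KBs]} report into patched and exposed hosts."""
    patched = sorted(h for h, kbs in hosts.items() if is_patched(kbs))
    exposed = sorted(h for h, kbs in hosts.items() if not is_patched(kbs))
    return patched, exposed

patched, exposed = triage({
    "ws-01": ["KB4012212", "KB890830"],   # has the fix
    "ws-02": ["KB890830"],                # exposed: disable SMB until patched
})
```

Hosts in the exposed list are the candidates for remotely disabling SMB until patching catches up.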

If you’re curious about Ziften Zenith, our 20-minute demo includes a consultation with our experts on how we can help your company prevent the worst digital catastrophe to hit the internet in years.

Assess Next Generation Endpoint Security Solutions With These Steps – Chuck Leaver

Written By Roark Pollock And Presented By Chuck Leaver CEO Ziften

 

The Endpoint Security Purchaser’s Guide

The most common entry point for a sophisticated persistent attack or a breach is the endpoint. Endpoints are certainly the entry point for the majority of ransomware and social engineering attacks. The use of endpoint protection products has long been considered a best practice for securing endpoints. Unfortunately, those tools aren’t keeping up with today’s threat environment. Advanced threats – and, truth be told, even less advanced threats – are often more than adequate for tricking the typical employee into clicking something they shouldn’t. So companies are looking at and evaluating a variety of next-generation endpoint security (NGES) solutions.

With that in mind, here are ten tips to consider if you’re evaluating NGES solutions.

Tip 1: Begin with the end in mind

Don’t let the tail wag the dog. A threat mitigation strategy should always begin by assessing problems and then looking for possible fixes to those problems. But all too often we get enamored with a “shiny” new technology (e.g., the latest silver bullet) and end up trying to shoehorn that technology into our environment without fully assessing whether it solves a known and identified problem. So what problems are you trying to solve?

– Is your existing endpoint protection tool failing to stop threats?
– Do you need better visibility into activity on the endpoint?
– Are compliance requirements dictating continuous endpoint monitoring?
– Are you trying to reduce the time and cost of incident response?

Define the problems to address, and you’ll have a measuring stick for success.

Tip 2: Know your audience. Who exactly will be using the tool?

Understanding the problem to be solved is a key first step in understanding who owns the problem and who would (operationally) own the solution. Every operational team has its strengths, weaknesses, preferences, and biases. Define who will need to use the solution, and who else might benefit from its use. Perhaps it’s:

– Security operations,
– IT operations,
– The governance, risk &amp; compliance (GRC) team,
– The helpdesk or end-user support team,
– Or even the server team or a cloud operations team?

Tip 3: Know what you mean by endpoint

Another often neglected early step in defining the problem is defining the endpoint. Yes, we all used to know what we meant when we said endpoint, but today endpoints come in many more varieties than before.

Sure, we want to protect desktops and laptops, but what about mobile devices (e.g., phones and tablets), virtual endpoints, cloud-based endpoints, or Internet of Things (IoT) devices? And what about your servers? All these devices, of course, come in numerous flavors, so platform support has to be addressed too (e.g., Windows only, Mac OS X, Linux, etc.). Also consider support for endpoints when they are working remotely or offline. What are your requirements, and what are your “nice to haves”?

Tip 4: Start with a foundation of continuous visibility

Continuous visibility is a foundational capability for addressing a host of security and operational management problems on the endpoint. The old adage holds true: you can’t manage what you can’t see or measure. Furthermore, you can’t secure what you can’t properly manage. So it must begin with continuous, around-the-clock visibility.

Visibility is foundational to Management and Security

And think about what visibility means. Enterprises need a single source of truth that at a minimum monitors, stores, and analyzes the following:

– System data – events, logs, hardware state, and file system details
– User data – activity logs and behavior patterns
– Application data – attributes of installed apps and usage patterns
– Binary data – attributes of installed binaries
– Process data – tracking details and statistics
– Network connectivity data – statistics and internal behavior of network activity on the host

Tip 5: Decide where to keep your visibility data

Endpoint visibility data can be stored and analyzed on premises, in the cloud, or in some combination of both. There are benefits to each. The appropriate approach varies, and is generally dictated by regulatory requirements, internal privacy policies, the endpoints being monitored, and overall cost considerations.

Know whether your organization requires on-premises data retention

Know whether your organization allows cloud-based data retention and analysis or whether you are constrained to on-premises solutions only. Among Ziften customers, 20-30% keep data on premises solely for regulatory reasons. However, if legally an option, the cloud can provide cost advantages (among others).

Tip 6: Know what is on your network

Understanding the problem you are trying to solve requires knowing the assets on the network. We find that as many as 30% of the endpoints we initially discover on customers’ networks are unmanaged or unknown devices. This obviously creates a big blind spot. Minimizing this blind spot is a vital best practice; in fact, SANS Critical Security Controls 1 and 2 are to perform an inventory of authorized and unauthorized devices and software attached to your network. So look for NGES solutions that can fingerprint all connected devices, track software inventory and usage, and perform ongoing continuous discovery.
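
Measuring that blind spot is straightforward once discovery data exists: compare what the network actually sees against what is under management. The device identifiers below are invented for illustration.

```python
# Devices observed on the wire (e.g., from passive discovery) -- illustrative MACs
seen_on_network = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02",
                   "aa:bb:cc:00:00:03", "aa:bb:cc:00:00:04",
                   "aa:bb:cc:00:00:05"}

# Devices with a management agent installed, per the asset inventory
managed = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", "aa:bb:cc:00:00:04"}

# The blind spot: discovered but unmanaged devices, and its share of the network
unmanaged = seen_on_network - managed
blind_spot_pct = round(100 * len(unmanaged) / len(seen_on_network))
```

With ongoing discovery, that percentage becomes a metric you can drive toward zero rather than a one-time surprise.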

Tip 7: Know where you are vulnerable

After finding out what devices you need to track, you need to make sure they are running in up-to-date configurations. SANS Critical Security Control 3 recommends secure configuration monitoring for laptops, workstations, and servers. SANS Critical Security Control 4 recommends enabling continuous vulnerability assessment and remediation of these devices. So look for NGES solutions that provide around-the-clock monitoring of the state or posture of each device – and it’s even better if they can help enforce that posture.

Also look for solutions that provide continuous vulnerability assessment and remediation.

Keeping your overall endpoint environment hardened and free of critical vulnerabilities prevents a huge number of security issues and removes a lot of back-end pressure on the IT and security operations teams.

Tip 8: Cultivate continuous detection and response

An important objective of many NGES solutions is supporting continuous device-state monitoring to enable effective threat or incident response. SANS Critical Security Control 19 recommends robust incident response and management as a best practice.

Look for NGES solutions that provide around-the-clock threat detection that leverages a network of global threat intelligence and multiple detection techniques (e.g., signature, behavioral, machine learning, etc.). And look for incident response solutions that help prioritize identified threats and/or concerns and offer workflow with contextual system, application, user, and network data. This can help automate the appropriate response or next steps. Lastly, understand all the response actions each solution supports – and look for one that offers remote access that is as close as possible to “sitting at the endpoint keyboard”.

Tip 9: Consider forensics data collection

In addition to incident response, organizations need to be prepared for forensic or historical data analysis. SANS Critical Security Control 6 recommends the maintenance, monitoring, and analysis of all audit logs. Forensic analysis can take many forms, but a foundation of historical endpoint monitoring data will be crucial to any investigation. So look for solutions that preserve historical data that permit:

– Forensic tasks such as tracing lateral threat movement through the network over time,
– Pinpointing data exfiltration attempts,
– Identifying the source of breaches, and
– Determining appropriate remediation actions.

Tip 10: Tear down the walls

IBM’s security team, which supports an impressive ecosystem of security partners, estimates that the average enterprise has 135 security tools in place and is working with 40 security vendors. IBM customers certainly tend to be large enterprises, but it’s a common refrain (complaint) from organizations of all sizes that security solutions don’t integrate properly.

And the problem is not just that security solutions don’t play well with other security solutions, but also that they don’t always integrate well with system management, patch management, CMDB, NetFlow analytics, ticketing systems, and orchestration tools. Organizations need to consider these (and other) integration points, along with the vendor’s willingness to share raw data, not just metadata, through an API.

Bonus Tip 11: Plan for change

Here’s a bonus tip. Assume that you’ll want to customize that shiny new NGES solution soon after you get it. No solution will meet all your needs right out of the box, in default configurations. Find out how the solution supports:

– Custom data collection,
– Alerting and reporting with custom data,
– Custom scripting, or
– IFTTT (if this, then that) functionality.
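
To make the last item concrete, here is a tiny "if this, then that" rule sketch: a condition over an endpoint event mapped to a named action. The event fields, rule conditions, and action names are all invented for illustration.

```python
# Each rule pairs a condition (a predicate over an event dict) with an action name.
rules = [
    (lambda e: e.get("process") == "vssadmin.exe"
               and "delete shadows" in e.get("cmdline", ""),
     "network_quarantine"),
    (lambda e: e.get("failed_logins", 0) > 10,
     "lock_account"),
]

def evaluate(event):
    """Return the actions whose conditions match this event."""
    return [action for cond, action in rules if cond(event)]
```

The point of asking vendors about IFTTT support is exactly this: can your own conditions drive the product's response actions without a professional-services engagement?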

You know you’ll want new paint or new wheels on that NGES solution soon – so make sure it will support your future customization projects easily enough.

Look for support for easy customization in your NGES solution

Follow the bulk of these tips and you’ll certainly avoid many of the common errors that plague others in their evaluations of NGES solutions.

Ziften Leads The Way In End To End Protection – Chuck Leaver

Written By Ziften CEO Chuck Leaver

 

Do you want to manage and secure your endpoints, your data center, the cloud, and your network? Then Ziften has the right solution for you. We gather data, and enable you to correlate and use that data to make decisions – and stay in control of your enterprise.

The data we obtain from everyone on the network can make a real-world difference. Consider the proposition that the 2016 U.S. elections were influenced by hackers from another nation. If that’s true, cyber criminals can do almost anything – and the idea that we’ll accept that as the status quo is simply ludicrous.

At Ziften, we believe the way to combat those threats is with greater visibility than you've ever had. That visibility extends across the entire enterprise, and links all the major players together. On the back end, that's physical and virtual servers in the data center and the cloud. That's applications and containers and infrastructure. On the other side, it's laptops and desktops, regardless of where and how they are connected.

End-to-end – that's the thinking behind everything at Ziften. From the endpoint to the cloud, all the way from a web browser to a DNS server. We tie all that together, with all the other parts, to give your business a complete solution.

We also capture and store real-time data for up to 12 months, to let you know what's happening on the network today, and to provide historical trend analysis and alerts if something changes.

That lets you identify IT faults and security issues immediately, and also trace the root cause by looking back in time to see where a breach or fault may have first occurred. Active forensics are an absolute necessity in this business: after all, the place where a breach or fault triggered an alarm may not be where the problem started, or where a hacker is operating.

Ziften provides your security and IT teams with the visibility to understand your current security posture, and to identify where improvements are needed. Non-compliant endpoints? Found. Rogue devices? Found. Off-network penetration? Detected. Out-of-date firmware? Unpatched applications? All discovered. We'll not just help you find the issue; we'll help you fix it, and make certain it stays fixed.

End-to-end IT and security management. Real-time and historical active forensics. In the cloud, offline, and on-site. Incident detection, containment, and response. We've got it all covered. That's what makes Ziften better.

Our Enhanced NetFlow Will Help You Track Cloud Activities – Chuck Leaver

Written by Roark Pollock and Presented by Ziften CEO Chuck Leaver

 

According to Gartner, the public cloud services market surpassed $208 billion last year (2016). This represented about 17% growth year over year. Not bad when you consider the ongoing concerns most cloud customers still have regarding data security. Another particularly interesting Gartner finding is the common practice of cloud customers contracting services from multiple public cloud providers.

According to Gartner, "most organizations are already using a combination of cloud services from different cloud providers". While the business rationale for using multiple vendors is sound (e.g., avoiding vendor lock-in), the practice does create additional complexity in monitoring activity across an organization's increasingly distributed IT landscape.

While some providers support better visibility than others (for instance, AWS CloudTrail can monitor API calls across the AWS infrastructure), organizations need to understand and address the visibility gaps that come with moving to the cloud, regardless of the cloud provider or providers they work with.

Unfortunately, the ability to monitor user and application activity, and network communications, from each VM or endpoint in the cloud is limited.

Regardless of where computing resources reside, organizations must answer the question "Which users, machines, and applications are communicating with each other?" Organizations need visibility across the infrastructure in order to:

  • Quickly identify and prioritize problems
  • Speed root cause analysis and identification
  • Reduce the mean time to resolve issues for end users
  • Quickly identify and eliminate security threats, reducing overall dwell times.

Conversely, poor visibility, or poor access to visibility data, can reduce the effectiveness of existing security and management tools.

Businesses accustomed to the maturity, ease, and relatively low cost of monitoring physical data centers are apt to be disappointed with their public cloud options.

What has been missing is a simple, ubiquitous, and elegant solution like NetFlow for public cloud infrastructure.

NetFlow, of course, has had twenty years or so to become a de facto standard for network visibility. A typical deployment involves monitoring traffic and aggregating flows at network chokepoints, collecting and storing flow information from multiple collection points, and analyzing that flow information.

Flows consist of a basic set of source and destination IP addresses plus port and protocol data, typically gathered from a router or switch. NetFlow data is relatively inexpensive and easy to collect, provides nearly ubiquitous network visibility, and enables actionable analysis for both network monitoring and performance management applications.
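The aggregation step described above can be sketched simply: packets sharing the same 5-tuple (source IP, destination IP, source port, destination port, protocol) are rolled up into a single flow with packet and byte counters. The field names below are illustrative, not any particular exporter's schema.

```python
from collections import defaultdict

# Roll packet records up into flows keyed by the classic 5-tuple.
def aggregate_flows(packets):
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for pkt in packets:
        key = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += pkt["size"]
    return dict(flows)

packets = [
    {"src": "10.0.0.5", "dst": "93.184.216.34",
     "sport": 52100, "dport": 443, "proto": "tcp", "size": 1500},
    {"src": "10.0.0.5", "dst": "93.184.216.34",
     "sport": 52100, "dport": 443, "proto": "tcp", "size": 400},
]
flows = aggregate_flows(packets)
# Both packets collapse into one flow with summed counters.
```

Real NetFlow/IPFIX exporters also track timestamps, TCP flags, and flow expiry, but the key-and-count structure is the heart of it.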

Many IT personnel, particularly networking and some security teams, are very comfortable with the technology.

However, NetFlow was designed to solve what has become a rather limited problem, in the sense that it only collects network data, and does so at a limited number of possible locations.

To make better use of NetFlow, two key changes are necessary.

NetFlow at the Edge: First, we need to expand the useful deployment scenarios for NetFlow. Instead of only gathering NetFlow at network chokepoints, let's extend flow collection to the edge of the network (clients, cloud, and servers). This would significantly expand the overall view that any NetFlow analytics provide.

This would allow organizations to augment and leverage existing NetFlow analytics tools to remove the ever-growing blind spot of visibility into public cloud activity.

Rich, contextual NetFlow: Second, we need to use NetFlow for more than basic network visibility.

Instead, let's use an extended version of NetFlow that includes details on the device, application, user, and binary responsible for each tracked network connection. That would allow us to quickly attribute every network connection back to its source.

In fact, these two changes to NetFlow are exactly what Ziften has accomplished with ZFlow. ZFlow provides an expanded version of NetFlow that can be deployed at the network edge, including as part of a VM or container image, and the resulting flow data can be consumed and analyzed with existing NetFlow analysis tools. Beyond traditional NetFlow / Internet Protocol Flow Information eXport (IPFIX) network visibility, ZFlow supplies extended visibility by adding details on the device, application, user, and binary for every network connection.
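The attribution idea can be illustrated as a join between basic flow records and an endpoint's local socket-to-process table: each connection's local (address, port) pair looks up which user and binary owned the socket. The field names and table shape here are hypothetical, for illustration only, not Ziften's actual schema.

```python
# Join basic flow records with an endpoint socket/process table to
# attribute each connection to a user and binary (illustrative schema).
def attribute_flows(flows, socket_table):
    attributed = []
    for flow in flows:
        key = (flow["src"], flow["sport"])      # local endpoint of the socket
        ctx = socket_table.get(key, {})
        attributed.append({**flow,
                           "user": ctx.get("user", "unknown"),
                           "binary": ctx.get("binary", "unknown")})
    return attributed

flows = [{"src": "10.0.0.5", "dst": "52.4.8.1",
          "sport": 49152, "dport": 443, "proto": "tcp"}]
socket_table = {("10.0.0.5", 49152): {"user": "alice", "binary": "chrome.exe"}}

attributed = attribute_flows(flows, socket_table)
# The flow now carries who and what made the connection, not just where.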

Ultimately, this allows Ziften ZFlow to provide end-to-end visibility between any two endpoints, physical or virtual, eliminating traditional blind spots like east-west traffic in data centers and enterprise cloud deployments.

Part Two Of Why Edit Distance Is Important – Chuck Leaver

Written By Jesse Sampson And Presented By Chuck Leaver CEO Ziften

 

In the first post on edit distance, we looked at hunting for malicious executables with edit distance (i.e., the number of character changes needed to make two text strings match). Now let's look at how we can use edit distance to hunt for malicious domains, and how we can build edit distance features that can be combined with other domain name features to pinpoint suspicious activity.

Case Study Background

What are bad actors trying to do with malicious domains? It may be simply using a similar spelling of a common domain name to fool careless users into viewing ads or picking up adware. Legitimate sites are gradually catching on to this strategy, sometimes called typo-squatting.

Other malicious domains are the product of domain generation algorithms, which can be used for all kinds of nefarious purposes, such as evading countermeasures that block known compromised sites, or overwhelming domain name servers in a distributed denial-of-service (DDoS) attack. Older versions use randomly generated strings, while more advanced ones add tricks like injecting common words, further confusing defenders.
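A toy domain generation algorithm shows the pattern defenders are up against: malware and its operator share a seed (often the date), so both can compute the same throwaway domains without ever exchanging a blocklistable string. This sketch is deliberately simplistic; real DGA families differ widely.

```python
import hashlib

# Toy DGA: derive pseudo-random domains from a date-based seed.
# Both the malware and its controller can regenerate the same list.
def generate_domains(seed, count=3, tld=".com"):
    domains = []
    for i in range(count):
        digest = hashlib.md5(f"{seed}-{i}".encode()).hexdigest()
        domains.append(digest[:12] + tld)
    return domains

domains = generate_domains("2017-06-27")
# Deterministic: the same seed always yields the same domains.
```

Because the output strings are hash-derived, they look nothing like human-registered names, which is exactly why lexical features such as edit distance help flag them.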

Edit distance can help with both use cases; here we will find out how. First, we'll exclude common domains, since these are usually benign. And a list of popular domain names provides a baseline for finding anomalies. One good source is Quantcast. For this discussion, we will stick to domain names and avoid subdomains (e.g., ziften.com, not www.ziften.com).

After data cleansing, we compare each candidate domain name (input data observed in the wild by Ziften) to its potential neighbors in the same top-level domain (the last part of a domain name – classically .com, .org, and so on, though now it can be almost anything). The basic task is to find the nearest neighbor in terms of edit distance. By finding domains that are one step away from their nearest neighbor, we can easily spot typo-ed domains. By finding domain names far from their neighbor (the normalized edit distance we introduced in Part 1 is useful here), we can also find anomalous domain names in the edit distance space.
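The nearest-neighbor search described above can be sketched with the classic dynamic-programming Levenshtein distance. The baseline list here is a stand-in for a popularity feed like Quantcast's.

```python
# Levenshtein edit distance via the standard two-row DP recurrence.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Find the baseline domain closest to a candidate observed in the wild.
def nearest_neighbor(candidate, baseline):
    return min(baseline, key=lambda d: edit_distance(candidate, d))

baseline = ["google", "wikipedia", "ziften"]
neighbor = nearest_neighbor("wikipedal", baseline)   # -> "wikipedia"
dist = edit_distance("wikipedal", neighbor)          # small: likely a typo domain
```

A distance of 1 or 2 from a popular name flags a potential typo-squat; a large normalized distance from every neighbor flags an outlier worth a closer look.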

What Were the Results

Let's take a look at how these results appear in real life. Be careful browsing to these domains, since they may contain malicious content!

Here are a few potential typos. Typo-squatters target well-known domain names since there is a better chance someone will visit them. Several of these are flagged as suspicious by our threat feed partners, but there are some false positives as well, with charming names like "wikipedal".

[Image ed2-1: table of potential typo-squatting domains]

Here are some strange-looking domain names far from their nearest neighbors.

[Image ed2-2: table of anomalous domain names far from their neighbors]

So now we have created two useful edit distance metrics for hunting. Not only that, we have three features to potentially add to a machine learning model: rank of nearest neighbor, distance from neighbor, and an edit distance of 1 from a neighbor, indicating a risk of typo tricks. Other features that could pair well with these include other lexical features such as word and n-gram distributions, entropy, and string length – and network features like the number of failed DNS requests.

Simplified Code That You Can Play Around With

Here is a simplified version of the code to play with! It was developed on HP Vertica, but this SQL should work on most sophisticated databases. Note that the Vertica editDistance function may go by other names elsewhere (e.g., levenshtein in Postgres or UTL_MATCH.EDIT_DISTANCE in Oracle).

[Image ed2-3: simplified SQL for the nearest-neighbor edit distance query]

Environments That Are Inadequately Managed Cannot Be Totally Secure – Chuck Leaver

Written by Chuck Leaver Ziften CEO

 

If your enterprise computing environment is not properly managed, there is no way it can be truly secure. And you cannot effectively manage those complex enterprise systems unless you are confident they are secure.

Some might call this a chicken-and-egg situation, where you don't know where to begin. Should you start with security? Or should you begin with systems management? That's the wrong approach. Think of it instead like a Reese's Peanut Butter Cup: it's not chocolate first. It's not peanut butter first. Instead, both are blended together, and treated as a single delicious treat.

Many organizations, I would argue too many, are structured with an IT management team reporting to a CIO, and a security management team reporting to a CISO. The CIO team and the CISO team don't know each other, talk to each other only when absolutely necessary, have separate budgets, certainly have different goals, read different reports, and use different management platforms. On a day-to-day basis, what constitutes a task, a priority, or an alert for one team flies completely under the other team's radar.

That's not good, because both the IT and security teams must make assumptions. The IT team assumes that all assets are secure, unless somebody tells them otherwise. For example, they assume that devices and applications have not been compromised, that users have not escalated their privileges, and so on. Likewise, the security team assumes that the servers, desktops, and mobile devices are working properly, that operating systems and apps are up to date, that patches have been applied, and so on.

Since the CIO and CISO teams aren't talking to each other, don't understand each other's roles and goals, and aren't using the same tools, those assumptions may not be correct.

And again, you can't have a secure environment unless that environment is properly managed, and you can't manage that environment unless it's secure. Or to put it another way: an environment that is not secure makes anything you do in the IT organization suspect and irrelevant, and means that you can't know whether the information you are seeing is correct or manipulated. It might all be fake news.

How to Bridge the IT / Security Gap

How to bridge that gap? It sounds simple but it can be difficult: ensure that there is an umbrella covering both the IT and security teams. Both IT and security should report to the same person or organization somewhere. It might be the CIO, it might be the CFO, it might be the CEO. For the sake of argument here, let's say it's the CFO.

If the company does not have a secure environment, and there's a breach, the value of the brand and the company may be reduced to nothing. Likewise, if the users, devices, infrastructure, applications, and data aren't managed well, the company can't work effectively, and the value drops. As we've discussed, if it's not properly managed, it can't be secured, and if it's not secure, it can't be well managed.

The fiduciary responsibility of senior executives (like the CFO) is to protect the value of company assets, which means making certain IT and security talk to each other, understand each other's priorities, and, where possible, see the same reports and data, filtered and displayed to be meaningful to their specific areas of responsibility.

That's the thinking we adopted in the design of our Zenith platform. It's not a security management tool with IT capabilities, and it's not an IT management tool with security capabilities. No, it's a Peanut Butter Cup, designed equally around chocolate and peanut butter. To be less confectionery: Zenith is an umbrella that gives IT teams what they need to do their jobs, and gives security teams what they need as well, without coverage gaps that could undermine assumptions about the state of enterprise security and IT management.

We have to make sure that our business's IT infrastructure is built on a secure foundation, and also that our security is implemented on a well-managed base of hardware, infrastructure, software applications, and users. We can't operate at peak efficiency, and with full fiduciary responsibility, otherwise.