Written by Roark Pollock and Presented by Ziften CEO Chuck Leaver
According to Gartner, the public cloud services market surpassed $208 billion in 2016, roughly a 17% year-over-year increase. Not bad when you consider the ongoing concerns most cloud customers still have about data security. Another particularly interesting Gartner finding is the common practice among cloud consumers of contracting services from multiple public cloud providers.
According to Gartner, "most companies are currently utilizing a mix of cloud services from different cloud providers." While the business rationale for using multiple vendors is sound (e.g., avoiding vendor lock-in), the practice does create additional complexity in monitoring activity across a company's increasingly dispersed IT landscape.
While some service providers offer better visibility than others (for instance, AWS CloudTrail can monitor API calls across the AWS infrastructure), companies need to understand and address the visibility gaps that come with moving to the cloud, regardless of the cloud provider or providers they work with.
Unfortunately, the ability to monitor user and application activity, as well as network communications, from each VM or endpoint in the cloud is limited.
Regardless of where computing resources live, organizations must be able to answer the question, "Which users, machines, and applications are interacting with each other?" Organizations need visibility across the infrastructure in order to:
- Quickly identify and prioritize problems
- Speed root cause analysis and identification
- Lower the mean time to resolution for end users
- Quickly identify and eliminate security threats, reducing overall dwell times
Conversely, poor visibility, or poor access to visibility data, can reduce the effectiveness of existing security and management tools.
Businesses accustomed to the maturity, ease, and relatively low cost of monitoring physical data centers are likely to be disappointed with their public cloud options.
What has been missing is a simple, ubiquitous, and elegant solution like NetFlow for public cloud infrastructure.
NetFlow, of course, has had twenty years or so to become a de facto standard for network visibility. A typical deployment involves monitoring traffic and aggregating flows at network chokepoints, collecting and storing flow information from numerous collection points, and analyzing that flow information.
Flows consist of a basic set of source and destination IP addresses plus port and protocol data, typically gathered from a router or switch. NetFlow data is relatively inexpensive and easy to collect, provides nearly ubiquitous network visibility, and enables actionable analysis for both network monitoring and performance management applications.
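As a concrete illustration, a basic flow record can be modeled as the classic 5-tuple plus traffic counters, with per-packet observations aggregated by key the way a chokepoint exporter would. This is a minimal sketch in Python; the field and function names are illustrative and do not follow the NetFlow v5/v9 wire format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    """The classic 5-tuple that identifies a flow (illustrative names)."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str  # e.g. "tcp", "udp"

@dataclass
class FlowRecord:
    """A flow record: the key plus packet/byte counters."""
    key: FlowKey
    packets: int = 0
    bytes: int = 0

def aggregate(observations, flows=None):
    """Fold per-packet (key, size) observations into flow records,
    keyed by the 5-tuple, as a chokepoint exporter might."""
    flows = {} if flows is None else flows
    for key, size in observations:
        rec = flows.setdefault(key, FlowRecord(key))
        rec.packets += 1
        rec.bytes += size
    return flows

k = FlowKey("10.0.0.5", "93.184.216.34", 49200, 443, "tcp")
flows = aggregate([(k, 1500), (k, 40)])
```

Two packets on the same 5-tuple collapse into a single record with two packets and 1540 bytes, which is essentially the data reduction that makes flow collection so cheap relative to full packet capture.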
Many IT personnel, particularly networking and some security teams, are very comfortable with the technology.
However, NetFlow was designed to solve what has become a rather limited problem: it collects only network data, and only at a limited number of potential locations.
To make better use of NetFlow, two key changes are needed.
NetFlow at the Edge: First, we need to expand the useful deployment scenarios for NetFlow. Instead of gathering NetFlow only at network chokepoints, let's extend flow collection to the edge of the network (clients, cloud instances, and servers). This would significantly expand the overall view that any NetFlow analytics provide.
It would also allow companies to augment and leverage existing NetFlow analytics tools while eliminating the ever-growing blind spot around public cloud activity.
Rich, contextual NetFlow: Second, we need to use NetFlow for more than basic network visibility. Let's employ an extended version of NetFlow that includes details on the device, application, user, and binary responsible for each tracked network connection. That would let us quickly attribute every network connection back to its source.
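To make the idea concrete, here is a sketch of what a context-enriched flow record might look like: the basic 5-tuple extended with device, application, user, and binary fields so that each connection can be traced to its source. The field names are hypothetical, chosen for illustration, and are not ZFlow's actual schema.

```python
from dataclasses import dataclass

@dataclass
class EnrichedFlow:
    """Hypothetical context-enriched flow record (illustrative fields,
    not ZFlow's actual schema)."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str
    device: str       # hostname or VM/container ID of the endpoint
    application: str  # process name owning the socket
    user: str         # user account the process runs under
    binary_hash: str  # hash of the executable, for attribution

def attribute(flow: EnrichedFlow) -> str:
    """Summarize which user/app/binary produced a connection."""
    return (f"{flow.user}@{flow.device} ran {flow.application} "
            f"({flow.binary_hash[:8]}) -> {flow.dst_ip}:{flow.dst_port}")

f = EnrichedFlow("10.0.0.5", "203.0.113.9", 51000, 443, "tcp",
                 device="web-vm-01", application="curl",
                 user="svc-deploy", binary_hash="9f86d081deadbeef")
summary = attribute(f)
```

With the extra context, a single record answers not just "which hosts talked" but "which user, process, and executable opened the connection," which is the attribution the plain 5-tuple cannot provide.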
In fact, these two modifications to NetFlow are exactly what Ziften has accomplished with ZFlow. ZFlow provides an expanded version of NetFlow that can be deployed at the network edge, including as part of a VM or container image, and the resulting flow data can be consumed and analyzed with existing NetFlow analysis tools. Over and above traditional NetFlow / Internet Protocol Flow Information eXport (IPFIX) network visibility, ZFlow provides extended visibility by adding details on the device, application, user, and binary for every network connection.
Ultimately, this allows Ziften ZFlow to deliver end-to-end visibility between any two endpoints, physical or virtual, eliminating traditional blind spots such as East-West traffic in data centers and enterprise cloud deployments.