The Consequences You Can Expect From Machine Learning – Chuck Leaver

Written By Roark Pollock And Presented By Ziften CEO Chuck Leaver

 

If you are a student of history, you will notice many examples of serious unintended consequences accompanying the introduction of new technology. It often surprises people that new technologies can be put to malicious use alongside the positive uses for which they were brought to market, but it happens all the time.

Consider, for instance, train robbers using dynamite ("You think you used enough dynamite there, Butch?") or spammers using email. More recently, the use of SSL to hide malware from security controls has become more common precisely because the widespread legitimate use of SSL makes the technique more effective.

Because new technology is routinely appropriated by bad actors, we have no reason to believe the new generation of machine learning tools now reaching the market will be any different.

To what degree will these tools be misused? There are likely several ways that attackers could turn machine learning to their advantage. At a minimum, malware writers will test their new malware against the new class of advanced threat protection products, tweaking their code so that it is less likely to be flagged as malicious. The effectiveness of defensive security controls always has a half-life because of this kind of adversarial learning. An understanding of machine learning defenses will also help attackers proactively degrade the effectiveness of machine learning based defenses. For example, an attacker could flood a network with fake traffic in the hope of "poisoning" the machine learning model being built from that traffic. The attacker's goal would be to trick the defender's machine learning tool into misclassifying traffic, or to generate so many false positives that the defenders dial back the fidelity of the alerts.
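To make the poisoning idea concrete, here is a minimal sketch in Python using scikit-learn on purely synthetic data (the traffic features, volumes, and labels are hypothetical, not drawn from any real deployment). An attacker who can inject mislabeled traffic into the training pipeline drags the model's decision boundary and produces exactly the misclassification described above.

# Illustrative sketch of a data-poisoning attack against a traffic classifier.
# All data here is synthetic; features and volumes are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic "network traffic" features: benign flows cluster around one mean,
# malicious flows around another.
benign = rng.normal(loc=0.0, scale=1.0, size=(1000, 5))
malicious = rng.normal(loc=2.0, scale=1.0, size=(1000, 5))
X = np.vstack([benign, malicious])
y = np.array([0] * 1000 + [1] * 1000)  # 0 = benign, 1 = malicious

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean data.
clean_model = LogisticRegression().fit(X_train, y_train)
print("clean accuracy:", accuracy_score(y_test, clean_model.predict(X_test)))

# Poisoning: the attacker floods the collection pipeline with traffic that
# looks malicious but ends up labeled benign (for example, because it never
# triggers a verified incident), pulling the decision boundary toward the attacker.
poison = rng.normal(loc=2.0, scale=1.0, size=(800, 5))
X_poisoned = np.vstack([X_train, poison])
y_poisoned = np.concatenate([y_train, np.zeros(800, dtype=int)])

poisoned_model = LogisticRegression().fit(X_poisoned, y_poisoned)
print("poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
# The poisoned model now misclassifies a large share of truly malicious test
# flows as benign, which is the outcome the attacker was after.

Real-world poisoning is noisier than this toy example, but the mechanism is the same: if the defender retrains on whatever the network happens to see, the attacker gets a vote in what the model learns.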

Machine learning will likely also be used as an attack tool. For instance, some researchers predict that attackers will use machine learning techniques to hone their social engineering attacks (e.g., spear phishing). Automating the effort it takes to tailor a social engineering attack is especially troubling given how effective spear phishing already is. The ability to mass-personalize these attacks is a powerful economic incentive for attackers to adopt the techniques.

Expect breaches of this type that deliver ransomware payloads to rise sharply in 2017.

The need to automate tasks is a major driver of investment decisions for both attackers and defenders. Machine learning promises to automate detection and response and to increase operational tempo. While the technology will increasingly become a standard component of defense in depth strategies, it is not a silver bullet. It should be understood that attackers are actively working on evasion techniques around machine learning based detection solutions while also using machine learning for their own attack purposes. This arms race will require defenders to increasingly perform incident response at machine speed, further amplifying the need for automated incident response capabilities.
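As a rough illustration of what "incident response at machine speed" can look like, here is a hedged Python sketch: a scored event above a confidence threshold triggers an automated containment action, while lower-confidence detections fall back to human triage. The event structure, scoring function, threshold, and quarantine call are hypothetical placeholders, not any specific product's API.

# Minimal sketch of automated detection-and-response at machine speed.
# The endpoint data, model, and quarantine call are hypothetical stand-ins;
# a real EDR platform would supply its own telemetry and response APIs.
from dataclasses import dataclass

@dataclass
class EndpointEvent:
    host: str
    features: list[float]   # e.g., process, network, and file telemetry

def risk_score(event: EndpointEvent) -> float:
    """Placeholder for a trained model's probability that the event is malicious."""
    return sum(event.features) / (len(event.features) or 1)

def quarantine(host: str) -> None:
    """Placeholder for a response action such as network isolation of the host."""
    print(f"isolating {host} from the network")

QUARANTINE_THRESHOLD = 0.9   # tuned to balance speed of response against false positives

def handle(event: EndpointEvent) -> None:
    score = risk_score(event)
    if score >= QUARANTINE_THRESHOLD:
        # Respond immediately, without waiting for a human analyst.
        quarantine(event.host)
    else:
        # Lower-confidence detections are queued for human review instead.
        print(f"queued {event.host} (score={score:.2f}) for analyst triage")

handle(EndpointEvent(host="workstation-17", features=[0.35, 0.4, 0.2]))

The design choice that matters here is the threshold: set it too low and the automation itself becomes the false-positive problem described earlier; set it too high and the response is no faster than a human analyst.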
