Countering Intelligent Malware: Neural Networks, AI, and Security


It's coming. AI will become an embedded part of existing software security techniques and skills.

Cyber AI

The development effort and technology for cybersecurity are headed toward increasing use of artificial intelligence (AI) in its broadest sense, with elements of machine learning and pattern recognition, leading on to situational and environmental awareness, the application of game theory, and possibly gamification of the human-machine interface. These are all areas in which the theory and practice of AI in other fields, outside cybersecurity, can be imported and used to accelerate the development of strong AI tools to secure our networks, computer systems, and connected network devices. AI startups, such as Cobalt, CogniCor, Humtap, Intraspexion, Kimera Systems, NoorGo, Orb Intelligence, and Verve.ai, have the potential to eventually apply their products to protecting data and cybersecurity.

We envisage an environment where traditional policies and rules are replaced by deep-trained neural networks in which human-embedded knowledge is just the starting point. These cybersecurity systems will start with a generic knowledge of the history of attack patterns, exploit sources, and vulnerabilities. But they will rapidly learn the normal ranges of behavior of the networks and systems they look after, and will then identify suspicious departures from that normal behavior. Occasionally, there may be human intervention to retrain neural nets after attacks, but too few security experts are being educated to meet even the existing need. The aim is clear: systems must learn to train themselves.
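To make the baseline-learning idea concrete, here is a minimal sketch assuming a small autoencoder over per-flow traffic features. The feature count, network shape, and alert threshold are illustrative assumptions, not a production design: the model is trained only on traffic considered normal, and flows it reconstructs poorly are flagged as departures from the baseline.

# Minimal sketch: an autoencoder learns the "normal" shape of per-flow
# features (bytes, packets, duration, and so on); flows it cannot reconstruct
# well are flagged as suspicious departures from the learned baseline.
# Feature count, layer sizes, and threshold are illustrative assumptions.
import torch
import torch.nn as nn

N_FEATURES = 8  # assumed number of per-flow features

autoencoder = nn.Sequential(
    nn.Linear(N_FEATURES, 4), nn.ReLU(),
    nn.Linear(4, 2), nn.ReLU(),          # compressed representation of "normal"
    nn.Linear(2, 4), nn.ReLU(),
    nn.Linear(4, N_FEATURES),
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_on_baseline(normal_flows, epochs=50):
    """Learn the normal range of behavior from benign traffic only."""
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(autoencoder(normal_flows), normal_flows)
        loss.backward()
        optimizer.step()

def is_suspicious(flow, threshold=0.1):
    """High reconstruction error means behavior the model has not seen as normal."""
    with torch.no_grad():
        error = loss_fn(autoencoder(flow), flow).item()
    return error > threshold

# Example with synthetic placeholder data: train on "normal" flows, then
# score a flow whose values sit far outside the training range.
baseline = torch.rand(1000, N_FEATURES)
train_on_baseline(baseline)
print(is_suspicious(torch.rand(N_FEATURES) * 5))  # likely flagged as suspicious

In practice the baseline would be refreshed continuously and the threshold tuned per environment, which is exactly where the self-training ambition comes in.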

The notion of self-learning machines has been around for at least forty years in the wider AI world. At the core of Machine Intelligence and Machine Learning is the biological concept of the autonomously mutating, self-replicating computer program. We already have machines that learn, and they learn in such a way that the capabilities they evolve can surprise even their developers. Right now, the problems they learn to tackle are fairly constrained: they mostly provide search answers and play games, or both, as in winning at Jeopardy!. Learning to play and win games such as chess and Go is non-trivial. Nevertheless, the approaches used for these deep strategy games will likely differ from those needed in the multiplayer, positive-feedback game of cybersecurity. AI is just not there yet.

It's something to look forward to: a brigade of smart cyber defenses that understand what's going on, observe anomalies, and learn how to handle them. Each instance then learns from the others and upgrades itself to become an even smarter composite machine. Don't expect anything exactly like that in your network tomorrow, but it's coming.
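One plausible mechanism for that "learns from the others" step, assumed here as a sketch rather than a description of any existing product, is simple federated averaging: each site trains its own detector locally, and the sites periodically merge their model weights so every instance inherits what the others have seen.

# Minimal sketch of defenses sharing what they have learned: the parameters
# of several locally trained detector models are averaged into one composite
# model that every site can adopt. Federated averaging is an assumption, not
# something the article prescribes.
import torch

def merge_defenses(local_models):
    """Average the parameters of several locally trained detector models."""
    merged = {}
    for name in local_models[0]:
        merged[name] = torch.stack([m[name] for m in local_models]).mean(dim=0)
    return merged

# Example: three sites share the state dicts of identically shaped detectors.
site_models = [
    {"layer.weight": torch.randn(4, 8), "layer.bias": torch.randn(4)}
    for _ in range(3)
]
shared = merge_defenses(site_models)
print(shared["layer.weight"].shape)  # torch.Size([4, 8])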

AI will become an embedded part of existing software security techniques and skills. After a threat is identified, it must be countered, and countering strategies will become as rich as threat identification. Sometimes network paths carrying an attack will be blocked with NFV commands. When possible, affected traffic will be cleaned. Deploying multi-layered, hybrid response solutions, as is done today in Arbor Networks' products, is just a prologue to NFV-enabled, multi-phase AI responses.
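As a minimal sketch of the response side, the snippet below uses an ordinary iptables drop rule as a stand-in for the NFV commands envisioned above, and a hypothetical quarantine_and_clean() placeholder for the traffic-cleaning phase; neither is tied to any particular vendor's product.

# Minimal sketch: once a source is classified as hostile, pick a response
# proportionate to the threat. A plain iptables drop rule stands in for
# richer NFV/SDN commands; quarantine_and_clean() is a hypothetical
# placeholder for the "clean affected traffic" step.
import ipaddress
import subprocess

def block_source(ip):
    """Drop all traffic from a source judged to be carrying an attack.
    Requires root on a Linux host; shown here as the bluntest response."""
    ipaddress.ip_address(ip)  # validate before passing the string to a command
    subprocess.run(["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"], check=True)

def quarantine_and_clean(ip):
    """Hypothetical placeholder: reroute the flow through a scrubbing
    virtual network function instead of dropping it outright."""
    print(f"rerouting {ip} through a scrubbing function (not implemented)")

def respond(ip, severity):
    """Choose a countermeasure based on a threat score between 0.0 and 1.0."""
    if severity > 0.9:
        block_source(ip)          # hard-block the path carrying the attack
    else:
        quarantine_and_clean(ip)  # salvage what can be cleaned

# Example with a documentation-range address; takes the cleaning branch.
respond("203.0.113.7", severity=0.6)

A real deployment would steer flows to a scrubbing virtual network function rather than calling the host firewall directly, but the decision logic is the same: block what is clearly hostile, clean what can be salvaged.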

Malthusian malware

But the problem will grow; a positive-feedback game does not relent. As autonomous malware defenses become smarter, the malware will become smarter too. The bad guys, particularly those with better funding, will also use this technology in due course. Eventually it becomes an autonomic arms race happening at processor speed, with malware AI on one side and anti-malware AI on the other. Where this ends is uncertain, but it's likely both sides will have their share of victories until they reach a sentient state in a WarGames-like stalemate scenario.


