
Closing the Cyber Attack Gap with AI

By: Jacob Ukelson, D.Sc.

The gap between the ability of cyber attackers to breach IT networks and the effectiveness of cyber defenses is widening. Two factors are driving this: automation and sophistication. Automation lets attackers pursue many more targets at very little cost to themselves, while the increased sophistication of attacks makes them harder to detect and defend against.

Cyber defense today still relies heavily on manual processes. Security operations center (SOC) personnel must respond to alerts and decide which ones to investigate. With attackers highly automated and defenders largely manual, it is easy to understand why losses to cybercrime keep rising.

Artificial intelligence would seem to be an obvious answer for closing the gap. Across industries, after all, AI is replacing or augmenting human expertise with automated expert systems. The promised benefits of AI in cyber defense, however, have gone largely unfulfilled. This article reviews how AI technologies have been applied to cyber defense and examines emerging AI approaches that could make significant progress toward closing the cyber attack gap.

AI in cybersecurity

To begin, we should clarify some of the terms used when discussing AI. Cybersecurity vendors often claim their products use AI, machine learning, or machine reasoning, sometimes interchangeably. Think of AI as an umbrella term encompassing different computer-based technologies that replicate human problem-solving or decision-making. Machine learning and machine reasoning are two types of AI technology, each used to solve different problems.

Machine learning applies statistical analysis and pattern recognition to large data sets to uncover patterns of behavior. Some common applications are speech and image recognition, traffic predictions, and fraud detection. Machine learning is also the more widely used AI technology in cybersecurity. It is primarily used for threat detection (real-time or post-event). For example, machine learning is the technology behind behavioral-based endpoint security systems. It is also used for anomaly-based threat detection in large networks, integrating and processing very large and disparate event log files.
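
As a rough illustration of the idea, the sketch below flags unusual host behavior in features that might be derived from event logs. It is only a sketch: the feature names, the example values, and the choice of scikit-learn's IsolationForest are assumptions made for illustration, not a description of any particular product.

    # Minimal sketch of anomaly-based threat detection over log-derived features.
    # Assumes scikit-learn is installed; feature names and values are illustrative,
    # not taken from any real log format or vendor product.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical per-host features extracted from event logs:
    # [logins_per_hour, failed_logins, bytes_out_mb, distinct_dest_ports]
    baseline = np.array([
        [12, 1, 150,  8],
        [10, 0, 120,  6],
        [14, 2, 180, 10],
        [11, 1, 140,  7],
        [13, 0, 160,  9],
    ])

    # Fit the model on "normal" activity, then score new observations.
    model = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

    new_events = np.array([
        [12,  1,  155,   8],   # similar to the baseline
        [90, 40, 5000, 300],   # bursty logins and traffic, likely anomalous
    ])

    for features, label in zip(new_events, model.predict(new_events)):
        status = "ANOMALY" if label == -1 else "normal"
        print(status, features)

In practice the same pattern applies at much larger scale: the detector learns a statistical baseline of normal behavior from historical logs and flags deviations from it, which is also why evasive attackers and noisy baselines translate directly into missed detections and false positives.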

Two issues have impeded progress in machine learning-based threat detection: the systems tend to generate too many false positives, and attackers are often able to modify their techniques to evade detection. While AI has helped SOC teams manage workloads, it has not reversed the tide of breaches.

Organizations have concluded that they cannot stop all breaches. They are now pursuing a strategy of prevention combined with resilience. This approach seeks to minimize the chances of being breached while also minimizing the potential loss in the event of a breach. New developments in applying machine reasoning to this challenge are showing promise.

Machine reasoning-based risk and resilience management

Machine reasoning is a well-developed AI technology that many of us use in our daily lives. Personal assistants such as Siri and Alexa use machine reasoning to generate answers to the questions we ask—including questions they have never encountered before. So how can machine reasoning be applied to the challenge of prevention and resilience?

While machine learning relies on statistically identifying hidden patterns in large amounts of data, machine reasoning relies on facts and relationships and on drawing conclusions from them. Machine reasoning works with concepts and ideas encoded as symbols. Reasoning systems represent data as semantic knowledge graphs, which let the machine understand the meaning of the data through the semantics encoded in the graph and draw conclusions by analyzing the graph's concepts and applying them to new data.
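
To make the contrast concrete, the sketch below encodes a few facts as subject-relation-object triples and applies one simple rule to infer which assets are exposed. The hosts, relations, and rule are hypothetical; real reasoning systems use far richer ontologies and inference engines. The sketch only illustrates the fact-and-relationship style of inference.

    # Minimal sketch of reasoning over a semantic knowledge graph.
    # All entities, relations, and the rule are hypothetical, for illustration only.

    # Facts encoded as (subject, relation, object) triples.
    facts = {
        ("web01", "runs", "apache_2.4.49"),
        ("apache_2.4.49", "has_vulnerability", "CVE-2021-41773"),
        ("web01", "connects_to", "db01"),
        ("db01", "stores", "customer_records"),
        ("internet", "reaches", "web01"),
    }

    def objects(subject, relation):
        """Return every object linked to `subject` by `relation`."""
        return [o for s, r, o in facts if s == subject and r == relation]

    # Rule: an asset is at risk if an internet-reachable host runs vulnerable
    # software and connects to a host that stores that asset.
    def assets_at_risk():
        risks = set()
        for host in objects("internet", "reaches"):
            vulnerable = any(
                objects(software, "has_vulnerability")
                for software in objects(host, "runs")
            )
            if not vulnerable:
                continue
            for downstream in objects(host, "connects_to"):
                for asset in objects(downstream, "stores"):
                    risks.add((asset, host, downstream))
        return risks

    for asset, entry, store in assets_at_risk():
        print(f"{asset} is at risk: entry via {entry}, stored on {store}")

The point of the example is that the conclusion ("customer_records is at risk") is never stated anywhere in the data; it is derived from the encoded facts and relationships, which is precisely the kind of inference that prevention and resilience planning requires.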


