Why Rule-Based Log Correlation Is Almost a Good Idea - Part 2
Modeling attack scenarios? Is it possible?

Rule-based log correlation is based on modeling attack scenarios
Back to the visibility aspect.

"By managing all your logs you get universal visibility in everything that is happening in your IT infrastructure." Yes, this is a true statement.

But to claim that you can easily flag security attacks using rule-based correlation is a major overstatement.

Rule-based correlation essentially automates the logic of "if this is happening here" and "that is happening there," then "we have a problem." More precisely: "if this precise event is taking place at this particular time on this specific device" and "that precise event is taking place at that particular time on that specific device," then "we may have a problem." Of course, you can set a time window (not too wide, as we'll see later) and you can specify a group of devices (not too many, as we'll also see later), but at the core of the engine this gets translated into a multitude of single, specific rules.
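To make that concrete, here is a minimal sketch of such a rule in Python. Everything in it is hypothetical: the event names, device names, and field layout are illustrative assumptions, not taken from any particular SIEM product.

from dataclasses import dataclass

# Hypothetical log events; field names are illustrative only.
@dataclass
class LogEvent:
    timestamp: float   # seconds since epoch
    device: str
    event_type: str

def correlate(events, first_type, second_type, devices, window_seconds):
    """Fire when first_type on one watched device is followed by
    second_type on a watched device within the time window."""
    alerts = []
    watched = [e for e in events if e.device in devices]
    for a in watched:
        if a.event_type != first_type:
            continue
        for b in watched:
            if (b.event_type == second_type
                    and 0 < b.timestamp - a.timestamp <= window_seconds):
                alerts.append((a, b))
    return alerts

# Example: a burst of failed logins on a web server followed by an
# outbound connection from the database server within 5 minutes.
events = [
    LogEvent(1000.0, "web-01", "auth_failure_burst"),
    LogEvent(1120.0, "db-01", "outbound_connection"),
]
print(correlate(events, "auth_failure_burst", "outbound_connection",
                {"web-01", "db-01"}, window_seconds=300))

Note that each combination of event pair, device group, and time window is effectively its own rule; that combinatorial fan-out is exactly what the rest of this article is about.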

Rule-based log correlation automates the inference of information by looking at several sets of logs. Not a bad idea from a theoretical standpoint, but far, far from the security "be-all and end-all" touted by many vendors as a "plug-and-play," easy-to-use, low-TCO solution.

We first need to model a specific attack scenario, understand the different steps involved in this attack, and then program rules to synthesize these steps.

Once this phase is done, and only after it is properly done (as we'll see below, it is not an easy task), you can put together a set of correlation rules to automate the decision of whether or not an attack is taking place, and raise an alert if required.

Simple enough, right?

At a high level, yes.

But think about it. Are attacks deterministic in nature? No. So trying to model an attack as a series of discrete steps will just not work.

An outdated, static approach that doesn't scale
This reminds me of the early days of IDS, when detection was based on recognizing static patterns. Each small variation of a known attack required a new signature, and although thousands of new signatures were constantly being added, existing attacks still went right through undetected while the signatures generated numerous false positives.

This static model is outdated, ineffective, expensive to set up and maintain, and it doesn't scale: a typical example of a bad idea.

Especially when it's positioned as an easy-to-set-up, easy-to-use, out-of-the-box, deploy-and-forget solution, because it's not.

In rule-based log correlation, each attack scenario needs to be precisely modeled, and then a set of rules needs to be defined to defend against this attack. Any small variation, any minute difference from this scenario, will require different rules, in a different order, with exceptions and extensions.
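As a toy illustration of that brittleness, consider exact-match rule semantics; again, the field names and values here are hypothetical, chosen only to show the failure mode.

# A correlation rule keyed on exact event attributes (hypothetical fields).
rule = {"event_type": "auth_failure_burst", "device": "web-01"}

def matches(rule, event):
    # Exact-match semantics: every rule field must equal the event's field.
    return all(event.get(k) == v for k, v in rule.items())

# The modeled attack step matches...
print(matches(rule, {"event_type": "auth_failure_burst", "device": "web-01"}))  # True

# ...but trivially varied attacks (slower failures, a different host) do not,
# so each variation needs yet another rule.
print(matches(rule, {"event_type": "auth_failure_slow", "device": "web-01"}))   # False
print(matches(rule, {"event_type": "auth_failure_burst", "device": "web-02"}))  # False

Multiply this by every scenario, every variant of every scenario, and every device group, and the rule count explodes.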

How many attack scenarios exist out there?

Is this a model that scales?

Ask your favorite SaaS and/or MSSP how many correlation rules they manage for their clients, and they'll give you figures in the tens of thousands.

I know large MSSPs that manage 60,000 correlation rules, others 80,000 and more. And there is still no guarantee that an attack will be stopped. Yet there are plenty of false positives.

Is your organization ready to manage this number of correlation rules?

Let's see why so many rules are required, and you'll understand that behind the general "good idea" principle, you are in fact betting on an approach that is doomed to failure.

About Gorka Sadowski
Gorka is a natural-born entrepreneur with a deep understanding of technology, IT security, and how these create value in the marketplace. Today he offers innovative European startups the opportunity to benefit from the Silicon Valley accelerator ecosystem. Gorka spent the last 20 years initiating, building, and growing businesses that provide technology solutions to the industry. From General Manager for Spain, Italy, and Portugal at LogLogic, defining next-generation log management and security forensics, to Director at Unisys France, bringing cloud security service offerings to market; from Director of Emerging Technologies at NetScreen, defining the next-generation firewall, to Director of Performance Engineering at INS, removing WAN and Internet bottlenecks, Gorka has always been involved in innovative technology and IT security solutions, creating successful business units within established groups and helping launch breakthrough startups such as KOLA Kids OnLine America, a social network for safe computing for children; SourceFire, a leading network security solution provider; and Ibixis, a boutique European business accelerator.


