Why Rule-Based Log Correlation Is Almost a Good Idea - Part 2
Modeling attack scenarios? Is it possible?
By: Gorka Sadowski
Dec. 12, 2011 07:00 AM
Rule-based log correlation is based on modeling attack scenarios
"By managing all your logs you get universal visibility in everything that is happening in your IT infrastructure." Yes, this is a true statement.
But to tell that you can easily flag security attacks using rule-based correlation is a major overstatement.
Rule-based correlation essentially automates the reasoning "If this is happening here" and "That is happening there," then "We have a problem." More precisely, "If this precise event is taking place at this particular time on this specific device" and "That precise event is taking place at that particular time on that specific device," then "We may have a problem." Of course, you can set a time window (not too wide, as we'll see later) and you can specify a group of devices (not too many, as we'll also see later), but at the core of the engine it all gets translated into a multitude of single, specific rules.
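To make this concrete, here is a minimal sketch of what a single such rule boils down to; the event tuple, names, and values are hypothetical, purely for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical event shape: (timestamp, device, event_id) -- illustrative only.

def rule_fires(events, event_a, device_a, event_b, device_b, window):
    """Fire when `event_a` is seen on `device_a` AND `event_b` is seen on
    `device_b`, with the two occurrences no more than `window` apart."""
    times_a = [ts for ts, dev, eid in events if dev == device_a and eid == event_a]
    times_b = [ts for ts, dev, eid in events if dev == device_b and eid == event_b]
    return any(abs(ta - tb) <= window for ta in times_a for tb in times_b)

# One rule covers exactly one (event, device, event, device) pairing.
events = [
    (datetime(2011, 12, 12, 7, 0), "fw-01",  "port_scan"),
    (datetime(2011, 12, 12, 7, 3), "web-01", "auth_failure"),
]
print(rule_fires(events, "port_scan", "fw-01",
                 "auth_failure", "web-01", timedelta(minutes=5)))  # True
```

Note how everything is hard-coded: the two event IDs, the two devices, and the window. Cover a second pairing and you need a second rule.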
Rule-based log correlation automates the inference of information by looking at several sets of logs. Not a bad idea from a theoretical standpoint, but far, far from the "end-all, be-all" of security touted by many vendors as a "plug and play," easy-to-use, low-TCO solution.
We first need to model a specific attack scenario by understanding the different steps involved in that attack, and then program rules to synthesize these steps.
Once this phase is done, and only after it is properly done (and as we'll see below, it is not an easy task), you can put together a set of correlation rules to automate the decision of whether or not an attack is taking place, and raise an alert if required.
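As a rough illustration (the scenario, event names, and time windows below are invented for this example, not drawn from any product), a three-step scenario might be encoded like this:

```python
from datetime import datetime, timedelta

# Hypothetical three-step scenario: scan, then brute force, then a
# privileged login -- each step must follow the previous one in order,
# within its allowed delay. All names and windows are made up.
SCENARIO = [
    ("port_scan",          None),                   # first step, no predecessor
    ("auth_failure_burst", timedelta(minutes=10)),  # within 10 min of the scan
    ("priv_login",         timedelta(minutes=5)),   # within 5 min of the burst
]

def scenario_matches(events):
    """Walk a time-ordered stream of (timestamp, event_id) pairs and
    report whether all steps matched in sequence. Deliberately naive:
    no restarts, no overlapping instances, no per-device tracking."""
    idx, last_ts = 0, None
    for ts, event_id in sorted(events):
        step_id, max_delay = SCENARIO[idx]
        if event_id != step_id:
            continue
        if max_delay is not None and ts - last_ts > max_delay:
            continue  # this occurrence is too late to extend the chain
        last_ts, idx = ts, idx + 1
        if idx == len(SCENARIO):
            return True  # every step matched in order -> alert
    return False
```

Note how rigid the encoding is: the steps, their order, and their windows are all fixed up front.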
Simple enough, right?
At a high level, yes.
But think about it. Are attacks deterministic in nature? No. So trying to model an attack as a series of discrete steps will just not work.
An outdated, static approach that doesn't scale
This static model is outdated, ineffective, expensive to set up and maintain, and it doesn't scale: a typical example of a bad idea.
Especially when it's positioned as an easy-to-set-up, easy-to-use, out-of-the-box, deploy-and-forget solution - because it's not.
In rule-based log correlation, each attack scenario needs to be precisely modeled, and then a set of rules needs to be defined to defend against this attack. Any small variation, any minute difference from this scenario, will require different rules, in a different order, with exceptions and extensions.
How many attack scenarios exist out there?
Is this a model that scales?
Ask your favorite SaaS provider and/or MSSP how many correlation rules they manage for their clients, and they'll give you figures in the tens of thousands.
I know large MSSPs that manage 60,000 correlation rules, others 80,000 and more. And there is no guarantee that an attack will be stopped. Yet there are plenty of false positives.
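The arithmetic behind those figures is easy to reconstruct; the numbers below are purely illustrative, not from any particular MSSP:

```python
# Back-of-the-envelope rule count, with illustrative numbers only.
scenarios     = 500   # distinct attack scenarios you try to cover
variants      = 20    # orderings, timing windows, evasion tweaks per scenario
device_groups = 6     # specialized rule copies per device group or client segment
print(scenarios * variants * device_groups)  # 60000
```

Even modest per-scenario counts multiply out to exactly the order of magnitude reported above.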
Is your organization ready to manage this number of correlation rules?
Let's see why so many rules are required, and you'll understand that behind the general "good idea" principle you are in fact betting on an approach that is doomed to failure.