Whether you’ve spent your career in cyber security on the vendor/provider side or the enterprise side of the table, you’ve no doubt participated in the circus that is the ‘evaluation’. Whether you’re the buyer trying to make a smart purchase, or the seller trying to make a smart sale, the evaluation is a fact of life.
That said, evaluations are one of the most difficult parts of the role. I often refer to the process using a term borrowed from Ben Kepes: a "goat rodeo". In my experience, evaluations come in four stages. Stage 1 is defining the problem to be solved. Stage 2 is defining the success criteria. Stage 3 is executing the testing. Stage 4 is determining the outcome. I'll give a brief overview of each stage here and set up a five-part series, starting with this article, that will detail the challenges of each stage and strategies for overcoming them.
Stage 1 – Definition of the Problem to be Solved
Many technology and service evaluations perish before they get out of the starting blocks. The main cause, nearly every time, is poor definition of the problem to solve. If the word "better" appears in your problem statement, you're likely headed down that path. I'm going to assume you've written a problem statement. If you haven't, that is step #1.
A problem statement must identify what's deficient, why you believe it's deficient, and by how much it needs to improve. Be specific! A problem statement should also provide concrete evidence of the deficiency. If you believe that your current SIEM is insufficient for your organization's needs, you should know why. Perhaps it's unable to keep up with the volume of events that feed into it for analysis. Perhaps it's missing some critical ability to parse or ingest data. Perhaps it is not interoperable with a new system or component your organization depends on. Identify the deficiency and present the concrete evidence; that is your problem statement.
Stage 2 – Definition of Success Criteria
Once you have the problem clearly defined, you'll need to lay out exactly what "better" means. Because "better" is open to individual interpretation, concrete success criteria are essential to evaluating successfully. Clear success criteria demand binary (yes/no) answers: for each criterion, the tool or service you're evaluating either meets it or it doesn't.
When developing success criteria, we're mentally pulled toward building a rating scale, with "good, better, best" answers for each criterion. In my experience this is a poor choice. Some argue that a "good, better, best" scale lets you use fewer success criteria and still create clear separation in an evaluation. That sounds appealing, but remember that "good, better, best" is subjective: two people can argue endlessly over whether one solution performs better than another against a particular criterion.
I believe that having a reasonable number of success criteria with binary answers is the most effective way to go.
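To make the idea concrete, binary criteria reduce an evaluation to a checklist: each candidate either passes or fails each criterion, and the tally is objective. Here is a minimal Python sketch of that bookkeeping; the vendor names and criteria are hypothetical, not from any real evaluation:

```python
# Hypothetical binary success criteria for a SIEM evaluation.
# Each criterion takes a yes/no answer -- no "good, better, best" scale.
criteria = [
    "ingests our peak event volume without dropping data",
    "parses our firewall's log format out of the box",
    "integrates with our ticketing system",
]

# Results recorded during testing: True = criterion met, False = not met.
results = {
    "Vendor A": [True, True, False],
    "Vendor B": [True, False, False],
}

def score(answers):
    """Count criteria met; binary answers leave nothing to argue over."""
    return sum(answers)

for vendor, answers in results.items():
    print(f"{vendor}: {score(answers)}/{len(criteria)} criteria met")
```

The point of the sketch is that `score` needs no judgment calls: with yes/no answers, two evaluators looking at the same test results will always produce the same tally.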
Stage 3 – Execution of Testing
There are entire guides and books written on this subject, so I will clearly not do it justice in a few paragraphs. Executing the testing against your success criteria to decide whether something solves the problem statement is critical, but difficult. The reason is simple: whether you're evaluating a SIEM or an anti-malware endpoint tool, the environment you'll be testing it in is rarely representative of where it will ultimately be deployed. It really is that simple.
I remember when my team helped customers evaluate dynamic application security testing tools. Every vendor would run their scanner against their own demo site. Not surprisingly, the tool found every vulnerability. But when run against a real application, the results varied. Some tools were better designed to focus on Java applications, while others were best in .NET environments. Asking them to be great at 'everything' is a fool's errand.
I'm reminded of the quote, "It is unreasonable to judge a fish by its ability to climb a tree". This seems intuitive, and yet so many evaluations go smoothly until the product or service enters production. Then suddenly nothing works, and we're disappointed. There are several more issues with execution of testing, but this is clearly the most glaring. I'll address others in a follow-up article.
Stage 4 – Determining the Outcome
You’ve defined a concrete problem statement.
You’ve defined binary success criteria.
You’ve tested effectively.
Now it’s time to pick a winner. Sounds easy, right?
It's not. Sometimes there are draws, and sometimes no contestant meets all of your success criteria. Then what? You must prepare for both outcomes. If there is a draw (tie) between contestants, you need an additional means of picking a winner, perhaps further criteria or an added step in the process. If none of the contestants fully meets the success criteria, you must be ready to decide whether the evaluation is a failure or whether you're willing to accept less-than-ideal results. That's a big decision too: defining what "good enough" looks like on paper is critical to your outcome.
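The two failure modes above can be sketched in a few lines of Python. This is a hypothetical illustration, not a prescribed process: scores are counts of binary criteria met, a tie triggers a pre-defined tie-breaker round, and a "good enough" threshold agreed on before testing decides whether a partial pass is acceptable:

```python
# Hypothetical outcome logic. Scores = number of binary criteria met.
scores = {"Vendor A": 8, "Vendor B": 8, "Vendor C": 5}
TOTAL_CRITERIA = 10
GOOD_ENOUGH = 7  # "good enough" threshold, agreed on *before* testing

def pick_winner(scores, tiebreaker_scores=None):
    """Return the leader, or None if a tie-breaker round is still needed."""
    best = max(scores.values())
    leaders = [v for v, s in scores.items() if s == best]
    if len(leaders) > 1:
        if tiebreaker_scores is None:
            return None  # draw: run the additional tie-breaker criteria
        return max(leaders, key=lambda v: tiebreaker_scores[v])
    return leaders[0]

winner = pick_winner(scores)
if winner is None:
    print("Tie -- fall back to the pre-defined tie-breaker criteria")
elif scores[winner] == TOTAL_CRITERIA:
    print(f"{winner} meets all criteria")
elif scores[winner] >= GOOD_ENOUGH:
    print(f"{winner} is 'good enough' ({scores[winner]}/{TOTAL_CRITERIA})")
else:
    print("No contestant is good enough -- the evaluation failed")
```

Note that both the tie-breaker criteria and the `GOOD_ENOUGH` threshold exist before testing starts; deciding them after you've seen the scores invites exactly the subjectivity that binary criteria were meant to remove.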
In short, you're facing a significant challenge. To get a good outcome from your evaluation, you have to knock down four pretty significant challenges, one per stage. That's not to say you're doomed, but rather that you should be careful, purposeful, and meticulous.
In the next few articles I’ll outline each of these four phases in more detail and provide some anecdotes from my experience. My intent is to have you, the reader, learn from mistakes I’ve made and lessons I’ve learned, so you get the benefit of my experience. If this is useful, I encourage you to share and provide feedback in the Comments Section at the bottom of this article. Also, I’m @Wh1t3Rabbit on Twitter, so leave a comment or find me there.
Rafal Los serves as the VP of Solution Strategy at Armor. He’s responsible for leading the various technical functions associated with designing, developing and delivering next-generation cloud security-as-a-service solutions to our clients. Rafal is also the Founder & Producer of the Down the Security Rabbithole Podcast. He previously worked as the Managing Director, Solution & Program Insight at Optiv Inc.; Principal, Strategy Security Services at HP Enterprise Security Services; and Senior Security Strategist at HP Software.
As an IT security professional, Rafal gained experience in some of the world's most challenging business environments. His responsibilities included budgets, risk analysis, process creation and adoption, internal audit and compliance strategies. His professional experience has taken him from budding ".com" companies, to a security boutique shop, to one of the world's largest and most complex enterprises – always meeting challenges head-on and with a positive attitude. He has been the catalyst for change in many organizations, building bridges across enterprises and developing permanent successful strategies for growth and prosperity.