I need a better pen.
That statement may mean something dramatically different depending on who just said the words. In some cases, like with me, it means I want more consistent ink and a body that fits comfortably in my hand for longer periods of time. To you that may mean something different.
In the introductory article of this series, “The Evaluation – Four Phases to Finding ‘Better’ Solutions”, the foundation was laid with general descriptions of the four phases. This month’s entry goes a little deeper into Stage 1: defining the problem to be solved. The word “definition” itself means the condition of being definite, distinct, or clearly outlined. I couldn’t have said it better myself.
The Mindset Going In
The point is, “better” means something different to everyone, and if you don’t specifically define what it means to you, odds are good that you will be disappointed with the result. The same applies to projects and technology evaluations. Ask yourself, honestly, how many times you’ve said that you need a better widget or whatever. When you look back at that definition of “better”, is it the same today as it was then? I’m going to venture a guess and say no.
So, when you look at the technology in your organization, or some process you’re looking to update or replace, assume that you’re not allowed to simply say “better”. Think about concrete, real ways you can describe the state your technology is in today and the delta that needs to be closed. Whether you’re looking to replace your SIEM, your endpoint tool, or your vulnerability management platform, you must define where the deficiency is.
First, define the current state. Identify that there is an issue, using definitive terms and language, and provide proof. Too often we rush to find something “better” when what we have is adequate for the purpose it serves. Upgraded products (the iPhone, for example) are the poster child of “new and improved” without a defined purpose. We go out and purchase the “new” purely for the sake of having something new. New is always better, right? Maybe, maybe not.
Strangely, based on the mountain of RFPs and RFIs I’ve had the opportunity to work on over the years, defining the “future state” is difficult. And where the desired future state is poorly defined, the rest of the process falls flat on its face. So, let’s dig into how you can avoid words like “better” and run your evaluation based on data, not gut feeling. But even before we address the current state or our desired future state, there’s a very important device for keeping you on track.
The Parts of the Definition of the Problem
I bring you to Part 1 of Stage 1: the mission statement. It’s interesting to think about how many endpoint security products there are. Each one claims to be better than the rest, and certainly better than previous versions. But the one that is “better” is likely so because it’s better for you. It meets criteria you’ve set up front, even before evaluating. You will need to write a mission statement that sets forth the overarching purpose of the entire evaluation: define why the current technology is deficient and what the future state is. Keep your mission statement visible while you work on the next parts, as a constant reminder to keep your eye on the prize.
Part 2, identifying a current issue or deficiency (or several), and Part 3, deciding on a future state for each one, are separate exercises but can be done simultaneously. This is difficult, so ground yourself in reality. Define tasks or features that bring value to your organization but that your current technology does not provide. For example: an endpoint security tool that doesn’t support the operating system version your company is switching to. That is a clear deficiency, and now an evaluation criterion. The trouble begins when vendors play the “better feature” game. You’ve heard vendors claim their product detects 100% of all known and unknown attacks, where clearly your current technology cannot. The fact is, proving that is very difficult without some contrived process that ultimately showcases the very thing you’re trying to prove. We’ll cover that in a future post.
When identifying specific deficiencies, the magic number seems to be somewhere between 3 and 7 criteria. Odd numbers are better (3, 5, 7) because they leave less room for a “tie”. So list out those deficiencies, then take a pass at the list to make sure each one is concrete, objective, and binary. Let’s take a quick look at each of those three requirements.
Concrete means the requirement can be clearly understood, with as much qualifying detail as possible. For example: a yellow #2 pencil that is pre-sharpened, has a whole eraser, and is no shorter than 5 inches in length. That is concrete. By contrast, a bad version of the same requirement would be “a yellow pencil.”
Objective is the second test the requirement must pass. If two people evaluate a technology against your criteria, they must reach the same conclusion based on fact, not feelings. This is difficult from a requirements-setting perspective but absolutely critical. Your evaluation can’t rest on personal opinion.
Binary decisions are the best kind. If I were looking for that yellow #2 pencil that is pre-sharpened, with a whole eraser, and no shorter than 5 inches in length, and you gave me one with a used-up eraser, anyone could look at it and say with certainty that “no”, it did not meet the requirement. You never want to put yourself in a position where a requirement is “sort of” met or “mostly” met. This is why you should endeavor not to assign scales for evaluation: in my years of experience, assigning a 1-10 scale (1 worst, 10 best) to an evaluation is a poor choice.
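To make the concrete/objective/binary idea tangible, here is a minimal sketch of such a checklist in Python, using the pencil example above. The requirement names and pencil attributes are hypothetical illustrations, not from any real tool; the point is that every check returns a strict pass or fail, with no 1-10 scoring.

```python
def evaluate(candidate, requirements):
    """Check a candidate against binary requirements.

    Returns (passed_all, failures): each requirement either passes
    or fails -- there is no "mostly met" and no numeric scale.
    """
    failures = [name for name, check in requirements.items()
                if not check(candidate)]
    return (len(failures) == 0, failures)

# Five (an odd number) concrete, objective, binary criteria for the
# hypothetical pencil requirement described in the text.
requirements = {
    "yellow": lambda p: p["color"] == "yellow",
    "number 2": lambda p: p["grade"] == 2,
    "pre-sharpened": lambda p: p["sharpened"],
    "whole eraser": lambda p: p["eraser_pct"] == 100,
    "at least 5 inches": lambda p: p["length_in"] >= 5.0,
}

# A pencil with a used-up eraser: it fails exactly one criterion,
# so it clearly does not meet the requirement.
pencil = {"color": "yellow", "grade": 2, "sharpened": True,
          "eraser_pct": 40, "length_in": 6.0}

passed, failures = evaluate(pencil, requirements)
print(passed, failures)  # False ['whole eraser']
```

Two evaluators running this checklist against the same pencil will always reach the same verdict, which is exactly the objectivity test described above.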
In the End…
Some say that the pen is mightier than the sword. Others may disagree. But by coming together and having a definition of the problem and a clear goal of where we want to be, maybe it becomes obvious that what we really need is a combination of the two (and a digital clock)! But you’ll never know until you actually do it.
Now that you have your overall mission statement, have identified an odd number of your most crucial issues, and have listed them with both current and future states and detailed criteria, you’re ready to move to the next step. That’s in the next article, The Evaluation: Stage 2 – Definition of Success Criteria. Stay tuned!
Rafal Los serves as the VP of Solution Strategy at Armor. He’s responsible for leading the various technical functions associated with designing, developing and delivering next-generation cloud security-as-a-service solutions to our clients. Rafal is also the Founder & Producer of the Down the Security Rabbithole Podcast. He previously worked as the Managing Director, Solution & Program Insight at Optiv Inc.; Principal, Strategy Security Services at HP Enterprise Security Services; and Senior Security Strategist at HP Software.
As an IT security professional, Rafal gained experience in some of the world’s most challenging business environments. His responsibilities included budgets, risk analysis, process creation and adoption, internal audit, and compliance strategies. His professional experience has taken him from budding “.com” companies, to a security boutique shop, to one of the world’s largest and most complex enterprises, always meeting challenges head-on and with a positive attitude. He has been the catalyst for change in many organizations, building bridges across enterprises and developing permanent, successful strategies for growth and prosperity.