When I was at MAD Security, our VP of Professional Services, Cliff Neve, had a saying: "It's the dumb stuff." What Cliff usually meant was that the simplest, most obvious concepts are likely the cause of most of our pain.
And nowhere in security do I find more causes of muddled thinking and confusion than the concepts of vulnerability and threat.
I was reminded of this during a conversation I had over the weekend with a long-time security professional, who said:
"We're phishing our users so that we can reduce the number of threats against them."
I hear this kind of confusion of logical type often, even when I'm talking to the best security pros. And few of them realize the significance of the mistake they're making. But, going back to our CISSP (definitions here):
Vulnerability — A flaw, loophole, oversight, or error that can be exploited to violate system security policy.
Threat — A natural or man-made event that could have some type of negative impact on the organization.
The other key definition is that of "attack" or "exploit":
Attack (or Exploit) - The act by which a threat takes advantage of a vulnerability.
These all combine in my all-time favorite risk equation (sometimes misattributed to Ira Winkler):
Risk = ((Threat * Vulnerability) / Countermeasures) * Value
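To make the equation concrete, here's a minimal sketch of it as a function. The 0-10 scoring scales and the sample values are entirely hypothetical, chosen only for illustration; they don't come from any standard or from the original equation's author.

```python
def risk(threat, vulnerability, countermeasures, value):
    """Risk = ((Threat * Vulnerability) / Countermeasures) * Value.

    All inputs are hypothetical unitless scores (e.g. 0-10 scales);
    the point is the relationship between the factors, not the units.
    """
    return (threat * vulnerability) / countermeasures * value

# An internet-facing server: high threat, moderate vulnerability,
# modest countermeasures, high asset value.
print(risk(threat=8, vulnerability=6, countermeasures=4, value=10))  # 120.0
```

Note that each factor contributes independently: doubling countermeasures halves risk even if the vulnerability itself is untouched, which is exactly the distinction the rest of this post turns on.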
When you consider the statement made by my friend, it becomes obvious: phishing testing doesn't reduce the threat against your users, it reduces their vulnerability to attacks by those threats.
And while it may seem that I'm just debating semantics, I've seen huge consequences from security professionals making this type of mistake when assessing risk, all because they mix and match these three terms.
Consider this hypothetical situation: I build an unpatched Windows 2000 machine running all default services. I then place that in a concrete bunker, unplug the machine, and turn off the power. (Call this my "Staging" environment)
If I run a risk assessment against that machine, the risk is likely some number approaching 0. But why is that the case? Think back to the risk equation above: we have implemented an enormous number of countermeasures and placed the device in a location with no threats. But the device itself is hugely vulnerable when taken out of the context of those countermeasures.
Now, suppose I roll the machine out of staging on to the internet... where there are many threats and far fewer countermeasures.
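The staging-versus-internet contrast can be sketched with the risk equation from earlier. The scores below are hypothetical illustrations of the scenario: the machine's vulnerability and value never change between the two runs; only the threat and countermeasure factors do.

```python
def risk(threat, vulnerability, countermeasures, value):
    """Risk = ((Threat * Vulnerability) / Countermeasures) * Value."""
    return (threat * vulnerability) / countermeasures * value

# The unplugged Windows 2000 box in the bunker: maximally vulnerable,
# but zero threat and heavy countermeasures. (Scores are hypothetical.)
staging = risk(threat=0, vulnerability=10, countermeasures=100, value=5)

# The same machine rolled onto the internet: identical vulnerability,
# identical value, but many threats and far fewer countermeasures.
internet = risk(threat=9, vulnerability=10, countermeasures=2, value=5)

print(staging)   # 0.0   -- risk near zero despite maximum vulnerability
print(internet)  # 225.0 -- same vulnerability, wildly different risk
```

The machine's vulnerability score is 10 in both runs; the risk swings from zero to enormous purely on the threat and countermeasure terms, which is the point of the thought experiment.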
This may sound obvious, but ask yourself: how many times have you seen this happen? I know that I personally have had at least three vehement arguments with clients in the last 5 years that came down to this situation: they were treating a lack of threats (or a preponderance of them) as an indication of a lack of vulnerability (or a preponderance of it).
And they made poor evaluations of what they should do, entirely because they were mixing up the contribution of each of these distinctly different items to their assessment of risk.