Measuring the effectiveness of security initiatives, while complicated, is necessary for a security team to get better at protecting the organization. Without a feedback loop, teams are merely throwing darts in the dark, unable to tell whether they ever hit the target. This blog post will outline a method for estimating the potential value of a proposed security control, and then extend that method to evaluate the efficacy of a deception program.
Estimating the value of a security control
As noted in blog post two, an organization has a finite amount of resources it can dedicate to protecting itself; similarly, an attacker has a finite amount of resources to dedicate to compromising an organization. It follows that individual security controls can be evaluated using the equation below:
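$$\text{value of a security control} = \frac{\text{increase in cost to the attacker}}{\text{cost to the organization}}$$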
The greater the ratio of attacker cost to organizational cost, the better! This formula looks simple, but it is simultaneously incredibly powerful and difficult to get right. Its power lies in its ability to distill an attacker’s resources into one number: infrastructure (computing power, networks, etc.), exploits, and person-hours all have an associated cost to the attacker. The downside is the difficulty involved in estimating an attacker’s cost, and how much a security control will affect it.
Preventative security controls usually directly increase the cost for an attacker to succeed in compromising an organization. As an example, a change to the password complexity requirements for employees might increase the time it takes an attacker to crack an employee’s password hash from one day to five days. In this example, one day of an attacker’s time is worth $1,000 and one day of rented infrastructure costs $500. Meanwhile, the increase in password complexity is expected to cost the organization $4,000 in lost person-hours while the IT team helps employees change their passwords. The security impact of this change is shown below:
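$$\frac{(5 - 1)\ \text{days} \times (\$1{,}000 + \$500)\ \text{per day}}{\$4{,}000} = \frac{\$6{,}000}{\$4{,}000} = 1.5$$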
Five minus one represents the additional days spent by the attacker performing their attack, which is then multiplied by their daily cost. That result is then divided by the cost to the organization to get 1.5. This change seems like an effective one; for every $1.00 spent by the organization making this change, an attacker would have to spend an additional $1.50.
Controls that increase the likelihood of detecting an attacker affect the attacker’s cost by changing the probability that they can perform an entire attack without being detected. As an example, adding a decoy credential to employees’ computers might give the organization a 20% chance of detecting an attacker who has gained access to an employee’s computer. In this example, the cost of the attacker cracking the employee’s password is the same: $1,000 for the day of work and $500 for the rented computer. The cost to the organization to deploy the credentials is $1,000. The expected increase in cost to an attacker is again shown below:
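$$\frac{\frac{1}{1 - 0.2} \times \text{attacker's daily cost}}{\$1{,}000}$$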
One divided by (one minus two tenths) represents the increased number of attacks the attacker would need to perform in order to complete one without being detected: for every attack they perform, two tenths will be detected, leaving eight tenths undetected - thus one divided by eight tenths, or 1.25 attacks on average. More formally, this multiplier is one divided by (one minus the detection chance expressed as a fraction).
This is multiplied by their daily cost and divided by the cost to the organization, resulting in 1.38. While not as impressive as the password policy change, this does increase the attacker’s cost by $1.38 for every $1.00 spent by the organization.
The catch
The process above can be great for determining which security controls are worth investing in, but there is a catch - it requires the security team to know the chances of detecting an attacker once a given control is put into place. If the last example were run again with a 30% detection chance instead of 20%, the story is very different. Plugging in the numbers again below yields:
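$$\frac{\frac{1}{1 - 0.3} \times \text{attacker's daily cost}}{\$1{,}000}$$

The attack multiplier rises from 1.25 to roughly 1.43.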
At a 20% detection rate, the organization saw increasing the password complexity as the more worthwhile investment, but at 30%, decoy credentials on workstations become the more attractive option. This means that in order to properly evaluate a control, a close approximation of the likelihood of detecting an attacker is vital.
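To get a feel for how sensitive the outcome is to that likelihood, the multiplier from the decoy example can be computed across a range of detection chances. Here is a minimal Python sketch (the function name is illustrative):

```python
# How the expected number of attacks an attacker must perform (and therefore
# their cost) grows as the chance of detecting any single attack increases.

def attack_multiplier(detection_chance: float) -> float:
    """Expected attacks needed to complete one without being detected."""
    return 1 / (1 - detection_chance)

for p in (0.10, 0.20, 0.30, 0.50, 0.90):
    print(f"detection chance {p:.0%}: {attack_multiplier(p):.2f} attacks on average")
```

Small improvements in detection rate compound quickly: at a 50% detection chance the attacker must mount two attacks for every one that succeeds undetected, and at 90% they must mount ten.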
The solution
In an ideal world, an organization is not compromised so frequently that it can - with reasonable accuracy - estimate the likelihood of detecting an attacker’s presence under different controls before the attacker has done their damage. Assuming the organization is not in such an unsavory position, it can hire professionals to perform (simulated) attacks, so that it can better understand the chances of certain controls catching an attacker without suffering the damage a real attacker would inflict. The solution to the catch in determining the value of a security control is offensive security. (For the purposes of this post, “red teaming”, “pen(etration) testing”, “purple teaming”, and “adversary emulation” - all related in different ways to “offensive security” - will be avoided due to the often misleading twists that marketing has put on the phrases.)
Offensive security is the practice of performing a subset of the steps an attacker would take to compromise an environment. At the very least, offensive security leaves out the part where actual damage is done; otherwise it would defeat the purpose of the exercise. Oftentimes, offensive security also leaves out the initial access to the environment: the organization does a little bit of hand waving and assumes that, by some unknown means, an attacker has gained access to something they should not have; from there, the attack continues.
Offensive security can be used to test the effectiveness of a deception program in two ways:
- Offensive operator(s) have no knowledge that deceptive resources are in use
- Offensive operator(s) know that deceptive resources exist somewhere, but not what or where they are
Each of these options has its own strengths, so if possible, both should be employed. When offensive operators do not know that deception is in use, the exercise accurately reflects a hypothetical scenario in which an attacker unaware of the organization’s use of deception manages to compromise it. This helps the organization better estimate the chances of its security posture detecting an attacker (whether through deception or “traditional” controls). It is, however, possible that a more skilled attacker researches their target in advance and learns that deception is part of its security program. Running an exercise where the operators know that deception is in play more accurately reflects this hypothetical scenario. Beyond more precisely determining the likelihood of catching a real attacker, using offensive operators who are aware of deception allows the organization to see whether that awareness changes the attacker’s behavior. An attacker who knows deception is in play might tread more carefully in the environment they find themselves in, in essence tarpitting themselves for fear of being caught.
Putting it all together
With a tool to estimate the value of a security control (the formula) and a method with which to approximate the cost increase a security control imposes on an attacker (offensive security), all that is left is to combine the pieces.
First, perform offensive security exercises against the environment to understand the baseline security posture. The idea at this stage is to understand how much effort the attacker needs to expend in order to achieve their goal. If it is possible to estimate the cost of a successful compromise of a particular area (e.g. Active Directory, the cloud environment, an employee’s workstation, etc.) in dollars, that is great; if not, call the cost “X” and move on to step two.
Next, make changes to the environment. A past post on this blog outlined a possible first year’s roadmap for a new deception team, but any changes that the team sees fit should suffice.
After making changes - hopefully resulting in a more secure environment - it is time to run more offensive exercises to measure the difference. Run tests against the same areas of the environment and compare the results. Using the formula discussed earlier:
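$$\frac{\text{attacker's cost after changes} - \text{attacker's cost before changes}}{\text{cost of the changes to the organization}} > 1$$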
This formula can actually be simplified a bit and rearranged to make it easier to use. Applying some algebra to move terms around results in the following:
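$$\text{attacker's cost before changes} + \text{cost of the changes to the organization} < \text{attacker's cost after changes}$$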
If, after filling in all of the variables, the right side of the equation is greater than the left side, the team has made effective use of its resources! If not, it may be worth trying to figure out why the money invested in certain security controls is not yielding the desired results. Two final notes on the testing method outlined above are worth calling out:
- The more tests of a given environment an organization performs, the more accurate the detection rates in the equations will be. The organization should be careful not to jump to conclusions based on the results of one or two tests.
- While offensive security tests of the environment are the most realistic (safe) way to estimate detection rates, the organization can also use data from real attacks it has suffered, and from similar attacks other organizations have suffered, to increase its sample size.
Wrapping up
The cadence at which an organization runs offensive security exercises will be influenced by a number of factors, but the most impactful is whether or not it has the capability in-house. If not, it may choose to engage one or more third-party vendors who specialize in attacks relevant to what the organization aims to protect. Regardless, an organization should run offensive exercises to refine its understanding of its security posture, and to track improvements over time by showing that the amount of money required to successfully attack the organization is (hopefully) increasing. Testing new scenarios can tell an organization whether or not it is prepared for an attack it hasn’t faced before. Testing the same scenarios at regular intervals can tell a security team whether it is improving or, as will be the topic of next month’s post, highlight opportunities for more deception.