
The #1 Reason Your Last Pentest Sucked (and how to fix it).

"Conducting a Penetration Test without a Threat Model is like going grocery shopping  without a shopping list and buying the first 20 items you see..."  - Chris Ream, Founder/CEO

A penetration test is designed to discover weaknesses attackers could leverage to bypass security controls in your application or infrastructure. Unfortunately, the problem with most penetration testing, uninformed by a threat model, is that it ignores the value of the data you're trying to protect and fails to identify the type of attacker, their skill level, and their likely objective.

 

No matter how long a penetration testing engagement lasts, it still has a start and end date. Because of this constraint, it is unlikely that a penetration testing team will reveal every risk present; in fact, that's not usually the goal.

 

Generally, a penetration test shows the most apparent potential attack vectors and rates them according to severity. Additionally, the number of penetration testers on an engagement and their cumulative knowledge and experience are finite quantities that create asymmetry compared to the nearly unlimited time and skill of all threat actors interested in compromising your resources.

 

For the reasons stated above, conducting a penetration test without first generating a threat model is like going grocery shopping without a list and buying the first 20 items you see. Sure, you may come home with some items that you could prepare a meal with, but more often than not, most of the things you bring home would have little value for that purpose.

 

Similarly, although conducting a penetration test without a threat model will yield some valuable findings, more often than not, those findings do not correlate with the goals or methods of threat actors. This dramatically reduces the value of an uninformed penetration test.

It may come as no surprise that not all threat models are created equal. Most threat modeling exercises focus on how data transits from one security boundary to another but fail to consider the value of the data, the type of attacker, their skill level, and their likely objective. Why are these factors so important? Because knowing your data and your enemy allows you to make better decisions regarding where to focus your security initiatives to offset the inherent asymmetry of defending your resources.

 

This is why Leviathan Security Group has pioneered an entirely novel approach to threat modeling that focuses on the most important aspects of defensive engagements. The method is known as the TAGTEAM Methodology for Scientific Threat Modeling, and it's turning out to be a real game-changer! Why? Because TAGTEAM applies concepts from advanced game theory to produce actionable threat models grounded in actual scientific data instead of guesswork. In fact, the acronym TAGTEAM stands for "The Advanced Game Theory Evaluation and Assessment Methodology."

At the heart of the TAGTEAM Methodology for Scientific Threat Modeling is the fundamental concept that threat actors are rational agents (meaning skilled, intelligent attackers) who develop strategies to attack your resources based on their opportunities, beliefs, and preferences. This places threat actors in the category of an "opponent" engaged in the "game" of attacking your infrastructure.

 

In game theory, the terms "opportunities," "beliefs," and "preferences" have strict meanings that help data scientists generate a game matrix. For a threat actor, their opportunities represent their entire set of possible actions, similar to all the possible opening moves in a chess match. For example, performing a sweep of your network presents an opportunity to an attacker for the simple reason that you expose IP addresses on the internet. An example of a non-opportunity for a threat actor would be to launch an attack against a non-existent web server.
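To make that idea concrete, here is a minimal sketch of how an attacker's opportunity set can be laid out as the rows of a simple game matrix, with possible defensive postures as the columns. This is not the actual TAGTEAM tooling; the action names, control names, and payoff numbers are purely illustrative assumptions.

```python
import numpy as np

# Hypothetical opportunity set for an external threat actor (rows of the matrix)
attacker_actions = ["network_sweep", "phishing_campaign", "credential_stuffing"]

# Hypothetical defensive postures (columns of the matrix)
defender_controls = ["default_config", "hardened_perimeter", "mfa_everywhere"]

# Illustrative attacker payoffs: payoffs[i][j] is how attractive action i is
# against defensive posture j (higher = more attractive to the attacker).
payoffs = np.array([
    [3, 1, 1],  # network_sweep
    [4, 3, 1],  # phishing_campaign
    [5, 4, 0],  # credential_stuffing
])

# For each defensive posture, find the attacker's best response
for j, control in enumerate(defender_controls):
    best = attacker_actions[int(np.argmax(payoffs[:, j]))]
    print(f"Against '{control}', the most attractive action is '{best}'")
```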

 

A threat actor's beliefs represent the assumptions they make about your environment. Beliefs are not always based on observable facts but can also be inferred. For example, a threat actor may believe that your organization employs a firewall by assuming that you likely adhere to industry best practices. They need not be able to detect a firewall to believe one exists.

 

A threat actor's preferences are strongly influenced by the value judgments that constitute their motivation. For example, the preferred targets and methods of attack of a threat actor motivated by financial gain will vary significantly from those of one motivated by notoriety. Furthermore, the value of the data your organization handles greatly influences those preferences and thus the types of threat actors you may expect to see attacking your network. One would not necessarily expect a financially motivated threat actor to launch a sophisticated attack against a wiki site that does not handle financial transactions.
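Pulling the three choice determinants together, a threat actor profile can be sketched roughly as follows. The field names and example values are illustrative assumptions, not part of the TAGTEAM specification.

```python
from dataclasses import dataclass, field

@dataclass
class ThreatActor:
    name: str
    # Opportunities: actions the actor could plausibly take against your environment
    opportunities: list[str] = field(default_factory=list)
    # Beliefs: assumptions the actor holds about your environment
    beliefs: dict[str, bool] = field(default_factory=dict)
    # Preferences: relative weight the actor places on each motivation (0.0-1.0)
    preferences: dict[str, float] = field(default_factory=dict)

financially_motivated = ThreatActor(
    name="financially motivated attacker",
    opportunities=["network_sweep", "phishing_campaign", "ransomware_deployment"],
    beliefs={"firewall_present": True, "payment_data_stored": True},
    preferences={"financial_gain": 0.9, "notoriety": 0.1},
)
print(financially_motivated)
```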

 

By using the TAGTEAM Methodology for Scientific Threat Modeling, we can assign a quantitative value to each of the three choice determinants (opportunities, beliefs, preferences) for each type of threat actor and map them to the value of your data. For example, if your organization processes financial transactions, a financially motivated threat actor might receive a value of 2 or 3, whereas a threat actor motivated only by knowledge might receive 0 or 1. Why? Because for a threat actor motivated by knowledge, the risk of being caught and imprisoned far outweighs the value of the knowledge they may gain from learning to compromise your network. Conversely, for the financially motivated attacker, the potential gain of millions of dollars far outweighs the risk they assume of being caught.
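A rough sketch of what such scoring could look like in code appears below. The score scale, the example numbers, and the rule for combining them with the data's value are hypothetical illustrations of the idea, not values or formulas prescribed by TAGTEAM.

```python
# Hypothetical determinant scores (0-3) for two actor types, assuming an
# organization that processes financial transactions. The preference scores
# reflect the 2-3 vs. 0-1 contrast described above; all numbers are illustrative.
actor_scores = {
    "financially_motivated": {"opportunities": 3, "beliefs": 2, "preferences": 3},
    "knowledge_motivated":   {"opportunities": 3, "beliefs": 2, "preferences": 1},
}

# Hypothetical value (0-3) of the data the organization handles
data_value = 3

def threat_weight(scores: dict[str, int], data_value: int) -> int:
    """Combine the three choice determinants and map them to the data's value."""
    return sum(scores.values()) * data_value

for actor, scores in actor_scores.items():
    print(f"{actor}: weight {threat_weight(scores, data_value)}")
```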

 

The critical point to keep in mind is that game theory allows us to generate accurate probability metrics that inform our threat model with actual data, thus removing the guesswork from the equation.
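As a simple illustration of that step, raw threat weights can be normalized into a probability distribution over actor types. This assumes the hypothetical weights from the previous sketch and uses a plain normalization for clarity; it is not the exact TAGTEAM calculation.

```python
def to_probabilities(weights: dict[str, float]) -> dict[str, float]:
    """Normalize raw threat weights into a probability distribution."""
    total = sum(weights.values())
    return {actor: weight / total for actor, weight in weights.items()}

# Hypothetical weights carried over from the previous sketch
weights = {"financially_motivated": 24, "knowledge_motivated": 18}
print(to_probabilities(weights))
# {'financially_motivated': 0.5714..., 'knowledge_motivated': 0.4285...}
```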

 

More importantly, a threat model derived from scientific analysis allows your organization to identify and focus on the areas of greatest concern while minimizing the asymmetry inherent in defensive engineering. Moreover, a scientifically backed threat model can act as a shopping list to increase the effectiveness of threat actor simulation and penetration testing.
