A Practical Approach to Threat Modeling

Security is a hot topic these days. It is as if developers and system designers are fighting a never-ending war against those who want to damage hardware, compromise system availability, steal data, and tarnish hard-earned client trust. And as if malicious threats weren’t enough, we must also protect ourselves from unintentional damage inflicted by the accidental removal or modification of data.

The scope of this effort ranges from entire enterprise networks and the Internet itself down to a single line of code that formats a string. For the purposes of this article, everything within this scope is described as a “system.”

Some tactics can be employed to secure a system without much analysis, such as implementing a firewall on the network, requiring logins to restrict system access, employing role-based security to control which aspects of the system a user can access, and encrypting sensitive data such as social security numbers. The question is: how can you be objectively confident when claiming that your system is secure? The methodology commonly referred to as “threat modeling” organizes the review and analysis needed to answer that question.

The sample system used in this article to illustrate threat modeling is a simplified version of a web application. The public uses this system to browse a library of music CDs and request those items for short-term lending, much as they might at a public library.

Defining Threat Modeling
Threat modeling is a formal process that identifies assets and their security vulnerabilities, and analyzes and documents them. The output from this process is not a static document, but one that should be continually revised as new elements are introduced to a system or existing elements are modified.

There are several approaches to the threat modeling process. Some approaches may be better for large IT shops or enterprise-wide evaluation, while others are more suited for very small development shops or very limited-use systems. You should evaluate these approaches and modify the threat modeling methodology to the specific needs of your particular environment. This article presents the threat modeling methodology that I employ, which was originally inspired by the methodology presented by the Microsoft Application Consulting and Engineering Team, but contains some variations pulled from other sources.

It is said that a picture is worth a thousand words. It is also true that a well-crafted quote can stand in for volumes of documents and articles. In the case of threat modeling, Sun Tzu, the sixth-century B.C. author of “The Art of War,” captured the effort best: “...if you know your enemies and know yourself, you will fight without danger in battles...”

Another good piece of advice is to start early. Building threat modeling into a system during the development process is optimal, because the data entry and output points are defined during that stage, giving you the opportunity to evaluate the proposed foundation for potential vulnerabilities before any construction begins. Unfortunately, the optimal scenario is relatively rare. Threat modeling of an existing system and creating mitigations to identified vulnerabilities post-implementation offers a different set of challenges. Still, a late threat model is much better than no threat model at all.

Assembling a Threat Modeling Team
The threat modeling process should have a designated person to facilitate it and assemble the documentation. System evaluation is not a one-person job; it typically involves many participants, such as:

  • System Architects and Developers: because they are the most intimately familiar with the structure and coding of the system.
  • Network Administrators: because they are most familiar with the environment in which the system operates.
  • End Users: because they have a grasp of how the system is used on a daily basis.
  • Testers: because they test the system against the findings and evaluate how any mitigation of identified vulnerabilities affects system operation.
  • Decision Makers (Managers): because they are the ones who will determine how the system is intended to be used as well as which vulnerabilities should be mitigated based upon their risk appetite.

In some development shops, one person may take on multiple roles, but the inclusion of other viewpoints, especially during the threat analysis portion of the process, is essential to identifying a system’s vulnerabilities.

Here are the steps that must be performed to fully understand a system:

  1. Define the assets of the system
  2. Define user entities
  3. Define trust levels and boundaries
  4. Identify input/output points of the system
  5. Develop use case scenarios

A Note on Documentation: It is critical to document the results. For easy reference, you should document each element with the following key pieces of information:

  • ID: This should be a unique identifier that can be referenced in other textual and graphical documentation.
  • Description: This will provide the details regarding the step in question.
  • Step Specific Details: This provides information specific to the step being evaluated. For example, an asset’s entry should include the details of its confidentiality, integrity, and availability considerations.

Note that this documentation contains important information regarding the system and, in the wrong hands, is itself a security vulnerability; therefore, keep both the electronic (soft copy) and printed (hard copy) versions in a safe, restricted location.
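
Any lightweight format works for this documentation. As a minimal sketch, the Python record below captures the ID, description, and step-specific details for one element; the class and field names are illustrative assumptions, not part of any prescribed threat modeling tool.

```python
from dataclasses import dataclass, field

@dataclass
class ModelElement:
    """One documented element of the threat model (asset, user entity, trust level, I/O point)."""
    element_id: str   # unique identifier referenced by other textual and graphical documentation
    description: str  # details regarding the step in question
    details: dict = field(default_factory=dict)  # step-specific details for the element

# Example: documenting an input point from the sample system.
web_browser = ModelElement(
    element_id="1.0",
    description="Web browser used to gain access to the features of the application.",
    details={"direction": "input"},
)
```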

A Threat Modeling Walkthrough
The rest of this article walks you through the threat modeling process for the sample CD Music Library web application, following the five-step process listed in the previous section.

Defining Assets
An asset is an item of value; it is what security efforts are designed to protect. It is usually the destruction or acquisition of assets that drives malicious intent. Obviously, assets have varying values. A collection of credit card numbers is a high-value asset, while a database that contains candy store inventory is probably a lower-value asset. The higher the asset value, the more effort an adversary will expend to gain access to it, and the more resources it takes to protect.

Consider these factors when evaluating an asset:

  • Confidentiality: If the asset were compromised, would confidential information be exposed? An example would be a database that contains client tax identification or credit card information. If an adversary were to gain access to this information, the consequences could be devastating, not only for the holders of the data but also for their clients.
  • Integrity: If the asset were compromised, could the accuracy of its data be undermined? Examples would be the data on which clients are billed or the data on which a company bases strategic decisions.
  • Availability: If the asset were compromised, would the system or its data become unavailable? An example would be a database server that determines which employees are granted access to specific rooms in a building. If that server could not confirm that the appropriate employees were allowed into the manufacturing shop, the resulting downtime could cost a significant amount of income.

Applying the asset-definition process to the sample music CD library system resulted in identification of the assets shown in Table 1.

Table 1. Music CD Library Assets: The table shows the reasoning behind the sample system asset identification on three measures.
ID Description Confidentiality Integrity Availability
1.0 Member Data Data contains identification code as well as contact and address information. Data is used to identify the member within the system as well as grant access to the system. If data were unavailable, the member could not log in, view the music CD library, or use any feature of the system.
2.0 Music CD Inventory Data   Data is used to identify the music CDs that are potentially available for lending. If data were unavailable, members could not request a specific music CD.
3.0 Lending Status Data   Data is used to determine whether a music CD is in the warehouse or in the possession of a member. If data were unavailable, the system could not determine the location of a specific music CD.
4.0 Late Fee/Lost CD Payment History Data contains payment method information such as checking account and/or credit card account numbers. Data is used to determine whether a member has a late/lost fee and whether it has been paid. If data were unavailable, payment could not be made.
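
To make the confidentiality, integrity, and availability evaluation concrete, the sketch below restates part of Table 1 as plain data that a team could filter or report on; the dictionary keys and structure are illustrative assumptions.

```python
# A few of the Table 1 assets restated as data; None marks a factor with no concern noted.
assets = [
    {"id": "1.0", "name": "Member Data",
     "confidentiality": "Identification code, contact, and address information.",
     "integrity": "Identifies the member and grants access to the system.",
     "availability": "Without it, members cannot log in or use any feature."},
    {"id": "2.0", "name": "Music CD Inventory Data",
     "confidentiality": None,
     "integrity": "Identifies the music CDs potentially available for lending.",
     "availability": "Without it, members cannot request a specific music CD."},
    {"id": "4.0", "name": "Late Fee/Lost CD Payment History",
     "confidentiality": "Checking account and/or credit card account numbers.",
     "integrity": "Shows whether a member owes, or has paid, a late/lost fee.",
     "availability": "Without it, payments cannot be made."},
]

# Assets with a confidentiality concern typically warrant the strongest controls.
for asset in assets:
    if asset["confidentiality"]:
        print(asset["id"], asset["name"], "-", asset["confidentiality"])
```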

User Entities
User entities are the entities that legitimately interact with a system. These could be actual end users, such as system administrators, data entry users, and anonymous users. In addition, database connections used to interface with other systems are considered user entities. Table 2 shows the various entities identified for the sample system.

Trust Levels and Boundaries
Trust levels define the access granted to a user entity within a system. For example, a system administrator role may have a trust level that allows it to modify files in a certain directory on a file server, while another user entity may be restricted from modifying those files.

Trust boundaries define the location where the trust level changes for a user entity. An example of a trust boundary might be an incoming data validation subsystem. After incoming data has passed validation, the system can elevate its trust level and store it in the database. Table 2 shows the user entities and trust levels for the sample system.

Table 2. Sample Music CD Library System Trust Levels: The threat modeling process identified these user entities and trust levels.
ID Description Use Trust Level
1.0 System Administrator Accesses the system to perform maintenance activities, setting modifications and issue resolution. Full access to all features and settings of the system.
2.0 Library Manager Manages the library inventory, handles check-in and check-out, and receives payment of late/lost fees. Access to member data, music CD library data, lending status data, and payment history data.
3.0 Member Browses inventory and requests CDs for lending. Also views/maintains personal member data. Read-only access to music CD library data and payment history data. Can modify their own member data.
4.0 Anonymous User Accesses the system to review membership policies and signup to become a member. Can only submit a membership request, view membership policy screen, and access member login screen.
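
The trust levels in Table 2 ultimately have to be enforced in code. The sketch below shows one hypothetical way to do that with an ordered enumeration and a guard decorator; the names `TrustLevel`, `requires`, and `record_late_fee_payment` are assumptions for illustration only.

```python
from enum import IntEnum
from functools import wraps

class TrustLevel(IntEnum):
    ANONYMOUS = 1        # membership policies, sign-up, and login screen only
    MEMBER = 2           # read-only library data plus the member's own data
    LIBRARY_MANAGER = 3  # member, inventory, lending, and payment data
    SYSTEM_ADMIN = 4     # full access to all features and settings

def requires(level: TrustLevel):
    """Reject a call unless the acting user entity holds at least the given trust level."""
    def decorator(func):
        @wraps(func)
        def wrapper(current_user, *args, **kwargs):
            if current_user.trust_level < level:
                raise PermissionError(f"{func.__name__} requires {level.name}")
            return func(current_user, *args, **kwargs)
        return wrapper
    return decorator

@requires(TrustLevel.LIBRARY_MANAGER)
def record_late_fee_payment(current_user, member_id: str, amount: float) -> None:
    ...  # update payment history data
```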

Input/Output Points
Input points are points where user entities and data enter a system. Output points are points where user entities and data exit a system. While you may not need to define trust boundaries for all input/output points, defining the points during the threat modeling process is beneficial because doing so establishes the scope of the system. Any activity that occurs beyond these points can be addressed by a separate threat modeling process.

An example of an input point would be a user entity that gains access to a system’s authentication screen through a web browser. The authentication screen is where the system will learn the user entity’s identity and grant the appropriate trust levels to the user entity—which by definition is a trust boundary. Note that the security of the web browser itself is beyond the control of the system.

An example of an output point would be an export process for a database, such as a SQL Server Integration Services package. The export may generate a text file containing client data in a directory on a file server for consumption by another system. Because the text file is located outside the scope of the system being modeled, it is considered an output point. Table 3 lists the input/output points identified for the sample system.

Table 3. Input/Output Points: The threat modeling process identified these input/output points for the sample music CD library system.
ID Description Input / Output
1.0 Web browser used to gain access to the features of the application. Input
2.0 System Administrator tools used to gain access to system features, for example SQL Server Management Studio. Input
3.0 Printed list of music CD requests physically pulled from the warehouse. Output
4.0 Printed receipt given to member when music CD is picked up. Output

Identifying the user entities, trust levels, boundaries, and input/output points clearly defines access to and use of the system’s assets.

Use Case Scenarios
Use case scenarios are often presented as a part of the development process of a system. They depict the context in which the system is to be used and how end users interact with the system. In threat modeling, use cases are valuable for testing vulnerability mitigation as well as identifying possible avenues for system security penetration.

The documentation of the system’s input and output points can certainly aid in the development and authoring of use case scenarios. Table 4 lists some use case scenarios for the sample system.

Table 4. Use Case Scenarios: Use case scenarios aid in testing vulnerability mitigation and help identify possible avenues for system security penetration.
ID Description
1.0 An anonymous user accesses the system through a web browser. The user does not have a member ID. After reviewing the membership policies, the user accesses the member sign-up screen, populating all required fields and submitting the information. At that point, the user exits the system to obtain a member ID sent by the system to the user-specified e-mail address.
2.0 An anonymous user accesses the system through a web browser. The user, having a member ID, accesses the login screen and enters the member ID and password. The login is successful. The user navigates to the music CD library, enters a partial band or artist name in the search textbox, and clicks the Submit button. The system finds CDs that contain the entered value in the band or artist’s name, and presents those to the user.
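
Because use case scenarios also drive testing, scenario 2.0 can be expressed as an automated check. The sketch below uses a tiny in-memory stand-in for the CD library; the sample data and the `search_library` helper are hypothetical, not part of the sample system.

```python
# A minimal stand-in for the music CD library, just enough to express use case 2.0 as a test.
CDS = [
    {"artist": "The Beatles", "title": "Abbey Road"},
    {"artist": "Miles Davis", "title": "Kind of Blue"},
]

def search_library(text: str):
    """Return CDs whose band/artist name contains the entered value (case-insensitive)."""
    return [cd for cd in CDS if text.lower() in cd["artist"].lower()]

def test_member_searches_library():
    """Use case 2.0: after a successful login, a partial artist name returns only matching CDs."""
    results = search_library("beatl")
    assert results, "expected at least one matching CD"
    assert all("beatl" in cd["artist"].lower() for cd in results)
```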

Creating and Using Data Flow Diagrams
Data flow diagrams are just one of many options that help people grasp how data flows through a system. Taking the time to prepare these diagrams saves time in the long run, because they quickly communicate all the aspects previously discussed. To prepare a data flow diagram, you approach the system hierarchically. A “context diagram” expresses the entire data flow of a system at the highest level; to explore specific processes of a system, you use “lower level” diagrams.

 
Figure 1. Context Diagram for Sample Music CD Library System: Included in this diagram are the trust boundaries of the system indicated with dotted lines.

Figure 1 shows a context diagram for the sample system of a music CD library. The diagram identifies each user entity along with the trust boundaries over which they communicate with the system. This context diagram represents the system at its highest level; the double-lined circle indicates that there are multiple processes involved.

Know Your Adversary
War is not something usually considered a desirable state, but it is the reality when it comes to protecting a system’s valuable assets. Adversaries of all kinds want to either steal those assets or render them useless, and an effective defense requires understanding them. Adversaries fall into three types:

  • Informed Adversary: This is an adversary that either is given or has obtained information that gives them an advantage when approaching the system. An example of this information would be a login/password or a data diagram that describes the relationships of the tables that make up the database.
  • Uninformed Adversary: This is an adversary who possesses no information regarding the system and attempts to compromise it through random attacks. An example would be an adversary who employs a network sniffer to capture traffic that may reveal information to aid the attack.
  • Accidental Adversary: This is an adversary who exploits a vulnerability of a system without malicious intent. Often this will be a valid user entity who runs across a flaw in the system or interfaces with the system in an unexpected way. An example would be a valid user who, while entering data, accidentally updates the wrong record, compromising the integrity of the data.

Viewing the system from the point of view of these types of adversaries greatly aids in the threat modeling effort.

Threat Trees
Threat trees are a hierarchical method for exploring system threats and vulnerabilities. Their organization aids in the process of understanding how an attack may be executed and how any resulting identified vulnerabilities may be mitigated. You can document threat trees using either graphical flow charts or simple outlines.

Reviewing the asset listing aids in the development of threat trees.

The process of developing threat trees begins with the identification of the root threats to the system, then identifying sub-threats, which detail exactly how an adversary may execute the root threat. The level below the sub-threat is known as the atomic threat. An atomic threat is the specific step that facilitates the successful execution of the sub-threat. Finally, the lowest level in a threat tree is the threat vector. This level identifies the vulnerabilities that allow the previous levels to be executed.

Here’s a very limited threat tree that addresses the member data asset for the sample music CD library system. The root threat is that an attacker might be able to view confidential member data. The sub-threats, atomic threats, and threat vectors identify how an attacker might gain such access:

Threat Tree Example

1. View confidential member data
   1.1 Valid member login/password is spoofed.
      1.1.1 Capture valid user keystrokes.
         1.1.1.1 Anti-spyware does not exist on the server.
      1.1.2 Member login/password easily determined.
         1.1.2.1 Strong passwords are not enforced.
         1.1.2.2 System allows unlimited password attempts.
         1.1.2.3 Forgotten password provides a generic temporary password.
      1.1.3 Member login/password is shared with another person.
         1.1.3.1 System does not force a new password after a period of time.

By studying this threat tree, you can see how the various levels work together to define threats.
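
The same tree also works well as plain data, which makes it easy to enumerate threat vectors or to attach STRIDE and DREAD information later. The nested-tuple layout below is just one convenient representation, not a prescribed format.

```python
# Each node is (id, description, children). Leaf nodes are threat vectors.
THREAT_TREE = (
    "1", "View confidential member data", [
        ("1.1", "Valid member login/password is spoofed", [
            ("1.1.1", "Capture valid user keystrokes", [
                ("1.1.1.1", "Anti-spyware does not exist on the server", []),
            ]),
            ("1.1.2", "Member login/password easily determined", [
                ("1.1.2.1", "Strong passwords are not enforced", []),
                ("1.1.2.2", "System allows unlimited password attempts", []),
                ("1.1.2.3", "Forgotten password provides a generic temporary password", []),
            ]),
            ("1.1.3", "Member login/password is shared with another person", [
                ("1.1.3.1", "System does not force a new password after a period of time", []),
            ]),
        ]),
    ],
)

def threat_vectors(node):
    """Yield the leaf-level threat vectors of a threat tree."""
    node_id, description, children = node
    if not children:
        yield node_id, description
    for child in children:
        yield from threat_vectors(child)

for vector_id, description in threat_vectors(THREAT_TREE):
    print(vector_id, description)
```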

Identify Vulnerabilities Using STRIDE
STRIDE is an acronym developed by the Microsoft Application Consulting and Engineering Team to categorize the methods by which an adversary may attack a system. STRIDE stands for:

  • Spoofing Identity: In this attack, adversaries falsely represent themselves as valid user entities. For example, having obtained the login of a system administrator, the attacker gains access to system data, giving them free rein to execute further attacks.
  • Tampering with Data: Using this method of attack, an adversary successfully modifies or deletes data within the system. An example would be when an adversary gains access to the system database and deletes all the client records.
  • Repudiation: This method identifies whether or not an adversary can attack a system without detection or evidence that the attack occurred. An example would be an adversary who performs a “tampering with data” attack without leaving any trail indicating that the data had been compromised.
  • Information Disclosure: In this attack method, an adversary gains access to data not within their trust level. Such data may include system information that may facilitate further attacks.
  • Denial of Service: Using this method of attack, an adversary causes a system to be unavailable for valid user entities. An example would be an adversary who executes a shutdown command to a file server.
  • Elevation of Privilege: This type of attack increases the adversary’s system trust level, permitting additional attacks. An example would be an adversary who enters a system as an anonymous user entity but is able to obtain the trust level of a system administrator.

Understanding these methods guides the process of threat analysis and helps identify potential system vulnerabilities. Table 5 shows the STRIDE assessment for the three root threats identified for the member data asset.

Table 5. STRIDE Assessment: The table shows the STRIDE Assessment for three root threats to the sample system.
ID Description S T R I D E
1.0 View confidential member data. X - X X - -
2.0 Manipulate member data. - X - - X X
3.0 Render member data unavailable. - - - - X -
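
One lightweight way to keep a STRIDE assessment next to the threat list is a bit flag per category. The sketch below simply restates Table 5 as data; the `Stride` names and the dictionary layout are illustrative assumptions, not a prescribed format.

```python
from enum import Flag, auto

class Stride(Flag):
    SPOOFING = auto()
    TAMPERING = auto()
    REPUDIATION = auto()
    INFORMATION_DISCLOSURE = auto()
    DENIAL_OF_SERVICE = auto()
    ELEVATION_OF_PRIVILEGE = auto()

# Table 5 restated as data.
STRIDE_ASSESSMENT = {
    "1.0 View confidential member data":
        Stride.SPOOFING | Stride.REPUDIATION | Stride.INFORMATION_DISCLOSURE,
    "2.0 Manipulate member data":
        Stride.TAMPERING | Stride.DENIAL_OF_SERVICE | Stride.ELEVATION_OF_PRIVILEGE,
    "3.0 Render member data unavailable":
        Stride.DENIAL_OF_SERVICE,
}

# Example query: every root threat with a denial-of-service component.
dos_threats = [threat for threat, categories in STRIDE_ASSESSMENT.items()
               if categories & Stride.DENIAL_OF_SERVICE]
```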

Rating Threats with DREAD
DREAD, another acronym developed by the Microsoft Application Consulting and Engineering Team, provides a means to rate the threats identified through STRIDE and threat trees. DREAD stands for:

  • Damage Potential: Defines the amount of potential damage that an attack may cause if successfully executed.
  • Reproducibility: Defines the ease with which the attack can be executed and repeated.
  • Exploitability: Defines the skill level and resources required to successfully execute an attack.
  • Affected Users: Defines the number of valid user entities affected if the attack is successfully executed.
  • Discoverability: Defines how quickly and easily an occurrence of an attack can be identified.

You should keep the rating system simple; a scale ranging from one to three is effective. On that scale, for example, a Damage Potential rating of one indicates that a successfully executed attack would cause only minimal damage.

While these ratings are somewhat subjective, they should be based upon the experience of the Threat Modeling Team members as well as careful evaluation of the threat trees that have been assembled. Rating threats allows the team to determine the risk associated with each one as well as the priority of its mitigation.

Table 6 shows the DREAD ratings for the three root threats identified in the sample music CD library system.

Table 6. DREAD Ratings: Using a three-point scale, here are the DREAD ratings for the sample music CD Library system.
ID Description D R E A D Total
1.0 View confidential member data. 1 1 1 1 1 5
2.0 Manipulate member data. 3 1 1 1 2 8
3.0 Render member data unavailable. 2 1 1 3 3 10

As identified in the “Total” column, root threat 3.0 has the highest total DREAD rating; therefore, it should be at the top of the priority list for mitigation.
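
Keeping the DREAD numbers as data makes the priority ordering fall out automatically. The small sketch below uses the one-to-three scale and the Table 6 figures; the class and field names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class DreadRating:
    threat: str
    damage: int           # Damage Potential, 1 (minimal) to 3 (severe)
    reproducibility: int
    exploitability: int
    affected_users: int
    discoverability: int

    @property
    def total(self) -> int:
        return (self.damage + self.reproducibility + self.exploitability
                + self.affected_users + self.discoverability)

ratings = [
    DreadRating("View confidential member data", 1, 1, 1, 1, 1),
    DreadRating("Manipulate member data", 3, 1, 1, 1, 2),
    DreadRating("Render member data unavailable", 2, 1, 1, 3, 3),
]

# Mitigation priority: highest total DREAD rating first.
for rating in sorted(ratings, key=lambda r: r.total, reverse=True):
    print(f"{rating.total:>2}  {rating.threat}")
```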

Determining Risk Appetite
Mitigating all threats and vulnerabilities—even assuming they can be identified—can be very expensive both in monetary and human resources. In some cases, the current system architecture may make mitigation impossible without a complete re-design. It is at this point that the evaluation of the risk appetite of the client comes into the picture.

The considerations for determining client risk appetite are:

  • The value of the asset being evaluated: For example, a million-dollar solution that mitigates a threat to a database containing a list of baseball cards does not make good business sense, while the same mitigation for a database that contains critical information for a patented medication is certainly reasonable.
  • The cost of mitigation compared to the potential loss if an attack is successful: This is similar to the previous consideration, but it approaches the question from a return-on-investment perspective. If the potential loss is substantially less than the cost of mitigation, it may be worth accepting the risk rather than mitigating the threat (a rough calculation sketch follows this list).
  • The likelihood of an attack: If executing a threat requires a high level of expertise and a large investment in materials, it may be worth taking on this risk.
  • DREAD priority ratings: A low DREAD rating indicates that the risk for an attack is fairly low and may be worth accepting.
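
For the return-on-investment consideration above, a rough, back-of-the-envelope comparison of expected annual loss against mitigation cost can inform the decision. The formula and figures below are illustrative assumptions only, not a substitute for a proper risk analysis.

```python
def accept_risk(mitigation_cost: float, potential_loss: float,
                annual_attack_probability: float) -> bool:
    """Rough rule of thumb: accept the risk when the expected annual loss
    is lower than the cost of mitigating the threat."""
    expected_annual_loss = potential_loss * annual_attack_probability
    return expected_annual_loss < mitigation_cost

# Example: a $50,000 mitigation against a $200,000 loss expected roughly once a decade.
print(accept_risk(mitigation_cost=50_000,
                  potential_loss=200_000,
                  annual_attack_probability=0.1))  # True: expected loss is $20,000 per year
```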

Final Documentation
The very act of producing threat trees results in either graphical or textual documentation of that process. The documentation of the STRIDE and DREAD analysis is equally important in communicating the threat model of the system in question. A recommended approach for documenting the STRIDE and DREAD analysis is to pull the root threats from the threat tree and place them on a spreadsheet with columns for each element of STRIDE and DREAD (much like Table 5 and Table 6), identifying each element as it applies to the root threat.

A summary document (see Table 7) is also recommended to encompass all the threat analysis results. Elements captured in this summary document should include:

  • ID: This should be a unique identifier that can be referenced in other textual and graphical documentation.
  • Name: This is the root threat from the threat tree that describes the item being evaluated.
  • STRIDE Elements: This lists the full description of the STRIDE elements that apply to the item being evaluated.
  • DREAD Rating: This lists the DREAD ratings for the item being evaluated.
  • Threat Tree: This provides either the graphical or textual representation of the item being evaluated. The documentation often begins at the sub-threat level, since the item being evaluated is the root threat.
  • Mitigation: This provides either the action(s) taken or the recommended action(s) to be taken to eliminate the threat.
  • Risk Appetite: If the threat is to be left unmitigated, documenting the risk appetite evaluation is valuable.

Table 7. Summary Document: For the sample music CD library system, here’s one possible summary document format:
ID 1.0
Name View Confidential Member Data
STRIDE Spoofing Identity, Repudiation, Information Disclosure
DREAD Rating Damage Potential: 1 of 3
Reproducibility: 1 of 3
Exploitability: 1 of 3
Affected Users: 1 of 3
Discoverability: 1 of 3
Threat Tree Insert threat tree here, or make document reference.
Mitigation Insert mitigation details, or make document reference.
Risk Appetite Insert risk appetite details, or make document reference.

After following the threat modeling process described in this article, you’ll have completed a formal review that identifies and evaluates system vulnerabilities. Knowing how an adversary might attempt to attack a system is critical to building a strong defense. Identifying vulnerabilities during the development stage of the system is always the most opportune approach, but you can perform the threat modeling process at any time during the system’s lifecycle.
