Monday, December 18, 2017
Threat Modeling
What is a Threat?
In information security, a threat is any actor or event that can cause potential harm to an
information system asset.
Overview of Threat Modeling
Developing a threat model is the process of mapping the specific, unique threats to your
organization and the methods used to attack any information technology asset or collection of assets.
The two primary goals of threat modeling are:
- Provide a clear perspective of assets, threats, and possible attacks to facilitate discussions regarding risk management decisions and practices
- Discover and evaluate gaps in security controls at the application, system, infrastructure, and enterprise levels
The concept of conducting threat modeling exercises has been around for as long as distributed
information systems have been used to process data. Since the idea's inception, various methodologies have emerged that each solve a specific problem but may not scale to an enterprise level, are not applicable outside of the Software Development Lifecycle (SDLC), or are not repeatable.
An effective threat modeling process addresses these issues and can be applied to both
information technology operations and software development. The Threat Model reflects the fact
that different technology teams face different threats. Our model can be tailored to individual
stakeholders throughout an organization to reflect their areas of responsibility. This capability allows for
the entire organization to work in concert to evaluate the threats to the enterprise and develop strategies
to address those risks.
Asset Analysis
Threat models must begin with the identification of the most critical assets. This is known as the
Crown Jewel Analysis. Your organization's mission is dependent on the confidentiality, integrity,
and availability of these assets. These assets must be protected and have their risk exposure limited.
By understanding what is critical to your organization, we can identify the dependencies and the threats
you face.
Assets include two major elements:
1. Business Assets, which are data, components, or functionality that are essential for the business
mission of the system.
2. Security Assets, which are data, components, or functionality that are of special interest to an attacker.
The two may not always be the same.
Define the Attack Surface
The next step is to create a comprehensive map of the components of the application, system, or
environment that contain, communicate with, or otherwise provide some form of access to the assets.
The communication flows between the assets and the components are integral to determining the
attack surface. The attack surface will help define the boundaries, scope, roles and responsibilities
in the threat model.
Information including devices, interfaces, libraries, protocols, functions, and APIs is collected and
used to complete the picture of the attack surface. Existing security controls and services are captured
so that their effectiveness can be evaluated.
Mapping Threats and Attacks
Threat mapping begins with determining the sources of attack and their motivation. Disgruntled
employees, state actors, and random script kiddies are all examples of potential threats to your system.
Each threat actor can have different skillsets, resources, and objectives and must be accounted for when
developing the model.
Documentation of the attack surface provides the source material of the next phase: mapping the paths
of attack. Through our understanding of the system components and functionality, we are able to
envision attacker tools and techniques applied to abuse the system. The attack surface depicts the
pathways of an attacker and allows visualization of multiple attack methods.
Threat and attack mapping is a sophisticated skill. It requires an understanding of an attacker’s
mindset and deep knowledge of attack methodologies.
Threat Analysis
After completing discovery of the system and detailing threat actors comes the analysis phase, in
which the risk of each attack vector is quantified in a manner that allows stakeholders to understand
the potential for real damage to your organization.
The results of the analysis phase allow your organization to make decisions that maximize the
effectiveness of the security devices (such as firewalls, intrusion detection systems, and spam filters)
and procedures that mitigate threats and attacks. The DREAD Method is a simple, extensible model
that allows for comparing and ranking risks in an easy-to-understand manner.
Damage - How bad would an attack be?
Reproducibility - How easy is it to reproduce the attack?
Exploitability - How much work is it to launch the attack?
Affected Users - How many people will be impacted?
Discoverability - How easy is it to discover the threat?
Each category is assigned a value between 0 and 10, 0 reflecting no risk/damage, while 10 is
maximum risk/damage. The DREAD formula is:
Risk = (D + R + E + A + D) / 5
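For example (the ratings here are purely illustrative), a threat scored as Damage 8, Reproducibility 6, Exploitability 7, Affected Users 9, and Discoverability 5 would yield:
Risk = (8 + 6 + 7 + 9 + 5) / 5 = 7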
The values that are derived by DREAD allow your organization to focus its energy on the
most vulnerable portion of your information systems and prioritize your efforts on implementing
controls to reduce risk.
More generally, risk can be expressed as:
Risk = Probability x Impact
Effective Defense
The goal of threat modeling is to select the proper controls to address identified threats.
System and software designers often choose security controls from a well-known best practice list,
such as antivirus software, firewalls, input validation, etc. However, the implementation of controls
without a threat model can lead to security holes since not all threats have been directly addressed.
Even the best practice controls, if configured generically, do not address the threats unique to each
organization's environment.
Without appropriate threat modeling, security controls and procedures can be ineffective because they
do not address the unique threats facing the organization. This approach to threat modeling uncovers
any technological, process, or organizational gaps in security controls and allows for enhanced risk
management practices that align to the mission of your organization.
Sunday, November 26, 2017
Effective Tabletop Exercise Design
I used to hold the position that when an organization decides to execute a tabletop exercise, it is because they are mature to the point of having an incident response plan (IRP). The organization utilizes the tabletop to test the execution of the IRP and identify any gaps. I mostly thought the exercise was to affirm that the controls and processes in place were adequate.
I was mistaken.
Often the exercise is meant to shine a spotlight on how unprepared a company is if an incident were to occur. A tabletop exercise can demonstrate to management that in the event of a breach they don't know who is responsible for what, what the proper response should be, when to involve legal counsel, when to involve law enforcement, and so on.
Tabletop exercises are an effective tool for measuring an organization's capability to respond to an incident at all maturity levels.
This point highlights the importance of having a clearly defined objective before heading into a tabletop exercise. Objectives should align with past, current, and future capabilities. The objectives enable tabletop planners to design a scenario that is inclusive of all stakeholders and tests the cohesiveness of support systems in response to potential cyber attacks.
The next step should be to assemble the team who will design the exercise. This could be an outsourced resource who can objectively review policies and procedures and design an exercise based on provided documentation, or it could be internal resources familiar with the in-house processes. The objectives should drive the definition of the personnel required to design scenarios. The designers should not be a part of the exercise itself, and they should be unencumbered by pressure (political or otherwise) over the results of the exercise. The design team should be allocated anywhere between 1 and 3 months to adequately plan the exercise.
The design team will flesh out all the participants in the exercise. It is key for a tabletop team to understand the audience of the exercise and to tailor its language to be inclusive. The exercise must also speak to the environment of the organization: the specific tools in use, the audience's understanding of security concepts, and native acronyms are among the things that may need to be accounted for when designing the exercise. The participants should all be able to understand and track the scenario as it is executed during the exercise.
In order to build an effective tabletop exercise, the design team must account for these common issues with tabletop execution:
- Cyber scenario objectives are not clearly defined.
- Rules of engagement are not clearly defined.
- Awareness is reduced because senior leaders are not involved in planning.
- Cyber injects are not executed as planned.
- The training audience fights against the scenario.
- Logistical/technical issues arise during execution.
Now that your team is built, the objectives defined, and the audience identified, the designers can begin building the scenario. Designers must build realistic events that could occur and that impact the organization, partners, and stakeholders. The scenario will be closely tied to the objectives of the exercise and is composed of scripted events meant to facilitate discussions between the different groups involved in the exercise. The scripts should allow for improvisation, so simple modifications and injects (new data for participants to consider) should be available.
Now all that's left is the execution and the after-action report. It is necessary to have someone whose only function is to take copious notes to build the report. Candor is a necessity in any tabletop exercise: it is meant to identify gaps in the organization and help develop plans to rectify any findings. It must be communicated that the outcome of the exercise is meant to make all parties involved better. The event is not meant to shame but to uplift by informing and building.
Saturday, February 4, 2017
Defending the top 5 ways attackers own your network
Recently the cybersecurity firm Praetorian published a report detailing the top 5 attack methods the firm used across 100 separate penetration test engagements. The report focuses on the tools and techniques attackers use once they already have a foothold within the network, on the premise that it is nearly impossible to ensure that no employee will ever click a bad link emailed to them that runs code or steals their credentials. The report shows the most common ways the firm went from getting a user to click a bad link to owning the entire network.
The top 5 methods in order were:
- Weak Domain User Passwords
- Broadcast Name Resolution Poisoning (aka WPAD)
- Local Administrator Attacks (aka Pass the Hash)
- Cleartext Passwords Stored in Memory (aka Mimikatz)
- Insufficient Network Access Controls
So the bad news is that these security holes have been around for years. The good news is that these security holes have been around for years. This is good news because the ways to mitigate these issues are readily available and do not require the acquisition of some additional software or security appliance.
Weak Domain User Passwords
A lot of organizations feel safe because they have followed what has become commonplace thinking about what a secure password is: "Well, my domain is set to require 8 characters, 1 special character, 1 capital letter, and 1 number as a password." Sadly, this does not make for a secure, complex password. In order to satisfy these requirements, users will commonly choose passwords like P@ssword123 and Winter!2016. I don't believe anyone would consider either of these a secure password. Organizations need to move away from passwords toward passphrases. Where Winter!2016 is considered weak, !LoveTheWinter!0f2016 is orders of magnitude more difficult to crack simply because of the increased number of characters used. Passphrases are easy to remember and provide enough security to thwart most cracking attempts.
Additional guidance is that, when possible, two-factor authentication should be implemented, especially for administrative and remote access. Even an organization with less strict password rules sees a dramatic net positive impact when passwords are complemented with a second factor of authentication.
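As a rough illustration of why length matters (assuming an attacker brute-forcing roughly 95 printable characters per position, and ignoring dictionary shortcuts), the search spaces compare as follows:
95^11 ≈ 5.7 x 10^21 combinations for an 11-character password such as Winter!2016
95^21 ≈ 3.4 x 10^41 combinations for a 21-character passphrase such as !LoveTheWinter!0f2016
That is roughly twenty orders of magnitude more work for the attacker before any cracking shortcuts are even considered.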
Broadcast Name Resolution Poisoning
Broadcast Name Resolution Poisoning attacks leverage the way systems attempt to find other systems on the network in order to steal credentials. If a system looks for a host that is defined in neither the local hosts file nor DNS, it turns to NetBIOS/LLMNR for answers. NetBIOS/LLMNR broadcasts traffic across the network to search for the system. Because this is broadcast traffic, all systems see it and all systems can respond. An attacker can leverage this behavior to gather credentials that can either be cracked offline or replayed to other systems to increase network access.
Most organizations have no business need for NetBIOS/LLMNR. The guidance is to disable both and populate the DNS servers with entries for the enterprise systems. Web Proxy Auto-Discovery (WPAD) operates similarly to NetBIOS/LLMNR and should also be disabled within web browsers; alternatively, an organization can choose to forward WPAD traffic to an internal proxy that it controls.
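As one hedged example of the LLMNR portion of that guidance (assuming a Windows domain managed through Group Policy), the "Turn off multicast name resolution" setting under Computer Configuration > Administrative Templates > Network > DNS Client disables LLMNR and corresponds to the registry value:
HKLM\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient
EnableMulticast (REG_DWORD) = 0
NetBIOS over TCP/IP is disabled separately, per network adapter or via DHCP options.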
Pass the Hash
A lot of organizations do not know how to properly manage the local administrator password on the many client systems across a network. Often they use the same username and password on each system for ease of administration. Unfortunately, if an attacker gains access to one system and is able to compromise the password hash, they then have administrative access to every system that uses that account, without ever needing to crack the hash.
To mitigate exposure to pass-the-hash attacks, organizations should apply a defense-in-depth approach. First, restrict the ability of Domain and Enterprise administrators to log in to workstations; that way the credentials are never on the system to be stolen. Another technique is to remove the ability of workstations to initiate inbound connections to other workstations: in general there should be no reason for client-to-client communications, and only trusted administrative network segments should be allowed to log in remotely. Later versions of Windows also allow you to prevent credentials from being stored in local databases.
Microsoft has also released the Local Administrator Password Solution (LAPS), which generates a random password for each local administrator account. That password is then stored in Active Directory with the computer object. Domain administrators can then grant certain users permission to read the password in order to perform administrative functions.
Cleartext passwords stored in memory
Mimikatz is a popular attack tool used to steal cleartext passwords from the LSASS process in Windows. If an attacker is able to obtain administrative or SYSTEM-level privileges, usernames and passwords can be pulled directly from memory.
Later versions of Windows have resolved this issue by default, but older versions must be patched with KB 2871997 and have the registry value HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest\UseLogonCredential (REG_DWORD) set to 0. This should be considered a high-value registry key, so it should be monitored to make sure it hasn't been changed.
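Written out for reference, the value described above is:
HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest
UseLogonCredential (REG_DWORD) = 0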
Insufficient Network Access Controls
This attack vector was touched on in the pass-the-hash mitigation strategy. Attackers often have free rein on a network once they get a foothold. They are able to touch other client systems as well as all of the critical systems due to a lack of network access controls that segregate systems. The network should be restricted so that systems can only talk to each other when there is a business need to do so.
Organizations often grasp the concept of having a DMZ and segmenting their network into trusted zones with regard to untrusted traffic coming in from the outside. The same logic should be applied internally. Client systems should be barely trusted: while administrators do have some control of the system, the end user may have engaged in bad behavior that led to a compromise that has yet to be detected. The defender must think in terms of limiting the damage that system can do as much as possible while not significantly impeding the end user from completing daily tasks.
To accomplish this, network administrators must work with business units to identify critical systems and understand which personnel should and should not have access to them.
Wednesday, January 11, 2017
A world class SOC for any organization.
Carson Zimmerman has worked out what has become close to gospel in my approach to the design and implementation of an effective Security Operations Center. He outlines 10 strategies organizations should follow to implement a SOC that demonstrates maximum effectiveness and reflects the characteristics of a world-class security operations center. The 10 strategies are:
Strategy 1: Consolidate Computer Network Defense Under One Organization
Strategy 2: Achieve Balance Between Size and Agility
Strategy 3: Give the Security Operations Center the Authority to Do Its Job
Strategy 4: Do a Few Things Well
Strategy 5: Favor Staff Quality over Quantity
Strategy 6: Maximize the Value of Technology Purchases
Strategy 7: Exercise Discrimination in the Data You Gather
Strategy 8: Protect the Security Operations Center Mission
Strategy 9: Be a Sophisticated Consumer and Producer of Cyber Threat Intelligence
Strategy 10: Stop. Think. Respond . . . Calmly
If there is any theme to my aphorisms, it's that organizational goals should be plainly defined and understood before any technology purchases or the hiring of staff and contractors. Zimmerman outlines 5 strategies before mentioning the technologies used in a SOC. He makes it clear that the organization must first buy in and define a clear mission in order to be successful. It also highlights that a SOC operating at a high level does not need to be reserved exclusively for large organizations with matching budgets.
Strategy 1: Consolidate Computer Network Defense Under One Organization
Strategy 1 is in the appropriate place because quite often the struggles of a SOC occur because its capabilities are fractured across the organization. Engineering occurs in a group that is not specific to the SOC. Tier 2 is often commingled with an operations Tier 2 that has trouble escalating incidents to the correct personnel; for example, firewall tickets end up going to the SOC when they should go to network operations. In a perfect world the SOC has its own triage, Tier 2, engineering, administration, and analysis teams. In typical deployments I suggest that shared triage infrastructure be maintained and personnel trained on which incidents belong in which buckets. However, it is imperative that the remaining functions be reserved exclusively for SOC-specific activities. Internal processes, availability of resources, and effective support of the SOC mission are measurably improved by independence from other organizational activities.
Strategy 2: Achieve Balance Between Size and Agility
Strategy 2 is meant to address the need for the SOC to be structured in a manner that best fits the organizational need. It needs to be sized appropriately so that it is able to effectively meet the demands that may be placed on it while having the ability to dynamically adjust to fluctuations in the environment and/or scenario. In this strategy, Zimmerman provides guidelines for the appropriate sizing and structure of a SOC depending on the organization size and footprint.
Strategy 3: Give the Security Operations Center the Authority to Do Its Job
Strategy 3 should've been number 1 to me. Once the idea of a SOC has been signed off on, the first thing that should occur is the drafting of a charter and mission statement. Management should sign off on documents granting the SOC the appropriate authority to monitor systems, detect and resolve incidents throughout the enterprise, and execute its mission. The organizational structure should provide appropriate autonomy from other, potentially competing interests. Zimmerman provides suggestions for where the SOC functions should reside in an org chart. The recommendation I often make is that the SOC should be tightly coupled with the Network Operations Center (NOC) while remaining independent. This has the benefit of supporting each other's activities while fostering a collaborative relationship.
Strategy 4: Do a Few Things Well
When writing the charter and mission statements, special care must be taken in defining the SOC capabilities. This will go a long way in managing expectations and budget. It will also provide focus in making sure you are able to deliver services effectively. These capabilities can be compliance driven or shaped by the need to provide specific functions (vulnerability scans, application assessments, tool implementation, etc.). While deciding on your capabilities, it is equally important to define metrics that can effectively show how well you are providing those services. It is also useful to keep an eye on which future services could complement the current offerings as the SOC matures.
Strategy 5: Favor Staff Quality over Quantity
This strategy is all about the acquisition and retention of information security talent. There is much to be said about the talent gap in security: the need for information security professionals is far outpacing the number of individuals with the skill set to fill that need. Zimmerman states that your focus in staffing the SOC should be on acquiring staff who are able to think creatively and outside of the box to solve problems. These individuals may be worth several people, in that they find automated ways to sift through data to identify problems and solutions where others would toil far longer to arrive at the same answer. I tend to recommend that organizations look inside their own ranks first to fill the talent gap. Information security is a hot field that people from all types of backgrounds are looking to get into; the problem is that it is often difficult to break into, and people do not know the steps they need to take to get into the arena. Organizations need to start enabling their people and searching for folks outside of the normal realms to find talent. Often all that is necessary to make a good security professional is enthusiasm. Feed that enthusiasm with opportunity. Having this mindset has the dual effect of leveraging the talent you have and nurturing it to grow into something bigger and better. It provides a positive work environment and bolsters retention.
Strategy 6: Maximize the Value of Technology Purchases
We've gone through people, processes, and goals before addressing the technology. This is very intentional. It helps you establish all the things you want out of your SOC before even thinking about the technology that can deliver these objectives. Many organizations make the mistake of shaping their organizational goals and processes around technology when it should be the polar opposite. By setting your goals around the needs of the constituents you are able to develop requirements that are unique to your organization. From there you are able to evaluate vendors against the specific needs you have defined. These needs may end up being filled with simple solutions or multiple complex technologies that have to interact with each other to secure your systems.
One thing that I harp on is that you cannot have a security program or solution at any level of maturity without a full mapping of your systems and their expected behaviors. At the onset of Strategy 6, Zimmerman notes that before you can operationalize your SOC, it must be provided with a database of system assets and a method to keep it updated. This database will heavily impact your technology choices and how and where they should be deployed.
Strategy 7: Exercise Discrimination in the Data You Gather
Strategy 7 goes hand in hand with Strategy 4. If you know what you do well then you should know what information you need to do it well. You can't do anything well if you are inundated with extraneous information. The SOC needs to be discriminate in the information it receives and processes. This allows for the freeing of resources to effectively perform its duties.
Zimmerman then goes on to provide the reader with best practices of where to obtain data and how to centralize your log processing out of band.
Strategy 8: Protect the Security Operations Center Mission
This strategy folds into Strategies 1 and 3. Whoever is running the SOC must be cognizant of its mission and prevent it from being pulled into conducting tasks that do not support that mission. Organizations tend to tap the SOC for operational or development needs due to the tendency of SOC personnel having skill sets that complement these arenas. Zimmerman's description of this strategy is that the SOC must have its own infrastructure, people, processes, and budget.
Strategy 9: Be a Sophisticated Consumer and Producer of Cyber Threat Intelligence
Cyber Threat Intelligence feeds are a controversial topic in information security. Threat Intelligence is the process of examining an Advanced Persistent Threat (APT) actor's activities and creating a fingerprint to help identify them. The criticism of Threat Intelligence feeds is that an APT's methods are typically unique to the organization it is trying to exploit, so the fingerprint associated with one attack may not match a separate attack. As a result, the effectiveness of threat intelligence feeds has been called into question.
I believe that a mature SOC is able to leverage these feeds in a way that helps it identify a similar attack more quickly than it would without the information provided in a feed. Cyber Threat Intelligence feeds are not meant to be a magic bullet that lets organizations in similar industries quickly detect a persistent threat actor. They are meant to share information with the community so everyone is better able to detect and thwart concerted efforts to undermine their security posture.
Zimmerman goes through the steps of how an organization can establish a Threat Intelligence operation internal to the organization. With an internal threat team, analysts will not have to ingest and process external feed contents to determine whether they are relevant to their environment. The threat data is an analysis of external and internal data used to develop signatures that automatically detect an ongoing attack. Cyber Threat Intelligence teams conduct malware analysis, digital forensics, and in-depth network traffic analysis to observe how the organization has been compromised. This information is then fed back to the SOC team so they are able to detect similar threats in the future.
This is not for the faint of heart and is reserved for the most mature of information security programs.
Strategy 10: Stop. Think. Respond . . . Calmly
If you have taken care in previous strategies, the tenth should come relatively naturally. In the event of an incident, personnel should have been trained to follow their playbooks to detect, respond, and remediate. The correct communications have occurred, the correct actions to quarantine have taken place, correct methods to preserve evidence have been followed, and any after action lessons learned have been recorded. The best way to achieve the calm, measured response throughout the organization in the event of a compromise is to practice during seemingly tranquil times. This will indoctrinate the staff on what they should be doing when and also hammer out any inadequacies in the current processes.
If any organization is looking to build out a SOC, I would recommend they reference Zimmerman's MITRE paper as a place to begin. It gives all the details one would need for a successful SOC, covering things like logistics, technology deployment strategies, staffing, and creating a mission to build from.
Zimmerman, Carson. 2014. Ten Strategies of a World-Class Cybersecurity Operations Center
https://www.mitre.org/publications/all/ten-strategies-of-a-world-class-cybersecurity-operations-center
Sunday, November 6, 2016
A primer on HTTP Security Headers
Implementation of HTTP security headers can be a complex and confusing topic. Several headers are counter-intuitive and, if set carelessly, can weaken your security posture and have unintended consequences. Below is a list of HTTP security headers, a brief explanation of their function, and their security options.
Content Security Policy
Content Security Policy (CSP) is a control that allows browsers to load only the content explicitly specified by the operator of the website. The control was introduced as a means to mitigate cross-site scripting (XSS) and other attacks that rely on code injected by an attacker. Most browsers either fully or partially support CSP. CSP can be implemented in either block or report mode; report mode is a good setting for testing, to ensure that the implementation functions as expected across browsers. Earlier implementations of CSP were the X-Content-Security-Policy and X-Webkit-CSP headers. These should no longer be used, as they have been deprecated and have had numerous issues.
There are several configuration options, known as directives, for CSP. These options can be configured via a setting in your choice of web server software (nginx, Apache, IIS) or via meta tags in your HTML code. The following is a list of available directives (an example policy follows the list):
- default-src : "special directive that source directives will fall back to if they aren’t configured." In general this setting should always be set to insure that the default behavior of allowing all resources isn't allowed. A default-src gotcha is that not all directives inherit from default-src. Base-uri, form-action, frame-ancestors, plugin-types, report-uri, and sandbox all need to be explicitly set or will use the browser's default setting. A default-src setting of "self" is generally safe but a more locked down approach would be to set it to "none" and then explicitly state resources you want allowed.
- script-src : sets which scripts the site will execute. This setting has two additional options, unsafe-inline and unsafe-eval, which are labelled as such because they are potentially dangerous and could expose the site to XSS attempts. The unsafe-inline option allows inline <script> elements, javascript: URLs, inline event handlers, and inline <style> elements. The unsafe-eval option allows the site to use dynamic code evaluation such as eval().
- object-src : Define from where the site can load plugins. It specifies valid sources for the <object>, <embed>, and <applet> elements. In instances where the site tries to fetch a resource outside of the bounds of what's defined in object-src, an empty HTTP 400 response code is returned.
- style-src : Define where CSS styles can be loaded from. This includes both externally-loaded stylesheets and inline use of the <style> element and HTML style attribute. It is important to note that style-src has no impact on Extensible Stylesheet Language Transformations (XSLT). XSLT is covered by script-src directive.
- img-src : Define from where the site can load images
- media-src : Define from where the site can load video and/or audio
- frame-src : Define from where the site can embed elements like <frame> and <iframe>
- font-src : Define from where the site can load fonts
- connect-src : Define from which sources the site can load content using Fetch (provides an interface for fetching resources locally and across the network), XMLHttpRequest (provides client functionality for transferring data between a client and a server), WebSocket (for creating and managing a WebSocket connection to a server), and EventSource (used to receive server-sent events) connections
- form-action : Define valid sources that can be used as the action of HTML form elements
- sandbox : Applies restrictions to site actions, including preventing popups, preventing the execution of plugins and scripts, and enforcing a same-origin policy
- script-nonce : Define script execution by requiring the presence of the specified nonce on script elements
- plugin-types : Define the set of plugins that can be invoked by the protected resource by limiting the types of resources that can be embedded
- reflected-xss : Instructs a browser to activate or deactivate any heuristics used to filter or block reflected cross-site scripting attacks, equivalent to the effects of the non-standard X-XSS-Protection header
- report-uri: instructs the browser to report attempts to violate the Content Security Policy
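As a hedged example of how these directives combine (the domain cdn.example.com is illustrative), a fairly locked-down policy for a site that serves its own scripts and styles and loads images from a CDN might be sent as:
Content-Security-Policy: default-src 'none'; script-src 'self'; style-src 'self'; img-src 'self' https://cdn.example.com; connect-src 'self'; form-action 'self'; report-uri /csp-violation-report
During testing, the same policy can be delivered as Content-Security-Policy-Report-Only so violations are reported without blocking content.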
X-XSS-Protection
This header is a feature supported by the Internet Explorer and Chrome browsers. If malicious input is detected, the browser will either remove the script or stop the page from being rendered at all, depending on how the header is set. Below are the valid protection settings offered by the header:
- 0 - Disables the XSS Protections offered by the browser
- 1 - Enables the XSS Protections
- 1; mode=block - Enables XSS protections and instructs the browser to block the response in the event that script has been inserted from user input, instead of sanitizing.
- 1; report=http://site.com/report - A Chrome and WebKit only directive that tells the browser to report potential XSS attacks to a single URL. Data will be POST'd to the report URL in JSON format.
By default, X-XSS-Protection is set to 1. Counter-intuitively, this may be the least secure option: a setting of 1 can actually introduce new vulnerabilities into your website and has been bypassed in many documented cases. When implementing this security header, the safer options are 1; mode=block and 0.
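For example, the blocking variant described above would be sent as:
X-XSS-Protection: 1; mode=block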
HTTP Strict Transport Security (HSTS)
HSTS is a security option that instructs the browser to interact with the website only via the secure transport protocol (HTTPS). It prevents attacks that tell the browser to communicate over clear channels when traffic should be encrypted. HSTS is implemented by sending the Strict-Transport-Security header over an HTTPS connection; the browser will ignore any HSTS options sent via plain HTTP. The server sends the Strict-Transport-Security header with a max-age value that instructs the browser to send all future requests via HTTPS for the assigned time period.
HSTS automatically turns insecure links referencing the web page into secure links and instructs browsers to refuse to create a connection if the SSL certificate has an error that could call the legitimacy of the secure connection into question (certificate expiration, mismatched names, etc.).
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
The above example sets the max age to one year, applies HSTS to the domain and its subdomains (example.com and mail.example.com), and signals that the domain may be included in browsers' HSTS preload lists so that HTTPS is enforced even before the browser initiates its first connection.
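As a hedged sketch of how the same policy might be configured at the web server (assuming an nginx server block that already terminates TLS), the directive would look something like:
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;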
X-Frame-Options
The X-Frame-Options header indicates whether a browser should be allowed to render a page within a <frame>, <iframe>, or <object>. This header is meant as a mitigation for clickjacking attacks, which trick users into clicking something different from what they expected. X-Frame-Options prevents content from being embedded into other sites. The available values for X-Frame-Options are (an example follows the list):
- DENY: The Page cannot be displayed in a frame
- SAMEORIGIN: The page can only be displayed in a page from the same origin as the page itself. An origin is defined as a combination of URI, hostname, and port number
- ALLOW-FROM: The page can only be displayed in a frame in the specified URL. This option is only supported by Internet Explorer and Firefox as Chrome and Safari provide this functionality via Content Security Policies.
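For example, a site that only ever frames its own pages would send:
X-Frame-Options: SAMEORIGIN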
HTTP Public Key Pinning (HPKP)
Currently supported by Firefox and Chrome, HTTP Public Key Pinning allows hashed public key information to be stored in a browser's cache. There are hundreds of Certificate Authorities (CAs) capable of issuing certificates for websites. While unlikely, it is possible for an attacker to trick a CA into issuing a certificate to a fraudulent entity. With certificate pinning, website administrators are able to instruct browsers to trust only certain keys for the site. Below is an example implementation of HPKP:
- Public-Key-Pins: max-age=2592000
pin-sha256="d6qzRu9zOECb90Uez27xWltNsj0e1Md7GkYYkVoZWmM=";
pin-sha256="LPJNul+wow4m6DsqxbninhsWHlwfp0JecwQzYpOLmCQ=";
includeSubdomains;
report-uri="http://example.com/pkp-report"
Max-age specifies the amount of time in seconds the browser will enforce the public key pins. Public key pinning requires that a primary and a backup key be issued; the backup key is there in case the current key needs to be replaced, and the two keys should be created from two independent Certificate Authorities. Above are two base64-encoded SHA-256 hashes of the public keys to be passed to the browsers. includeSubdomains applies the HPKP policy to any subdomains of the site. report-uri is used to send any validation failures to the specified URI for reporting; reports are sent in addition to terminating the connection on any detected failure.
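As a hedged sketch of how a pin value can be generated (the filename cert.pem is illustrative), one common approach is to hash the certificate's public key and base64-encode the result:
openssl x509 -in cert.pem -pubkey -noout | openssl pkey -pubin -outform der | openssl dgst -sha256 -binary | openssl enc -base64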
HPKP implementation is rare throughout the internet. It's usage has been limited because it can be dangerous to implement. If keys are not managed carefully, it is possible that the website will be rendered unavailable for the duration of the max-age interval.
X-Content-Type-Options
MIME sniffing allows browsers to guess the type of a file presented to them by the server by reading bits of it. This is done when file types are not explicitly defined by the web server. An attacker can abuse this behavior by tricking the browser into misinterpreting a file type and downloading malicious content. The X-Content-Type-Options HTTP header allows you to tell the browser that it should not try to guess the file type and should only use the types that have been declared. This header is currently supported by Chrome, Safari, and Internet Explorer. Below is the implementation:
- X-Content-Type-Options: nosniff
Cross Origin Resource Sharing (CORS)
Cross Origin Resource Sharing (CORS) allows a website to utilize resources hosted on domains other than its own. Websites often load resources like CSS stylesheets, images, and scripts from separate domains. CORS allows for explicit definition of the sites, methods, credentials, and headers allowed across these resources. Implementation of CORS should be as restrictive as possible, allowing only the domains and methods necessary to operate the site. Below is the list of headers implemented by CORS (an example follows the list):
- Access-Control-Allow-Origin: a single url/domain, wildcard (*), or null are the only accepted values. Wildcard allows any domain while null allows no domain.
- Access-Control-Allow-Credentials: allows cookies or other credentials to be passed in cross origin requests when set to true.
- Access-Control-Expose-Headers: indicates which headers are safe to expose. Example values are X-Custom-Header, content-length, etc
- Access-Control-Max-Age: indicates how long the response can be cached
- Access-Control-Allow-Methods: indicates which HTTP methods can be used when making a request, e.g. Access-Control-Allow-Methods: GET, PUT, POST, DELETE, HEAD
- Access-Control-Allow-Headers: indicates which headers can be used when making a request, e.g. Access-Control-Allow-Headers: Origin, Content-Type, Accept
- Access-Control-Request-Method: indicates which method will be used when making a request, e.g. Access-Control-Request-Method: GET, PUT, POST, DELETE
- Access-Control-Request-Headers: indicates which headers will be used when making a request, e.g. Access-Control-Request-Headers: Origin, Content-Type, Accept
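As a hedged example of a restrictive configuration (the origin https://app.example.com is illustrative), an API consumed by a single web front end might respond with:
Access-Control-Allow-Origin: https://app.example.com
Access-Control-Allow-Methods: GET, POST
Access-Control-Allow-Headers: Content-Type
Access-Control-Max-Age: 86400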
Additional Reading
X-XSS Protection
http://blog.innerht.ml/the-misunderstood-x-xss-protection/
Public Key Pinning
https://tools.ietf.org/html/rfc7469#section-2.1.3
Cross Origin Resource Sharing
https://www.w3.org/TR/cors/#access-control-request-method-request-header
X-Content-Type-Options
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options
HSTS
https://www.owasp.org/index.php/HTTP_Strict_Transport_Security_Cheat_Sheet
Content Security Policy
https://content-security-policy.com/