Basic Risk Concepts

- Probability versus Possibility
  - Candidates will be able to describe the difference between probability and possibility.
  - Candidates will be able to classify phrases as statements of probability or possibility.
- Prediction
  - Candidates will be able to identify that risk analyses are not reliable predictions of future events.
- Risk Management “Stack”
  - Candidates will be able to identify and order the elements within the risk management stack.

Terminology

Taxonomy
- Risk
  - Candidates will be able to define risk.
  - Candidates will be able to identify the elements within the FAIR Risk Taxonomy (see the sketch following this list).
- Loss Event Frequency (LEF)
  - Candidates will be able to define LEF.
  - Candidates will be able to describe the difference between LEF and TEF, and identify examples of each.
  - Candidates will be able to identify the factors that drive LEF.
  - Candidates will be able to identify the data type used for LEF.
- Threat Event Frequency (TEF)
  - Candidates will be able to define TEF.
  - Candidates will be able to identify the factors that drive TEF.
  - Candidates will be able to demonstrate an example of malicious TEF.
  - Candidates will be able to demonstrate an example of non-malicious TEF.
  - Candidates will be able to identify the data type used for TEF.
- Contact Frequency
  - Candidates will be able to define contact frequency.
  - Candidates will be able to demonstrate an understanding of an example of contact frequency.
  - Candidates will be able to identify the data type used for contact frequency.
- Random Contact
  - Candidates will be able to define random contact.
  - Candidates will be able to describe examples of random contact.
  - Candidates will be able to identify factors that affect the frequency of random contact.
- Regular Contact
  - Candidates will be able to define regular contact.
  - Candidates will be able to describe examples of regular contact.
- Intentional Contact
  - Candidates will be able to define intentional contact.
  - Candidates will be able to identify an example of intentional contact.
- Probability of Action (PoA)
  - Candidates will be able to define PoA.
  - Candidates will be able to identify the three factors that affect PoA.
  - Candidates will be able to identify the data type used for PoA (%).
- Value
  - Candidates will be able to demonstrate an understanding of an example of how perceived value drives PoA.
  - Candidates will be able to demonstrate an understanding of an example of how changes in perceived value may affect PoA.
- Level of Effort (LoE)
  - Candidates will be able to identify how perceived LoE affects PoA.
  - Candidates will be able to identify how changes in perceived LoE may affect PoA.
- Risk
  - Candidates will be able to describe how perceived risk may affect PoA.
  - Candidates will be able to describe how changes in perceived risk may affect PoA.
- Vulnerability (Vuln)
  - Candidates will be able to define Vuln.
  - Candidates will be able to identify the factors that determine Vuln.
- Threat Capability (TCap)
  - Candidates will be able to define TCap.
  - Candidates will be able to identify the factors that drive TCap.
  - Candidates will be able to describe TCap in the context of a malicious scenario, as well as a human error scenario.
  - Candidates will be able to identify the data type for TCap (%).
- Skills
  - Candidates will be able to describe an example of how threat agent skills can be affected (e.g., by using an obscure technology).
- Resources
  - Candidates will be able to identify the two factors that make up resources.
  - Candidates will be able to describe how affecting time and/or material can affect Vuln.
- Resistance Strength (RS)
  - Candidates will be able to define RS (in a malicious or natural context) and difficulty (in a human error scenario).
  - Candidates will be able to identify the data type for RS (%).
- Loss Magnitude (LM)
  - Candidates will be able to define LM.
  - Candidates will be able to identify and describe the two categories of loss (primary and secondary).
- Primary Loss
  - Candidates will be able to define primary loss.
  - Candidates will be able to describe examples of primary loss.
  - Candidates will be able to identify which forms of loss are most common for primary loss.
- Secondary Loss
  - Candidates will be able to define secondary loss.
  - Candidates will be able to describe an example of secondary loss.
- Secondary Loss Event Frequency (SLEF)
  - Candidates will be able to define SLEF.
  - Candidates will be able to identify the data type for SLEF (%).
- Secondary Loss Magnitude (SLM)
  - Candidates will be able to define SLM.
  - Candidates will be able to identify which forms of loss are most common for secondary loss.
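
The factor relationships behind the taxonomy objectives above can be summarized briefly: TEF is driven by contact frequency and PoA, LEF by TEF and Vuln, and risk by LEF and LM. A minimal Python sketch of those relationships, using hypothetical point values purely for illustration (a real FAIR analysis would estimate each factor as a calibrated range, not a single number):

```python
# Hypothetical point values; a real analysis would use calibrated ranges.
contact_frequency = 24.0       # threat agent contacts per year
probability_of_action = 0.25   # fraction of contacts that become threat events
vulnerability = 0.30           # fraction of threat events that become loss events
loss_magnitude = 50_000.0      # loss per loss event (USD)

tef = contact_frequency * probability_of_action  # Threat Event Frequency (per year)
lef = tef * vulnerability                        # Loss Event Frequency (per year)
ale = lef * loss_magnitude                       # annualized loss exposure (USD/year)

print(f"TEF={tef:.1f}/yr  LEF={lef:.2f}/yr  ALE=${ale:,.0f}/yr")
```

Multiplying single values this way is a simplification; it only illustrates which factor feeds which.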

Terms
- Asset
  - Candidates will be able to define asset.
  - Candidates will be able to describe examples of assets.
- Threat
  - Candidates will be able to define threat.
  - Candidates will be able to describe examples of threats.
- Threat Communities
  - Candidates will be able to define threat community.
  - Candidates will be able to describe examples of threat communities.
- Threat Profiling
  - Candidates will be able to define threat profiling.
  - Candidates will be able to describe examples of threat profile elements.
  - Candidates will be able to describe the importance/value of threat profiles.
- Secondary Stakeholders
  - Candidates will be able to define secondary stakeholders.
  - Candidates will be able to describe examples of secondary stakeholders.
- Threat Event
  - Candidates will be able to define threat event.
  - Candidates will be able to describe an example of a malicious threat event.
  - Candidates will be able to describe an example of a non-malicious threat event.
  - Candidates will be able to explain the difference between threat events and loss events.
- Loss Event
  - Candidates will be able to define loss event.
  - Candidates will be able to describe an example of a loss event.
- Primary Stakeholder
  - Candidates will be able to define primary stakeholder.
  - Candidates will be able to describe an example of a primary stakeholder.
- Loss Flow
  - Candidates will demonstrate an understanding of loss flow.
- Forms of Loss
  - Candidates will be able to identify the six forms of loss.
- Productivity
  - Candidates will be able to identify the two types of productivity loss (reduced revenue, unproductive employee time).
- Revenue
  - Candidates will be able to describe an example of revenue loss.
  - Candidates will be able to describe the difference between lost revenue and delayed revenue.
  - Candidates will be able to identify sources of reliable data regarding lost revenue.
- Employee Productivity
  - Candidates will be able to describe an example of resource utilization loss.
  - Candidates will be able to identify sources of data related to the cost of employee time.
- Response
  - Candidates will be able to define response loss.
  - Candidates will be able to identify examples of response loss.
  - Candidates will be able to identify sources of data for response costs.
- Replacement
  - Candidates will be able to define replacement cost.
  - Candidates will be able to describe examples of replacement costs.
- Competitive Advantage
  - Candidates will be able to define competitive advantage loss.
  - Candidates will be able to describe an example of competitive advantage loss.
  - Candidates will be able to identify potentially reliable sources of competitive advantage loss data within an organization.
- Fines and Judgments (F&J)
  - Candidates will be able to define F&J loss.
  - Candidates will be able to describe an example of F&J loss.
  - Candidates will be able to identify potentially reliable sources of F&J data.
- Reputation
  - Candidates will be able to describe reputation damage.
  - Candidates will be able to describe examples of reputation damage.
  - Candidates will be able to identify potential sources of reliable reputation damage data within an organization.
- Controls
  - Candidates will be able to define control.
- Avoidance
  - Candidates will be able to describe examples of controls that reduce the potential for contact with threat agents.
- Deterrence
  - Candidates will be able to describe examples of deterrent controls.
- Resistance
  - Candidates will be able to describe examples of resistive controls.
- Responsive
  - Candidates will be able to describe examples of responsive controls.

Results

Interpreting Results
- Candidates will be able to describe frequency and magnitude results from a FAIR analysis.

Communicating Results
- Qualifiers
  - Candidates will be able to describe the purpose for applying qualifiers to the results of an analysis.
  - Candidates will be able to identify the two types of qualifiers.
- Fragile Qualifier
  - Candidates will be able to define the fragile qualifier.
  - Candidates will be able to describe an example of a fragile condition.
- Unstable Qualifier
  - Candidates will be able to define the unstable qualifier.
  - Candidates will be able to describe an example of an unstable condition.
- Qualitative Translation
  - Candidates will be able to identify why translating quantitative results into qualitative values may be useful.
  - Candidates will demonstrate an understanding of challenges associated with defining and using qualitative scales.
- Severity/Significance Scales
  - Candidates will be able to describe the difference between capacity for loss and subjective tolerance for loss.
- Capacity for Loss
  - Candidates will demonstrate an understanding of capacity for loss.
- Subjective Tolerance for Loss
  - Candidates will be able to define subjective tolerance for loss.
- Mapping Quantitative Results to Qualitative Scales
  - Candidates will be able to demonstrate translating quantitative values into qualitative ranges (see the sketch following this list).
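
As a concrete illustration of such a translation, the sketch below maps a quantitative annualized loss figure onto a hypothetical four-level qualitative scale. The labels and dollar thresholds are invented for illustration; in practice they would be derived from the organization's capacity and tolerance for loss:

```python
# Hypothetical qualitative scale; the thresholds would come from the
# organization's capacity and tolerance for loss, not from this sketch.
SEVERITY_SCALE = [
    (100_000, "Low"),
    (1_000_000, "Moderate"),
    (10_000_000, "High"),
    (float("inf"), "Severe"),
]

def qualitative_severity(annualized_loss: float) -> str:
    """Map a quantitative loss exposure (USD/year) to a qualitative label."""
    for upper_bound, label in SEVERITY_SCALE:
        if annualized_loss < upper_bound:
            return label
    return SEVERITY_SCALE[-1][1]

print(qualitative_severity(250_000))  # -> "Moderate" under these thresholds
```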

Business Case Development
- Candidates will be able to describe the process of developing business cases based on risk analyses.

Complementing Other Frameworks
- Candidates will demonstrate an understanding of how FAIR complements other security assessment frameworks (e.g., ISO).

Analysis Process

Assumptions
- Candidates will be able to describe the role assumptions play in analyses.
- Candidates will be able to identify ways of managing the effect of assumptions in analyses.

Scoping/Definition
- Scoping/Definition
  - Candidates will be able to describe why scenario scoping and definition is important.
  - Candidates will be able to describe examples of how an inadequately scoped analysis may become challenging.
- Loss Event Definition
  - Candidates will demonstrate an understanding of why a clear loss event definition is critical.
  - Candidates will be able to describe an example of a loss event.
- Identifying Relevant Threat Communities
  - Candidates will be able to define threat community.
- Threat Profiling
  - Candidates will be able to define threat profiling.
  - Candidates will be able to identify advantages to performing threat profiling.
  - Candidates will be able to identify potential threat profile parameters.
- Identifying the Asset(s)
  - Candidates will be able to define asset.
  - Candidates will be able to describe why a clear definition of the assets at risk is critical in performing good analyses.
- Identifying Event Vectors
  - Candidates will be able to define threat vector.
  - Candidates will be able to describe why differentiating threat vectors in an analysis can be important.
- Identifying Types of Threat Events
  - Candidates will be able to describe an example of a malicious scenario.
  - Candidates will be able to describe an example of an error scenario.
  - Candidates will be able to describe an example of a failure scenario.
  - Candidates will be able to describe an example of a natural scenario.
- Scenario Parsing
  - Candidates will be able to identify key considerations that are important when deciding whether to combine or decompose scenarios.

Documenting Rationale
- Documenting Rationale
  - Candidates will be able to describe why documenting measurement rationale is important.
- Good versus Bad Documentation
  - Candidates will be able to identify good versus bad rationale documentation.

Choosing Abstraction Level
- Choosing Abstraction Level
  - Candidates will be able to identify the reasons for choosing higher or lower levels of abstraction in analysis.
- Data Quality
  - Candidates will be able to identify good versus poor data.
  - Candidates will be able to identify the characteristics of good data.
- Diminishing Returns
  - Candidates will be able to describe the principle of diminishing returns within the context of choosing an abstraction level for analysis.

Finding Data
- Finding Data
  - Candidates will be able to identify potential sources of information for various risk factors.
  - Candidates will be able to describe the difference between good and poor sources of data.
- Subjective versus Objective Data
  - Candidates will be able to describe the difference between questions that elicit more subjective data versus more objective data.

Troubleshooting Analyses
- Troubleshooting Analyses
  - Candidates will be able to identify different methods for troubleshooting analyses.
- Using the Taxonomy
  - Candidates will be able to describe how to use the taxonomy to resolve disagreements between analyst estimates.
- Multiple Outcomes
  - Candidates will be able to describe how to use multiple analysis outcomes to resolve disagreements between analyst estimates.
- Evaluating Assumptions
  - Candidates will demonstrate an understanding of the role different assumptions can play in analyst estimate disagreements.

Measurement

Calibration
- Calibration
  - Candidates will be able to describe the purpose of calibration.
- Starting with the Absurd
  - Candidates will be able to describe the purpose for starting with absurd estimates.
  - Candidates will be able to describe an example of starting with the absurd.
- Decomposing the Problem
  - Candidates will be able to describe the purpose for decomposing the problem within the context of making estimates.
  - Candidates will be able to describe an example of decomposing a problem.
- Testing Confidence using the Wheel
  - Candidates will demonstrate an understanding of the purpose for using the wheel when making calibrated estimates.
  - Candidates will be able to describe an example of using the wheel to make calibrated estimates.
- 90% Confidence Overall
  - Candidates will be able to describe an example of using the wheel to make calibrated estimates with 90% confidence.
- 95% Confidence at Each End
  - Candidates will be able to describe an example of using the wheel to make calibrated estimates with 95% confidence at either end of the range.
- Challenging Assumptions
  - Candidates will be able to describe the purpose for challenging assumptions when estimating.

Distributions
- Candidates will demonstrate an understanding of the advantages of using distributions when making measurements.
- Candidates will be able to identify the four parameters used when making estimates in FAIR analyses (see the sketch following this list).
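
The four estimate parameters are minimum, maximum, most likely value, and confidence. The sketch below shows one common way (not mandated by FAIR) to turn those four parameters into a distribution, using a modified-PERT (betaPERT) form; the estimate values are illustrative:

```python
import numpy as np

def pert_samples(minimum, most_likely, maximum, confidence=4, n=10_000,
                 rng=np.random.default_rng(0)):
    """Draw samples from a modified-PERT distribution.

    `confidence` (the PERT lambda) concentrates the curve around the
    most likely value; 4 is the conventional default.
    """
    span = maximum - minimum
    alpha = 1 + confidence * (most_likely - minimum) / span
    beta = 1 + confidence * (maximum - most_likely) / span
    return minimum + rng.beta(alpha, beta, n) * span

# Illustrative TEF estimate: between 1 and 20 threat events per year,
# most likely 5, with the default confidence.
tef = pert_samples(1, 5, 20)
print(f"mean ~ {tef.mean():.1f} events/yr, mode near 5")
```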

Most Likely Values
- Candidates will be able to describe what the most likely value in a distribution represents.
- Candidates will be able to define mode within the context of a distribution.

Monte Carlo
- Candidates will be able to describe how Monte Carlo works (see the sketch following this list).
- Candidates will be able to identify the primary advantage of using Monte Carlo.
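
A minimal sketch of the Monte Carlo idea: draw many samples from each input distribution, combine them trial by trial, and summarize the resulting output distribution. All values are illustrative, and a simple triangular distribution stands in for whatever distribution an actual analysis would use:

```python
import numpy as np

# Monte Carlo sketch: sample each factor many times, combine per trial,
# then summarize the output distribution. All values are illustrative.
rng = np.random.default_rng(0)
n = 10_000

lef = rng.triangular(0.1, 0.5, 2.0, n)            # loss events per year
lm = rng.triangular(10_000, 50_000, 400_000, n)   # loss per event (USD)

ale = lef * lm   # annualized loss exposure, one value per trial

p10, p50, p90 = np.percentile(ale, [10, 50, 90])
print(f"ALE ~ 10th: ${p10:,.0f}  median: ${p50:,.0f}  90th: ${p90:,.0f}")
```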

Accounting for Uncertainty
- Range Confidence
  - Candidates will be able to describe how confidence in the end-points of a range is determined.
- Curve Shaping
  - Candidates will be able to describe how confidence in the most likely value shapes the distribution curve.

Accuracy versus Precision
- Candidates will be able to describe the difference between accuracy and precision.
- Candidates will be able to describe the primary concern regarding precision.
- Candidates will be able to describe an example of an estimate that is precise but inaccurate.
- Candidates will be able to describe an example of an estimate that is accurate but not precise.
- Candidates will demonstrate an understanding of the concept of “useful degree of precision”.

Subjectivity versus Objectivity
- Candidates will demonstrate an understanding of the difference between objectivity and subjectivity.
- Candidates will be able to describe an example of data that is more subjective in nature.
- Candidates will be able to describe an example of data that is more objective in nature.
- Candidates will demonstrate an understanding that pure objectivity is not achievable.

Deriving Vulnerability (Vuln)

- Deriving Vulnerability (Vuln)
  - Candidates will be able to describe the process of deriving Vuln using TCap and RS estimates (see the sketch following this list).
- Threat Capability (TCap) Continuum
  - Candidates will be able to define TCap continuum.
- Defining a Threat Community TCap Distribution
  - Candidates will be able to describe how to estimate the TCap for a threat community.
  - Candidates will be able to describe what the minimum, maximum, and ML points on a TCap distribution represent.
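
A minimal sketch of deriving Vuln this way, assuming TCap and RS are both estimated on the same percentile continuum and compared via Monte Carlo sampling; the min/most-likely/max estimates below are illustrative:

```python
import numpy as np

# Derive Vuln by comparing sampled threat capability (TCap) against
# sampled resistance strength (RS) on the same 0-100 continuum.
# Vuln is the fraction of trials in which TCap exceeds RS.
rng = np.random.default_rng(0)
n = 10_000

# Illustrative (min, most likely, max) estimates on the TCap continuum.
tcap = rng.triangular(30, 60, 95, n)   # capability of the threat community
rs = rng.triangular(50, 70, 85, n)     # strength of the resistive controls

vuln = np.mean(tcap > rs)   # fraction of threat events that would succeed
print(f"Vuln ~ {vuln:.0%}")
```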

Ordinal Scales
- Candidates will demonstrate an understanding of limitations associated with ordinal scales.

Diminishing Returns
- Candidates will demonstrate an understanding that more effort in gathering data is not always offset by material improvements in analysis quality.

The Open Group Certification for People: FAIR Certification Program

- The Candidate must be able to explain The Open Group Certification for People: FAIR Certification Program, and distinguish between the certification levels as an advanced certification level is developed.