SecurityHowTo - henk52/knowledgesharing GitHub Wiki

Security How to

Introduction

Purpose

Describes the purpose of security analysis in applications and infrastructure, and how to establish and maintain security practices over time.

  • #1 Rule - You're never secure, you mitigate risk.

Vocabulary

  • 2FA - Two-factor Authentication
  • AES - Advanced Encryption Standard. 128-, 192-, and 256-bit key sizes.
  • ASM - Application Security Maturity
  • Attack Surface - The sum of entry points exposed to a potential attacker.
  • BSIMM - Building Security In Maturity Model (lessons learned).
  • Bug - An implementation-level software problem; it may exist in code without ever being executed. Relatively easy to discover and to remedy.
  • CA - Certificate Authority. A trusted entity that issues SSL/TLS certificates.
  • CAPTCHA - Completely Automated Public Turing test to tell Computers and Humans Apart
  • CC - Common Criteria for information technology security evaluation.
  • CIS - Center for Internet Security
  • CISA - Cybersecurity and Infrastructure Security Agency
  • CLASP - Comprehensive Lightweight Application Security Process.
  • CORS - Cross-Origin Resource Sharing.
  • CSA - Cloud Security Alliance
  • CSF - Cybersecurity Framework
  • CSP - Content Security Policy
  • CVE - Common Vulnerabilities and Exposures. A public catalog of disclosed vulnerabilities, each with a unique identifier.
  • CVSS - Common Vulnerability Scoring System. Standardized risk rating system.
  • CWE - MITRE Common Weakness Enumeration
  • DAST - Dynamic Application Security Testing.
  • DREAD - Damage potential, Reproducibility, Exploitability, Affected users, Discoverability. Risk rating scheme by Microsoft.
  • Fault injection - Compile-time or run-time injection of errors to test resilience.
  • FIPS - Federal Information Processing Standard.
  • Flaw - An architectural or design-level problem that can result in serious security issues and that can be much more expensive to fix than implementation-level errors.
  • FSR - Final Security Review.
  • Fuzzing - Fault injection at run time.
  • HVA - High Value Asset
  • IDS - Intrusion Detection System
  • IPS - Intrusion Prevention System
  • MFA - Multifactor Authentication
  • NEAT - Necessary, Explained, Actionable, and Tested (security UX framework)
  • NIST - U.S. National Institute of Standards and Technology.
  • NVD - National Vulnerability Database
  • OCTAVE - Operationally Critical Threat, Asset and Vulnerability Evaluation. Risk management framework.
  • OWASP - Open Web Application Security Project
  • PA-DSS - Payment Application Data Security Standard.
  • Penetration testing - Hands-on attack simulation that can adapt to business logic and proprietary protocols.
  • SAMM - Software Assurance Maturity Model (OWASP)
  • SAR - Security Assurance Requirements
  • SAST - Static Application Security Testing.
  • SDLC - Secure/Software Development Life Cycle
  • SIEM - Security Information and Event Management
  • SFR - Security Functional Requirements
  • SHA - Secure Hash Algorithm
  • SOP - Same-Origin Policy
  • SSL - Secure Socket Layer. Broken; do not use.
  • STRIDE - Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Escalation of privilege. Risk categorization scheme by Microsoft.
  • TLS - Transport Layer Security.
  • Vulnerability scan - Pre-defined automated scans; cannot adapt to business logic or proprietary protocols. No findings does not mean no vulnerabilities.
  • VPN - Virtual Private Network
  • XSS - Cross-Site Scripting

References

Security Objectives

All security work maps to:

  • Availability - users always able to access the service/data.
  • Confidentiality - Data can only be accessed by authorized parties.
  • Integrity - The guarantee that the data is in its original state, without modification.

Define which objectives apply to your system before starting threat modeling or design review.

Use your security goals to:

  • Filter the set of applicable design guidelines

  • Guide threat modeling

  • Scope and guide architecture and design reviews

  • Help set code review objectives

  • Guide security test planning and execution

  • Guide deployment reviews

  • Decide what you do not want to happen

  • Decide what to protect

  • Protect intangible assets as well as tangible ones

Examples:

  • Prevent attackers from obtaining data
  • Meet service-level agreements
  • Protect the company's credibility

Availability

Users can always access the service. Protect availability with:

  • Component redundancy
  • Resistance to denial-of-service (DoS) attacks
  • Fault-tolerant and failure-tolerant software
  • Software patching / upgrading
  • Communication bandwidth
  • Hardware maintenance
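
Component redundancy can be sketched as client-side failover: try each replica's health check in order and route to the first one that responds. The names (`first_available`, the health-check callable) are illustrative, not a library API:

```python
# Sketch: client-side failover across redundant replicas.
# first_available and the health-check callable are made up for illustration.

def first_available(replicas, check_health):
    """Return the first replica whose health check passes, else None."""
    for replica in replicas:
        try:
            if check_health(replica):
                return replica
        except Exception:
            continue  # a failing probe counts as an unavailable replica
    return None

# Example: the primary is down, so traffic falls back to the secondary.
status = {"primary": False, "secondary": True}
chosen = first_available(["primary", "secondary"], lambda r: status[r])
```

In production the health check would be a network probe with a timeout, and the replica list would come from service discovery.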

Confidentiality

Data can only be accessed by authorized parties.

Control how assets are stored, who can access them, and under what circumstances. The main mechanisms are:

  • Encryption: makes data or transmissions unreadable to unauthorized parties.
  • Authentication: proving who users are using something they know, have, or are.
  • Authorization: limiting access to resources on a need-to-know basis.

Confidentiality is achieved by limiting information access and disclosure to only authorized users and preventing access by or disclosure to unauthorized users.
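
A minimal sketch of authenticating with "something they know": a salted, slow password hash verified with a constant-time comparison. The parameter values (iteration count, salt size) are illustrative, not a recommended policy:

```python
import hashlib
import hmac
import os

# Sketch: password authentication with a salted, slow hash.
# Iteration count and salt size are illustrative values only.

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Return (salt, digest) for storage; generates a fresh salt if none given."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
```

`hmac.compare_digest` avoids leaking how many leading bytes matched through timing.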

Integrity

The guarantee that data is unaltered from its original state: consistent, accurate, trustworthy, and valid. Protect integrity with:

  • Checksums and hashes
  • Digital signatures
  • Encrypted transport
  • Data validation: confirm data has not changed accidentally or deliberately.
  • Source validation: confirm who sent the data and where it came from; this requires a way to validate the data chain.

Data pedigree is a record of the ancestry of data items and metrics of their estimated reliability. Meeting integrity requirements means:

  • Preventing authorized users from making improper modifications.
  • Preventing unauthorized users from making any modifications.
  • Maintaining internal and external consistency of data and programs.
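
The checksum/hash mechanism above can be sketched with a message authentication code (MAC): a keyed hash that detects any modification by a party who does not hold the key. The hard-coded key is for illustration only; a real system would fetch it from a secret store:

```python
import hashlib
import hmac

# Sketch: integrity protection with an HMAC tag.
# The key is hard-coded purely for illustration.

KEY = b"demo-only-secret"

def sign(data: bytes) -> str:
    """Compute an HMAC-SHA256 tag over the data."""
    return hmac.new(KEY, data, hashlib.sha256).hexdigest()

def verify(data: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(data), tag)

tag = sign(b"amount=100")
```

Unlike a plain checksum, an attacker cannot recompute the tag after tampering without the key.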

Implementation and execution overview

Getting started

  • Ramp up phase
    • Establish security requirements
    • Identify key personnel
      • Identify privacy expert
      • Identify security expert
    • Create quality gates/bug bars
    • Document how security information is to be handled
      • where to document
      • who has access
      • how to handle new security issues being discovered
  • Create the first threat model
  • Validate the threat model
  • Create an incident response plan
  • Explore mitigations
  • Identify core security training

Sprint activities

  • Update the threat models
  • Perform static code analysis
  • Fix issues identified by code analysis tools
  • Encode all untrusted web output
  • Final Security Review (FSR)
    • All 'every-sprint' requirements have been completed.
    • At least one requirement from each sub-bucket list has been completed TODO where are these sub-bucket lists defined?
    • No security bug that ranks higher than the designated sprint bug bar is open
  • Conduct security deployment reviews (eng211) - TODO what is this?

Regular activities

Security practices completed on a regular basis, spread across the project lifetime. Also called bucket practices.

  • Creating the bug bars
  • Conducting attack surface reviews
  • Only one requirement from each bucket per sprint TODO what does this mean?
  • Product teams decide what tasks to address TODO when where?
  • No requirement can be ignored
  • Verification tasks
    • Interface fuzzing
    • File fuzz testing
    • Attack surface analysis
    • Binary analysis
    • Penetration testing?
  • Design review
    • Conduct a privacy review TODO
    • Review crypto design
    • Assembly naming and APTCA TODO
    • User Account Control TODO
  • Planning
    • Update privacy support documents TODO
    • Update security response contacts
    • Update network down plan
    • Define/Update security bug bar
  • Update design guidelines
    • Guidelines must be:
      • Actionable - Associated with a vulnerability that can be mitigated through the guideline
      • Relevant - Associated with a vulnerability that is known to affect real applications TODO
      • Impactful - Represents key engineering decisions that will have wide-ranging impact; design mistakes can have a cascading impact on your development life cycle
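
The interface/file fuzzing verification tasks above can be approached with a minimal mutation fuzzer: mutate a known-good seed input and record any unhandled exception. `parse_record` is a toy stand-in for the parser or interface under test:

```python
import random

# Minimal mutation-fuzzing sketch. parse_record is a toy stand-in
# for the real parser or interface under test.

def parse_record(data: bytes) -> int:
    name, number = data.split(b":")  # expects exactly one "name:number" pair
    return int(number)

def mutate(seed: bytes, rng: random.Random) -> bytes:
    data = bytearray(seed)
    data[rng.randrange(len(data))] = rng.randrange(256)  # flip one random byte
    return bytes(data)

def fuzz(seed: bytes, rounds: int = 200, rng_seed: int = 1) -> list:
    rng = random.Random(rng_seed)
    crashes = []
    for _ in range(rounds):
        case = mutate(seed, rng)
        try:
            parse_record(case)
        except Exception as exc:  # any unhandled exception is a finding
            crashes.append((case, type(exc).__name__))
    return crashes

findings = fuzz(b"alice:42")
```

Real fuzzing uses coverage feedback and smarter mutations (e.g. AFL-style), but the loop above is the core idea.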

Yearly activities

  • Security education
  • Evaluate security requirements
  • Evaluate tooling and automation
  • Review the threat model

Threat model analysis overview

  • Threat models - what threats can affect your software. Classify threats and prioritize vulnerabilities.
    • The process of decomposing a system's architecture and identifying key structural elements and system assets (valuable resources to be protected):
      • system entry and exit points
      • data and control flows
      • security mechanisms
      • potential attackers, to highlight applicable risks and associated attacks against the system.

Purpose of threat modeling

  • Produce software that is secure by design.
  • Think about and discuss product security in a structured way.
  • Allow the development team to predictably and effectively define security problems early in the process.
  • Document and share application security knowledge.
  • Get a new perspective in order to overcome “creator blindness”.
  • Inform the secure development process.
  • Heighten awareness of the customer’s security perspective.

Common benefits of threat modeling:

  • Think beyond canned attacks
  • Identify Top-N lists, attackers and Doomsday Scenarios.
    • Doomsday scenarios express extreme situations that could threaten your organization or even cause it to go out of business.
    • You should build your software to account for these situations and to avoid or mitigate them.
  • Identify where threat agents exist relative to the architecture.
    • including insiders.
  • Identify components that need additional protection.
    • Highlight assets, risks and flaws in your system's design.
    • Determine which components are likely to be targeted by attackers and how they will be attacked.
      • Put additional security in or remove the functionality.
  • Determine whether business or security objectives can be met.

Simplified Threat Sheet Example

TODO Simplified sheet

Ramp up phase

Establish security requirements

Questions to ask (eng211):

  • Tangible Assets to Protect
    • Are there user accounts and passwords to protect?
    • Is there confidential user information (such as credit card numbers) that needs to be protected?
    • Is there sensitive intellectual property that needs to be protected?
    • Can this system be used as a conduit to access other corporate assets that need to be protected?
  • Intangible assets to protect
    • Are there corporate values that could be compromised by an attack on this system?
    • Is there potential for an attack that may be embarrassing, although not otherwise damaging?
  • Compliance requirements
    • Are there corporate security policies that must be adhered to?
    • Is there security legislation you must comply with?
    • Is there privacy legislation you must comply with?
    • Are there standards you must adhere to?
    • Are there constraints forced upon you by your deployment environment?
  • Quality of service requirements
    • Are there specific availability requirements you must meet?
    • Are there specific performance requirements you must meet?

Legislation in the US

  • Sarbanes-Oxley Act (SOX): penalties for exposing or falsifying financial data.
  • Gramm-Leach-Bliley Act (GLBA): protects consumers' personal financial information held by financial institutions.
  • General Data Protection Regulation (GDPR): personal data protection (EU legislation with extraterritorial reach).
  • Health Insurance Portability and Accountability Act (HIPAA)
    • Health Information Technology for Economic and Clinical Health Act (HITECH): requirements for security and privacy of health care information.
  • Payment Card Industry Data Security Standard (PCI-DSS)
    • Payment Application Data Security Standard (PA-DSS)

Ask the questions:

  • What specific assets need to be protected?
  • What are your compliance requirements?
  • What are your quality-of-service requirements?
  • What data is considered confidential?

First Threat Model

The steps

  • Preparation for threat model analysis
  • For each significant component
    • Attack resistance analysis
    • Construct a threat tree
    • Fill out the DREAD columns
      • Damage
      • Reproducibility
      • Exploitability
      • Affected users
      • Discoverability
    • Develop a mitigation strategy
  • Validate the threat model
  • Report findings

Significant components include:

  • Software
  • Infrastructure

Preparation for threat model analysis

The key entry points in the threat model analysis are the interfaces to the product:

  • Enumerate all the interfaces
  • Interaction with other services
  • All incoming and outgoing traffic endpoints
    • List open ports
  • Network boundaries
  • Security boundaries
  • Delivery pipeline
    • What does the pipeline deliver, where, and how
    • Who has access, and how
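
For the "list open ports" step, open TCP ports can be probed directly: a completed connect is treated as "open", anything else (closed, filtered, unreachable) as not. Host and port values below are illustrative:

```python
import socket

# Sketch: probe which TCP ports on a host accept connections.
# A completed connect counts as "open"; filtered ports simply time out.

def open_ports(host, ports, timeout=0.5):
    found = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return found

# e.g. open_ports("127.0.0.1", [22, 80, 443])
```

Only probe hosts you are authorized to scan; for real inventories, a dedicated scanner (e.g. nmap) gives far more detail.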

An interface can be:

  • Network port.
  • Configuration files.
  • Environment variables.
  • Input fields, in the application.
  • External libraries.
  • External images.

Input to Threat Modeling

  • Architecture diagrams
  • Deployment/configuration guides
  • Source code
  • Penetration test reports
  • User guides
  • Interviews with:
    • Business analyst
    • Technical project lead
    • Architect
    • Lead developer
    • Application component developer
    • Build engineer
    • Requirements engineer
    • QA engineer
    • Product support engineer
    • Network specialist
  • Requirements specifications
  • Use cases
  • Actually using the running system

Create the Architecture diagrams

Ideally provide the Software architecture documents for the entire product.

Coverage includes:

  • Application
  • Infrastructure
    • Deployment environments.
    • Pipelines.

In the Architecture framework called 'Software Architecture Documentation' the following view packets are of relevance:

  • Module Uses - Shows what each module uses to work.
    • e.g.
      • What external/internal libraries a module uses.
      • What images and components a container uses.
  • Component and Connector - shows what a component communicates with, both internally and externally.
    • All the view packets are relevant.
  • Allocation, Deployment - shows where the component is installed.
  • Allocation, Install - shows how the component is installed/deployed into its environment.

Create Application overview

If you do not have a Software Architecture Document, start with the application overview and then populate the relevant parts of the Architecture Document.

  • Draw the end-to-end deployment scenario
    • Draw servers
    • Connections
    • Protocols
    • Application stacks
    • Clearly mark what is internet, intranet, single server, etc.
    • List ports
    • Authentication mechanisms being used
  • Identify key usage scenarios
    • What are the important features of your application? What does it do?
    • Identify the application's main functionality and usage
      • With a primary focus on the Create, Read, Update, and Delete (CRUD) functionality
    • Also look at several scenarios happening simultaneously
    • Identify which scenarios are out of scope
  • Identify technologies
    • List the technologies and key features of the software and platforms that you use
      • Operating systems
      • Web server software
      • Application frameworks
      • Database server software
      • Development languages
  • Trust Modeling - defining which entities (users, services, systems) are trusted, to what degree, and under what conditions.
    • Trust Zones are logical boundaries that group components sharing the same trust level.
      • When data crosses a trust zone boundary (e.g., from the internet into your internal network), that crossing is a potential attack surface and warrants scrutiny
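
The trust-zone idea above can be made mechanical: label each component with a zone and flag every data flow whose endpoints sit in different zones as a candidate attack surface. Component names and zone labels are made up for illustration:

```python
# Sketch: flag data flows that cross a trust zone boundary.
# Component names and zone labels are illustrative.

ZONES = {
    "browser": "internet",
    "web_app": "dmz",
    "database": "internal",
}

FLOWS = [
    ("browser", "web_app"),   # crosses internet -> dmz
    ("web_app", "database"),  # crosses dmz -> internal
    ("web_app", "web_app"),   # stays inside one zone
]

def boundary_crossings(flows, zones):
    """Return flows whose endpoints sit in different trust zones."""
    return [(src, dst) for src, dst in flows if zones[src] != zones[dst]]

crossings = boundary_crossings(FLOWS, ZONES)
```

Each flagged flow is a place to check authentication, validation, and encryption in the threat model.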

Attack resistance analysis

Assess the system's ability to withstand known types of attacks:

  • Identify general categories of risks.
    • preferably use existing model or checklist
      • STRIDE - Microsoft
        • Focuses mostly on the threat end
        • Also useful for enumerating vulnerabilities
      • CAPEC (Common Attack Pattern Enumeration and Classification)
        • Comprehensive matrix of attacks
        • Useful for identifying vulnerabilities, as well as for mitigating and detecting them in production applications
        • Dictionary of attack patterns
      • ATT&CK
        • Enumerates dozens of attacks under twelve major categories
        • Includes matrices for different platforms
  • Map attack patterns to each identified risk using attack or vulnerability checklists.
  • Identify architecture elements that could be affected by these attacks.
  • Determine if the controls placed around the identified elements are sufficient to thwart the corresponding attacks.

Examples of attack patterns

  • The Seven Pernicious Kingdoms
  • The 24 deadly sins of software security
    • Web application sins
      • SQL injection
      • Web server-related vulnerabilities (XSS, XSRF, response splitting)
      • Web client-related vulnerabilities (XSS)
      • Magic URLs, predictable cookies, hidden form fields
    • Implementation sins
      • Buffer overruns
      • Format string problems
      • Integer overflows
      • C++ catastrophes
      • Catching exceptions
      • Command injection
      • Failure to handle errors correctly
      • Information leakage
      • Race conditions
      • Poor usability
      • Not updating easily
      • Executing code with too much privilege
      • Failure to protect stored data
      • Mobile code sins
    • Cryptographic sins
      • Weak password-based systems
      • Weak random numbers
      • Using cryptography incorrectly
    • Networking sins
      • Failing to protect network traffic
      • Improper use of PKI, especially SSL
      • Trusting network name resolution
  • OWASP Top 10
    • A1-Injection
    • A2-Broken authentication and session management
    • A3-XSS
    • A4-Insecure direct object references
    • A5-Security misconfiguration
    • A6-Sensitive data exposure
    • A7-Missing function level access control
    • A8-Cross-site request forgery (CSRF)
    • A9-Using components with known vulnerabilities
    • A10-Unvalidated redirects and forwards
    • OWASP Top 10 2013
    • OWASP Top Ten Cheat Sheet
  • SANS Top 25

Construct a threat tree

To construct a threat tree:

  • Start by listing a high-level threat.
  • Ask what specific sub-threats or conditions could enable that threat and list them as child nodes.
  • Repeat this process for each new node.
  • Stop when sufficient detail is provided, or when a threat can no longer be decomposed.

Example:

  • Tampering with process
    • Corrupt state
      • Input validation failure
      • Access to memory
    • Tampering with subprocess
    • Provide false credentials
      • Failure to check call chain
        • Callers
        • Callees
      • Spoofing external entity
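
The example tree above can be captured as nested data; enumerating its leaves yields the concrete attacker conditions to test or mitigate. Structure and function names are illustrative:

```python
# Sketch: the example threat tree as nested dicts; leaves are the
# concrete conditions an attacker needs.

THREAT_TREE = {
    "Tampering with process": {
        "Corrupt state": {
            "Input validation failure": {},
            "Access to memory": {},
        },
        "Tampering with subprocess": {},
        "Provide false credentials": {
            "Failure to check call chain": {
                "Callers": {},
                "Callees": {},
            },
            "Spoofing external entity": {},
        },
    }
}

def leaves(tree):
    """Yield the leaf conditions of a threat tree."""
    for name, children in tree.items():
        if children:
            yield from leaves(children)
        else:
            yield name

leaf_conditions = list(leaves(THREAT_TREE))
```

Each leaf maps naturally to a row in the threat sheet and to a test case.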

Develop a mitigation strategy

There are four basic mitigation methods to address threats. Listed in order of preference, they are:

  • Redesign the code or application to eliminate the threat (the threat no longer applies).
    • Redesigning is the only way to truly eliminate a threat, so start here.
    • Ask whether particular functionalities are necessary. If not, remove them. If so, design them with security as a guiding principle.
  • Implement well-understood threat mitigation techniques.
    • Choose a design known to mitigate most threats.
    • Consult your security advisor; investigate what other applications have done in similar situations; use the Microsoft SDL Threat Modeling Tool.
  • Invent new threat mitigation techniques.
    • Use only when no pre-existing techniques exist, or when application design prohibits standard techniques.
    • Proceed with caution and have the security advisor check the work.
    • This is an inherently risky and time-consuming alternative which can easily affect other properties of an application, such as performance or scalability.
  • Accept the risk.
    • Consult the bug bar before choosing this option.

MS SDL default mitigations

  • Spoofing-Authentication:
    • To authenticate principals (users or machines):
      • Basic authentication
      • Digest authentication
      • Cookie authentication
      • Kerberos authentication
      • Public Key Infrastructure (PKI) systems such as TLS and certificates
      • IPSec
      • Digitally signed packets
    • To authenticate code or data:
      • Digital signatures
      • Message authentication codes
      • Hashes
  • Tampering-Integrity:
    • Windows Mandatory Integrity Controls
    • Access Control Lists (ACLs)
    • Digital signatures
    • Message Authentication Codes (MACs)
  • Repudiation-Non repudiation:
    • Strong Authentication
    • Security logging and auditing
    • Digital Signatures
    • Secure time stamps
    • Trusted third parties
  • Information Disclosure-Confidentiality:
    • Encryption
    • ACLs
  • Denial of Service-Availability:
    • ACLs
    • Filtering
    • Quotas
    • Authorization
    • High availability design
  • Elevation of Privilege-Authorization:
    • ACLs
    • Group or role membership
    • Privilege ownership
    • Permissions
    • Input validation

Populating the threat sheet

Threat model sheet

TODO show an example sheet here

Should contain:

  • Tag - An ID number for the threat.
  • Title - Short descriptive title to sum up the issue.
  • Reference to attack surface
  • Description of the threat.
  • Actor - what type of Actor could execute this threat.
  • Category - STRIDE category.
  • Objective affected - What object/entity/component is affected.
  • Outcome - what can happen in case this threat is successfully executed.
    • This can be a list of multiple things.
  • Ticket - tag or reference to the ticket system used for tracking these issues.
  • Status
    • New - Arrived and no one has looked at it.
    • Backlog - when priority has been assigned.
    • Open - being investigated.
    • Blocked - Waiting for external response.
    • In review -
    • Closed - Addressed or dropped.
    • Ignored? - won't fix?
  • Risk - See DREAD calculations below.
    • This value is based on the DREAD columns.
  • Dread - OWASP Threat Modeling Cheat Sheet
    • DREAD model reference (David LeBlanc, Microsoft)
    • Damage - how bad would an attack be?
      • Damage potential must be judged in context. For a server app, any crash is typically serious; if Notepad crashes non-exploitably, that is a low-priority issue.
      • Impact?
      • Rating:
        • High:
          • Loss of service availability (how many users are affected is captured under 'Affected users')
          • Exposure of customer data (confidentiality)
          • False information can be sent to valid customers (integrity)
          • Loss of X% of yearly income.
        • Medium:
          • Loss of login ability for < 2 s
          • Loss of Y% of yearly income.
        • Low:
          • Loss of Z% of yearly income.
    • Reproducibility - how easy is it to reproduce the attack?
      • Rating:
        • High:
          • Easy to reproduce; can be described in less than two pages of instructions.
        • Medium:
          • Requires a description of the logic (e.g., state machines) or significant protocol knowledge
        • Low:
          • Even the admins/devs have a hard time reproducing this
    • Exploitability - how much work is it to launch the attack?
      • Worst case is anonymous access, and it can be scripted.
        • Such things can be turned into worms.
      • Best case is that it requires a fairly high level of access, such as authenticating as user, or it requires a special configuration, or maybe an information leak.
        • Maybe you have to guess a certain web page's path and name, and then launch it.
        • Some exploits take a penetration tester an hour or more to set up manually.
      • Rating:
        • High:
          • No access or privilege needed.
        • Medium:
          • You have to be a member of a small (10-15?) group.
        • Low:
          • It has to occur in a short (5 s) timing window that occurs randomly, less often than every 2 months.
    • Affected users - how many people will be impacted?
      • Rating:
        • High:
          • More than 15% of our customers.
        • Medium:
          • 1-15% of our customers
        • Low:
          • Less than 1% of our customers.
    • Discoverability - how easy is it to discover the threat?
      • Perhaps the most controversial component, but good information to have.
      • Avoid taking comfort from low discoverability; high discoverability warrants immediate concern.
      • Rating:
        • High:
          • Something that is highly discoverable is publicly known, or very similar to something that is publicly known. No access or privilege needed.
        • Medium:
          • Dev or admin has access to the knowledge.
        • Low:
          • Low discoverability means it takes intimate knowledge of the internal workings of your app to sort out.
            • Low discoverability is still risky: what seems hard to find today may be published publicly tomorrow.
  • Mitigation
    • Explanation of the mitigation. Explain what the mitigation is and why it works.
  • Validation?
  • Investigation notes - Include things you learned while investigating.
  • Entry points that are impacted? - This may be "see description" to reduce redundancy.
  • Protected resources? - for which access is affected by particular mitigations. This might be in the form of "see description" to minimize redundancy.

DREAD Calculations

DREAD model reference (David LeBlanc, Microsoft)

Rate all entries:

  • 1 = low
  • 2 = medium
  • 3 = high

Base severity = Damage + RA_Bonus

RA_Bonus:

  2: R + A > 4
  1: R + A > 3
  0: otherwise

priority_factor: Exploitability and Discoverability addition.

  4: E = 3 and Di = 3
  3: E = 3, Di = 2  (or E = 2, Di = 3)
  2: E = 3, Di = 1  (or E = 1, Di = 3)
  1: E = 2, Di = 2
  0: otherwise
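
The RA_Bonus, base severity, and priority_factor rules above transcribe directly into code (ratings: 1 = low, 2 = medium, 3 = high; function names are mine):

```python
# Direct transcription of the DREAD calculation tables above.
# Ratings: 1 = low, 2 = medium, 3 = high.

def ra_bonus(r: int, a: int) -> int:
    """Bonus from Reproducibility + Affected users."""
    if r + a > 4:
        return 2
    if r + a > 3:
        return 1
    return 0

def base_severity(damage: int, r: int, a: int) -> int:
    """Base severity = Damage + RA_Bonus."""
    return damage + ra_bonus(r, a)

def priority_factor(e: int, di: int) -> int:
    """Exploitability/Discoverability addition (the ExDi spreadsheet formula)."""
    if e == 3 and di == 3:
        return 4
    if {e, di} == {2, 3}:
        return 3
    if {e, di} == {1, 3}:
        return 2
    if e == 2 and di == 2:
        return 1
    return 0
```

This mirrors the spreadsheet formulas further down, so the two can be cross-checked.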
| Fix value | Fix text |
| --- | --- |
| 1 | Won't fix |
| 2 | Consider next rel. |
| 3 | Next release |
| 4 | Consider SP |
| 5 | SP |
| 6 | Consider bulletin |
| 7 | Bulletin |
| 8 | Consider high pri |
| 9 | High Priority Bulletin |

| Value | Suggestion | Meaning |
| --- | --- | --- |
| 1 | Won't fix | Dropped |
| 2 | Backlog | Low priority, pick up eventually |
| 3 | Next sprint | Planned for upcoming sprint |
| 4 | This sprint | Already in current sprint |
| 5 | Hotfix | Deploy outside normal cycle |
| 6 | Critical | Needs immediate attention |
| 7 | Incident | Production is impacted now |

Spreadsheet formulas for automation:

  • Base severity: =I2+IF((J2+L2)>4,2,IF((J2+L2)>3,1,0))

    • 'I2' etc. refer to a cell (column and row) in the sheet; adjust the references to your own column layout.
    • The same formula for a different layout: =M2+IF((O2+S2)>4,2,IF((O2+S2)>3,1,0))
  • ExDi: =IF(AND(Q2=3,U2=3),4,IF(AND(Q2=3,U2=2),3,IF(AND(Q2=2,U2=3),3,IF(AND(Q2=3,U2=1),2,IF(AND(Q2=1,U2=3),2,IF(AND(Q2=2,U2=2),1,0))))))

  • Fix text

    • =IFS( AC2=1, "Won't fix", AC2=2, "Backlog", AC2=3, "Next sprint", AC2=4, "This sprint", AC2=5, "Hotfix", AC2=6, "Critical", AC2=7, "Incident", AC2=8, "Incident", AC2=9, "Incident", TRUE, "" )

OWASP Top Ten 2017 - About Risks

Risk-rating table columns: Vector, Prevalence, Detectability, Impact, Probability, Rating.

Validating the threat model

  • Validating the Model:
    • To validate the threat model, you must ensure that it fully describes the application and that it accurately portrays potential threats.
    • The set of diagrams composing the threat model should be signed off by development leads, test leads and program managers.
    • Diagrams should adhere to certain guidelines; for example, your diagrams should not be as detailed as a flowchart, class diagram, or call graph.
    • Each diagram should contain at least one trust boundary. Otherwise why are you drawing it?
    • Are items in the diagram numbered? If not, it's easy to miss some as you transfer the data to other documents.
    • Does data flow magically between databases? There needs to be a process in between. Does data magically appear? Data originates from external actors, not from data stores.
    • Are there data sinks? Data is placed in a data store for a reason. Either the data is used by external actors or it's wasting disk space.
    • Are all elements on the diagram clearly labeled? Are any data flows labeled with generic terms such as “read”, “write”, or “query”? If so, give them a more descriptive name.
  • Validating the Threats:
    • Ensure that for each element on the threat diagram, there is a corresponding set of threats.
    • Does each process element have an associated threat? Does each data store element have at least one threat for each of T, R, I, and D?
    • Adding numbers to the threat diagram can be very helpful here, especially when there are numerous threats for a given element.
  • Validating the Mitigations:
    • For each mitigation, note whether it is a standard or custom-made mitigation.
    • For all custom-made mitigations, document why a standard mitigation was not used. This will allow effort to be focused on whether the custom-made mitigation works as intended.
    • Each mitigation should be tracked with the project's issue/bug tracking system.
    • Have developers investigate each mitigation design, hand it off to test, and close the item when correctly implemented and tested.
    • Tags can be used to track issues identified by threat modeling.
    • All trust boundaries should be evaluated for suitability for fuzz testing.
    • Testers should parse the threat models carefully looking for areas where other such tests make sense.
    • Testers should also focus on disconnects between specified behavior and actual behavior as this is often where security issues are found.
    • Consider what assumptions are being made in the specifications, as well as in the product.
    • Testers should create test cases to validate all documented assumptions since bad assumptions can lead to vulnerabilities.
    • Once all identified threats and mitigations have been properly tested, there should be no remaining paths of attack.
    • At this point, it is useful to re-check for any new threats that might not have been originally identified.
  • Validating Dependencies and Assumptions:
    • Throughout the threat modeling process, consider what assumptions are being made regarding external modules.
      • A hypothetical assumption might be “HTTP.sys protects against cross-site scripting”. This assumption is actually false; HTTP.sys does not protect against cross-site scripting.
    • A good way to validate assumptions is to talk with the owners of each module pertaining to the assumption, for example the owner of HTTP.sys.
    • A good way to start the discussion is by asking for their threat models and to provide them with your assumptions.

Other Notes

Security best practices

  • Input validation

    • Document the data flow
    • Keep it centralized
    • Keep it balanced between security and user experience (e.g., don't reject input you could handle automatically, such as removing extra spaces)
    • Use a library
    • Order of validation:
      • canonicalization
      • whitelist validation
      • encoding
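
A minimal sketch of that order for a hypothetical username field: canonicalize first, then whitelist-validate, then encode for the output context. The username rule is an example policy, not a recommendation:

```python
import html
import re
import unicodedata

# Sketch of the validation order: canonicalize -> whitelist -> encode.
# The username whitelist rule is an example policy only.

USERNAME_RE = re.compile(r"^[a-z0-9_]{3,20}$")

def canonicalize(value: str) -> str:
    """Normalize Unicode, trim, and lowercase before any checks."""
    return unicodedata.normalize("NFKC", value).strip().lower()

def validate(value: str) -> str:
    """Accept only values matching the whitelist; reject everything else."""
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("rejected by whitelist")
    return value

def encode_for_html(value: str) -> str:
    """Encode for the HTML output context."""
    return html.escape(value)

def process_username(raw: str) -> str:
    return encode_for_html(validate(canonicalize(raw)))
```

Canonicalizing first prevents encoded or visually-equivalent input from sneaking past the whitelist; encoding last matches the actual output context.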
  • Least privilege

    • What
      • Reduces the attack surface
      • Limits capability after a successful attack
    • How
      • Using a limited-user account context
      • Removing write privileges for web application users
      • Configuring the firewall to allow only HTTPS (or HTTP)
      • Setting file permissions that prevent modification of web content files
    • Method
      • Start with nothing
      • Segment your application; easily assign roles
      • Grant temporary privilege and revoke upon completion
      • Have stakeholder buy-in
  • Defense in depth

    • E.g.
      • Implement a web app firewall
      • Implement web server and other platform protections
      • Harden the server's OS
      • Properly validate all application input
      • Set database constraints to ensure proper data formats
      • Create audit logs to track application operations
  • Cryptography

    • In transit
    • At rest
    • Integrity
    • Confidentiality
  • Install anti-virus and other security SW as appropriate

  • Consider using a hardening guide or tool appropriate for your app/OS

  • Ensure that the servers are physically secure

  • Disable or rename default accounts

  • Establish strong password policies

  • Keep the app/system up to date

  • Ensure that proper auditing and log file management are in place

  • Set proper file and directory access rights

  • Regularly audit the full system configuration

  • Use software to perform regular vulnerability scans

  • Manage system configuration settings with version control software

  • Deploy intrusion detection systems to identify any overlooked misconfigurations

  • Monitor search engines to identify possible information leaks

  • Utilize log analysis or event management software to identify unusual system activity

  • Logging for:

    • Recording security incidents and policy violations
    • Maintaining evidence for legal proceedings
    • Gathering information on app errors.
    • Detecting and alerting to possible intrusions
    • Measuring application performance
    • Maintaining an audit log for investigation and forensics
  • Ensure proper log content

    • Sanitize output
      • Remove invalid characters; neutralize executable content and markup; limit excessive length
    • Ensure integrity
      • Least privilege
      • Read-only archives
      • Checksums and signatures
      • Transmit with encryption
    • Maintain credibility (for legal purposes)
      • Establish a process
      • Synchronize time
      • Use correct time zones
      • Verify that logging works
    • Analyze logs
      • Correlate events across sources
      • Use a SIEM - Security Information and Event Management (system/solution)
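The output-sanitization points above can be sketched as a helper run on every untrusted value before it is written to a log; the exact character policy and length cap are assumptions:

```python
import re

def sanitize_for_log(value: str, max_len: int = 200) -> str:
    """Sketch of log-output sanitization: neutralize characters that
    enable log forgery or markup injection in log viewers, and cap length."""
    # Replace CR/LF so an attacker cannot inject fake log lines.
    value = value.replace("\r", " ").replace("\n", " ")
    # Strip remaining control characters.
    value = re.sub(r"[\x00-\x1f\x7f]", "", value)
    # Neutralize markup so HTML-based log viewers do not render it.
    value = value.replace("<", "&lt;").replace(">", "&gt;")
    # Bound the length so one entry cannot flood the log.
    return value[:max_len]

print(sanitize_for_log("user\nadmin logged in <script>"))
```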

Regular activities HowTos

Creating the bug bar

The bug bar defines, for each severity level, the maximum number of open security issues acceptable for deployment:

  • High severity - fix before deployment.
  • Low severity - acceptable to deploy with known issues.

The possible values for level of severity are:

  • Critical
    • Impact across the enterprise and not just the local LOB application/resources
    • Exploitable vulnerability in deployed production application
  • Important
    • Exploitable security issue
    • Policy or standards violation
    • Affects local application or resources only
    • Risk rating = High Risk
  • Moderate
    • Difficult to exploit
    • Non-exploitable due to other mitigation
    • Risk rating = Medium risk
  • Low
    • Bad Practice
    • Non-exploitable
    • Not directly exploitable, but may aid in other attacks
    • Risk rating = minimal risk

See also:

Examples with STRIDE categorization

First, be aware of the risk level before deciding whether or not to accept the risk.

  • Critical:

    • [E] Remote elevation of privilege by anonymous users.
    • [E] Remote execution of arbitrary code by anonymous users.
  • Important:

    • [D] Remote denial of service that can be easily exploited by anonymous users.
    • [E] Remote elevation of privilege by authenticated users.
    • [E] Remote execution of arbitrary code by authenticated users.
    • [I] Information disclosure that allows attackers to obtain information from anywhere in the system.
    • [S] Spoofing of a specific computer or user.
    • [T] Permanent modification of any user data or data used in making security decisions in a common or default scenario. Such modification persists after a restart of the system.
  • Moderate:

    • [D] Denial of service that requires extensive effort by anonymous or authenticated users
    • [I] Information disclosure that allows attackers to obtain information from known locations in the system. Such locations are not intended to be exposed.
    • [S] Spoofing of a random computer or user.
    • [T] Permanent modification of any user data or data used in making security decisions in a specific scenario. Such modification persists after a restart of the system.
  • Low:

    • [I] Information disclosure that exposes random data to an attacker.
    • [T] Temporary modification of data in a specific scenario. Such modification does not persist after a restart of the system.
  • S - Spoofing.

  • T - Tampering.

  • R - Repudiation.

  • I - Information disclosure.

  • D - Denial of service.

  • E - Escalation of privilege.

Bug report creation

When a software user files a bug report, they must

  • assign a STRIDE category to the bug,
  • decide whether the bug is a client or server bug,
  • determine what scope the bug affects.

These users are typically software developers and QA staff.

STRIDE stands for:

  • S - Spoofing

  • T - Tampering

  • R - Repudiation

  • I - Information Disclosure

  • D - Denial of Service (DoS)

  • E - Elevation of Privilege (EoP)

  • The relevant "scope" values are:

    • Client - Spoofed trusted UI in common/default scenario
    • Client - Spoofed trusted UI in specific other scenario
    • Client - Spoofed UI as part of a larger attack scenario
    • Server - Spoofed specific user or computer over secure protocol
    • Server - Spoofed random user or computer over secure protocol
    • Client - Tampered trusted data that persists after restart
    • Client - Tampered data that does not persist after restart

From there, a matrix is created that assigns a level of severity to each combination.

Example entry in matrix (there can be several such entries):

  • STRIDE Category: Spoofing
  • Scope: Client - Spoofed trusted UI in common/default scenario
  • Description: Ability for attacker to present a UI that is different from but visually identical to the UI that users must rely on to make valid trust decisions in a default/common scenario. A trust decision is defined as any time the user takes an action believing some information is being presented by a particular entity, either the system or some specific local or remote source.
  • Severity Level: Important
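As a sketch, the matrix can be kept as a simple lookup table. Only the first entry below mirrors the example above; the other combinations are hypothetical placeholders, and unknown combinations are flagged for triage rather than given a silent default severity:

```python
# Bug-bar matrix sketch: (STRIDE category, scope) -> severity.
# Only the Spoofing entry comes from the example above; the Tampering
# entries are illustrative assumptions.
BUG_BAR = {
    ("Spoofing", "Client - Spoofed trusted UI in common/default scenario"): "Important",
    ("Tampering", "Client - Tampered trusted data that persists after restart"): "Important",
    ("Tampering", "Client - Tampered data that does not persist after restart"): "Moderate",
}

def severity(category: str, scope: str) -> str:
    # Combinations missing from the matrix need explicit triage,
    # not a silently assigned severity.
    return BUG_BAR.get((category, scope), "Untriaged")

print(severity("Spoofing",
               "Client - Spoofed trusted UI in common/default scenario"))  # Important
```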

Scratchpad

Note: Exception handling is a common location where privilege elevation can occur.

  • Identify roles
    • Application roles, duties and functions
    • Identify who can do what within your application
      • What can users do?
      • What privileged groups and roles exist?
      • Who can perform sensitive functions?
      • What is supposed to happen?
      • What is not supposed to happen?
  • Identify application security mechanisms
    • Identify any key information you know about your application's security mechanisms
      • Input and data validation
      • Authentication
      • Authorization
      • Configuration management
      • Session management
      • Cryptography
      • Sensitive data handling
      • Parameter manipulation
      • Exception management
      • Auditing and logging

Simple test run for Threat modeling

  • Choose the application service
  • What ports are open?
  • What ENV vars are used?
  • What does the container consist of?
  • What is installed in the container?
    • Can it be used as a jumping-off point?
  • What libraries are used in the application?
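The open-ports question can be approximated with a quick local TCP probe. This is a sketch for a first look, not a replacement for a proper scanner such as Nmap; host and port range are assumptions:

```python
import socket

def open_ports(host: str = "127.0.0.1", ports=range(1, 1025)) -> list[int]:
    """Probe which TCP ports on the service host accept connections."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)
            # connect_ex returns 0 when the connection succeeds.
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found
```

Compare the result against the ports the threat model says should be open; anything extra is attack surface to explain or close.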

LLM prompts

LLM prompt for investigating the ENV vars

Role: You are a Senior Security Architect and Penetration Tester specializing in Cloud-Native applications and Secret Management.

Context: I am providing you with the source code and configuration files for a repository. I want you to perform a deep-dive security analysis focused specifically on how Environment Variables are used, stored, and managed.

Objective:

Identify threats using the STRIDE framework.

Calculate the risk of each identified threat using the DREAD framework.

STRIDE Focus Categories (Apply specifically to Env Vars):

Spoofing: Can env vars be manipulated to impersonate a service or user?

Tampering: Can an attacker modify env vars in transit or at rest to change app behavior?

Repudiation: Is there logging for when sensitive env vars are accessed or changed?

Information Disclosure: Are secrets being leaked in logs, error messages, or /info endpoints?

Denial of Service: Can malformed env vars cause the application to crash?

Elevation of Privilege: Can an attacker gain admin rights by injecting specific env vars?

DREAD Scoring Criteria (Scale 1-10 for each):

Damage Potential: How great is the damage if the vulnerability is exploited?

Reproducibility: How easy is it to reproduce the attack?

Exploitability: How much effort/skill is required to exploit it?

Affected Users: How many users will be impacted?

Discoverability: How easy is it to find the vulnerability?

Specific Areas to Audit:

Hardcoding: Look for default values in process.env or os.getenv calls.

Exposure: Check if env vars are printed to console or included in client-side bundles.

Validation: Check if the app validates the types and values of env vars on startup.

CI/CD: Analyze .github/workflows, Dockerfile, or docker-compose.yml for insecure secret passing.

Output Format:
Please provide a table for the STRIDE analysis, followed by a ranked list of risks using the DREAD scores. Conclude with a "Remediation Plan" listing high-priority fixes.
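The "Validation" audit area above asks whether env vars are checked on startup. A fail-fast sketch, assuming hypothetical variable names (`DATABASE_URL`, `MAX_WORKERS`) and rules:

```python
import os
import sys

# Hypothetical required variables; names and rules are illustrative only.
REQUIRED = {
    "DATABASE_URL": lambda v: v.startswith(("postgres://", "postgresql://")),
    "MAX_WORKERS": lambda v: v.isdigit() and 0 < int(v) <= 64,
}

def validate_env(environ=os.environ) -> list[str]:
    """Fail-fast startup check: report missing or malformed env vars
    instead of crashing later with an attacker-controllable value."""
    problems = []
    for name, is_valid in REQUIRED.items():
        value = environ.get(name)
        if value is None:
            problems.append(f"{name} is not set")
        elif not is_valid(value):
            problems.append(f"{name} has an invalid value")
    return problems

if __name__ == "__main__":
    errors = validate_env()
    if errors:
        sys.exit("; ".join(errors))  # refuse to start with a bad environment
```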

LLM prompt for investigating the storage

  • e.g. VPC for containers
  • or EBS for EC2 instances


LLM prompt for investigating network issues

Role: You are a Senior Network Security Engineer and Infrastructure Auditor specializing in Zero Trust Architecture and Container Security.

Context: I am providing you with the source code, infrastructure-as-code (IaC), and container configuration files for a repository. I want you to perform a deep-dive security analysis focused exclusively on Open Ports, Network Services, and Attack Surface Exposure.

Objective:
Identify network-level threats using the STRIDE framework.
Calculate the risk of each identified threat using the DREAD framework.
STRIDE Focus Categories (Applied to Networking):
Spoofing: Are there services vulnerable to DNS/ARP poisoning or lack of mutual TLS (mTLS)?
Tampering: Is sensitive data being transmitted over unencrypted ports (e.g., HTTP vs. HTTPS, Redis without TLS)?
Repudiation: Is there sufficient logging for successful and failed connection attempts to these ports?
Information Disclosure: Do exposed ports leak service versions, stack traces, or provide unauthenticated /metrics or /health endpoints with sensitive data?
Denial of Service: Are listening services susceptible to connection exhaustion, slowloris attacks, or large payload crashes?
Elevation of Privilege: Can an attacker use an exposed management port (e.g., JMX, Debugging, SSH) to gain shell access or move laterally?

DREAD Scoring Criteria (1-10):
Damage Potential: If this port is exploited, can the attacker reach the database or internal network?
Reproducibility: Can the port be reached from the public internet or just the VPC?
Exploitability: Is there a known CVE for the service version identified?
Affected Users: Does a service failure on this port cause a total system outage?
Discoverability: How easily would a port scan (Nmap) identify this as a high-value target?
Specific Files & Logic to Audit:

Container Configs: Check Dockerfile (EXPOSE instructions) and YAML files in the subdirectories of the deployment directory (ports vs. expose mappings).

Orchestration: Analyze Kubernetes Service (NodePort/LoadBalancer), Ingress, and NetworkPolicy definitions.

Application Logic: Search code for app.listen(), server.bind(), or socket creation. Identify if they bind to 0.0.0.0 (all interfaces) instead of 127.0.0.1.

Cloud Infrastructure: Review Terraform or CloudFormation files for Security Group rules and "Any/Any" (0.0.0.0/0) ingress rules.

Default Ports: Flag the use of well-known ports for non-standard services or unencrypted defaults for databases (e.g., port 6379, 5432, 27017).

Output Format:
Provide a Network Exposure Map (listing every port found and its purpose), followed by the STRIDE Table and DREAD Risk Rankings. End with a "Network Hardening Roadmap."