SecurityQuickNotes - henk52/knowledgesharing GitHub Wiki
Introduction
Vocabulary
- 2FA - Two factor Authentication
- AES - Advanced Encryption Standard. 128, 192 and 256 bit.
- ASM - application security maturity
- Attack Surface - Represents the number of entry points you expose to a potential attacker.
- BSIMM - Building Security In Maturity Model. (lessons learned)
- CAPTCHA - Completely Automated Public Turing test to tell Computers and Humans Apart
- CC - The Common Criteria for Information Technology Security Evaluation
- CIS - Center for Internet Security, Inc.
- CISA - Cybersecurity and Infrastructure Security Agency
- CLASP - Comprehensive Lightweight Application Security Process.
- CORS - Cross-Origin Resource Sharing.
- CSA - Cloud Security Alliance
- CSF - The Cybersecurity Framework
- CSP - Content Security Policy
- CVE - Common Vulnerabilities and Exposures. Catalog of publicly disclosed vulnerabilities.
- CVSS: Common Vulnerability Scoring System. Standardized risk rating system.
- CWE - MITRE Common Weakness Enumeration group
- DAST - Dynamic Application Security Testing.
- DREAD: Damage potential, Reproducibility, Exploitability, Affected users, Discoverability. Risk rating scheme by Microsoft.
- Fault injection - Compile time or Run time.
- FIPS - Federal Information Processing Standard.
- Fuzzing - fault injection at run time.
- HVA - High Value Asset
- IDS - Intrusion Detection System
- IPS - Intrusion Prevention Systems
- MFA - Multifactor Authentication
- NEAT - Necessary, Explained, Actionable, and Tested
- NIST - U.S. National Institute of Standards and Technology.
- NVD - National Vulnerability Database
- OCTAVE: Operationally Critical Threat, Asset and Vulnerability Evaluation.(risk management framework)
- OWASP - Open Web Application Security Project
- PA-DSS - Payment Application Data Security Standard.
- Penetration testing - can adapt to business logic and proprietary protocols.
- SAMM - Software Assurance Maturity Model (OWASP)
- SAR - Security Assurance Requirements
- SAST - Static Application Security Testing.
- SDLC - Secure/Software Development Life Cycle
- Requirements stage:
- ID security and compliance objectives
- Establish security standards
- Perform risk assessments
- Consider threat paths/ risk profile
- Look into legal requirements.
- Design
- Analyze the application's attack surface
- Perform threat modeling
- Document the application's security architecture
- Development
- Address security guidelines
- code reviews
- static analysis
- Testing
- perform dynamic analysis
- review the application's attack surface
- execute penetration tests
- Prep for Deployment
- perform final security review
- create an incident response plan
- document compliance with the appropriate policies and standards
- SEIM - Security Event and Incident Management (system/solution)
- SFR - Security Functional Requirements
- SHA - Secure Hash Algorithm
- SOP - Same-origin Policy
- SSF - Software Security Framework
- SSG: Software security Group
- SSI
- SSL - Secure Sockets Layer. Badly broken; should never be used.
- Standards & Guidelines
- Facilitate Security effort.
- (Security version of DevOps)
- Breadth - broaden scope
- Depth - improve level and detail of security effort
- Efficiency - go from fix to prevent
- STRIDE: A risk categorization scheme used at Microsoft.
- Spoofing - Can an attacker gain access using a false identity?
- Tampering - Can an attacker modify data as it flows through the application?
- Repudiation - Can an attacker erase proof of what happened. (Gaining deniability, or reducing the chance for someone to see something has happened, that should not. Like whether or not you can identify if a lock has been picked)
- Information disclosure - Can an attacker gain access to private or potentially injurious data?
- Denial of service - Can an attacker crash or reduce the availability of the solution?
- Escalation of privilege - Can an attacker assume/gain the identity of a privileged user?
- Threat model:
- TLS - Transport Layer Security.
- Vulnerability scan - pre-defined sets of scans, can scan proprietary protocols.
- VPN - Virtual Private Network
- XSS - Cross site scripting
References
Process
There are three categories:
- Every-Sprint Practices - Critical security practices that should be performed within every release
- Bucket Practices - security practices that are completed on a regular basis and are normally spread across the project lifetime
- One-Time Practices - security practices that are completed once at the start of an agile project
Special areas of consideration include:
- Security education
- At least one security course per year
- (eng211) security design guidelines
- Tooling and automation
- Threat modeling
- Fuzz testing
- Bug dense and "at-risk" code
- e.g. legacy code
- Exceptions
- Final security reviews
One-Time practices - getting started
- Core security training
- Identify Security Objectives
- Establish security requirements
- Create bug bar
- Perform Security and privacy risk assessment
- Identify privacy expert
- Identify security expert
- Create the Architecture diagrams
- Perform attack surface analysis/reduction
- Create Threat models
- Implement approved security tools
- Using the latest compiler
Core security training
- TODO Who needs what training?
- TODO Where can we find the training?
Every sprint
- Creating threat models
- performing static code analysis
- Updating threat models
- Fixing issues identified by code analysis tools
- Encoding all untrusted web output.
- FSR
- all 'every-sprint' requirements have been completed.
- At least one requirement from each sub-bucket list has been completed
- No security bug that ranks higher than the designated sprint bug bar is open
- Conduct security deployment reviews(eng211)
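The every-sprint practice "encoding all un-trusted web output" can be sketched in Python with the standard library's `html.escape`; `render_comment` is a hypothetical helper, not part of any framework named here.

```python
import html

def render_comment(user_input: str) -> str:
    # Encode untrusted input before embedding it in HTML, so characters
    # like < > & " cannot break out into markup (mitigates XSS).
    return "<p>" + html.escape(user_input, quote=True) + "</p>"

print(render_comment('<script>alert("x")</script>'))
# → <p>&lt;script&gt;alert(&quot;x&quot;)&lt;/script&gt;</p>
```

The same idea applies per output context: HTML body, attributes, JavaScript and URLs each need their own encoder.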
Bucket practices
- Creating bug bars
- Conducting attack surface reviews
- Only one requirement from each bucket per sprint
- Product teams decide what tasks to address
- No requirement can be ignored
- Verification task
- Interface Fuzzing
- file fuzz testing
- Attack surface analysis
- Binary analysis
- penetration testing(eng211)?
- Design review
- Conduct a privacy review
- Review crypto design
- Assembly naming and APTCA
- User Account Control
- planning
- Create privacy support documents
- Update security response contacts
- Update network down plan
- Define/Update security bug bar
- update design guidelines(eng211)
- Guidelines must be:
- Actionable - Associated with a vulnerability that can be mitigated through the guideline
- Relevant - Associated with a vulnerability that is known to affect real applications
- Impactful - Represents key engineering decisions that will have wide-ranging impact; design mistakes can have a cascading impact on your development life cycle
Methodologies
Create the Architecture diagrams
This is required for 'Attack surface analysis' and 'Threat modeling'
(exception handling, this is one of the places where e.g. privilege elevation can happen)
Diagrams:
(Probably use some of the SW Arch Doc views for this)
- Providing the overview
  - Application components
    - Purpose
      - Get an overview
    - How
      - Uses style
      - use twopi from graphviz.
  - Context diagram
    - Purpose
      - How components fit into the overall product ecosystem
    - How
      - use twopi from graphviz.
  - Deployment diagram
    - Allocation deployment style
      - use: PlantUML Component
- Enumerate all the interfaces
- Interaction with other services
- All incoming and outgoing traffic endpoints
  - List open ports
- Network boundaries
- Security boundaries
- Delivery pipeline
  - What does the pipeline deliver, where and how
  - Who has access and how
  - (Deployment diagram)?
Create Application overview
- Draw the end-to-end deployment scenario
- Draw servers
- connections
- protocols
- Application stacks
- Clearly mark what is internet, intranet, single server etc
- list ports
- Authentications being used
- Identify roles
- Application roles, duties and functions
- Identify who can do what within your application
- What can users do
- What privileged groups and roles exist?
- Who can perform sensitive functions?
- What is supposed to happen
- What is not supposed to happen
- Identify key usage scenarios
- What are the important features of your application? What does it do?
- Identify the application's main functionality and usage
- With a primary focus on the Create, Read, Update and Delete functionality
- Also look at several scenarios happening simultaneously
- Identify which scenarios are out of scope
- Identify technologies
- List the technologies and key features of the sw and platforms that you use
- Operating systems
- Web server software
- Application frameworks
- Database server software
- Development languages
- Identify application security mechanisms
- Identify any key information that you know about your application's security mechanisms
- Input and data validation
- Authentication
- Authorization
- Configuration management
- Session management
- Cryptography
- Sensitive data handling
- Parameter manipulation
- Exception management
- Auditing and logging
Where to start
ENG301
- Start with an overview.
  - The overview diagram should accurately depict the application with just a few components.
- Start with an external entity which drives the system.
- Add the main components they interact with and the primary data store.
- Connect components with appropriately labeled data flows.
- Draw trust boundaries.
  - Once the overview diagram is created, add trust boundaries.
- Iterate into detail as needed. As you diagram the security aspects of your application, you will often need to split a process in two in order to properly place a trust boundary or decompose a complicated process.
- Document your assumptions.
- Your diagrams should not be as detailed as a flowchart, class diagram, or call graph.
- Each diagram should contain at least one trust boundary. Otherwise why are you drawing it?
- Are items in the diagram numbered? If not, it's easy to miss some as you transfer the data to other documents.
- Does data flow magically between databases? There needs to be a process in between.
- Does data magically appear? Data originates from external actors, not from data stores.
- Are there data sinks? Data is placed in a data store for a reason. Either the data is used by external actors or it's wasting disk space.
- Are all elements on the diagram clearly labeled?
- Are any data flows labeled with generic terms such as "read", "write", or "query"? If so, give them a more descriptive name.
Application components
Decompose your application
- Trust boundaries
  - Indicate where the trust level changes
    - e.g. elevation required
  - Start by defining the app's outer boundaries
  - Identify access control points or key places where access requires additional privileges or role membership
  - Identify boundaries from a dataflow perspective
    - For each subsystem, consider whether you trust the upstream data flow or use case
    - If not, consider how the flow/input might be validated, authenticated and authorized.
  - Examples of boundaries
    - Web server and db server
    - Firewall
    - What other apps have access to the database?
- Data flows
  - Trace the app's data as it flows through the app
    - Start at the highest level
    - What data is used where
- Entry points
  - Entry points are also attack points
  - App entry points
  - Internal entry points
- Exit points
  - Prioritize the exit points where your application writes data that includes client input or includes data from untrusted sources such as shared databases
Attack surface analysis and reduction
eng311
Attack surface reduction is a form of attack risk reduction.
Measure the attack surface over time.
You may lower the attack surface by reducing the amount of code that untrusted users can access, making it harder for them to find code they can exploit.
Attack surface reduction can also limit the damage caused by an exploit.
Your attack surface is tightly linked to the entry points your systems expose.
The Attack Surface of an application is:
- the sum of all paths for data/commands into and out of the application
- the code that protects these paths (including resource connection and authentication, authorization, activity logging, data validation and encoding)
- all valuable data used in the application, including secrets and keys, intellectual property, critical business data, personal data and PII
- the code that protects these data (including encryption and checksums, access auditing, and data integrity and operational security controls).
You overlay this model with the different types of users - roles, privilege levels - that can access the system (whether authorized or not).
it is important to focus especially on the two extremes: unauthenticated, anonymous users and highly privileged admin users (e.g. database administrators, system administrators).
Go through every component of the solution and identify the interfaces for
- Information going in
- Information going out
  - This could leak data
  - or be used for DoS?
Write the information in a sheet with the following columns:
- Component name
- Interface tag
- Interface type
- Protocol
- Comm type
- Primary direction
- Port
- Information type
- Source of discovery
Type of data being stored: Confidential, sensitive, regulated, etc.
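A minimal sketch of building such an interface inventory sheet in Python, using the column names listed above; `write_inventory` and the example row are illustrative, not part of any tool mentioned here.

```python
import csv
import io

# Columns exactly as listed above; each row records one discovered interface.
COLUMNS = ["Component name", "Interface tag", "Interface type", "Protocol",
           "Comm type", "Primary direction", "Port", "Information type",
           "Source of discovery"]

def write_inventory(rows):
    # Produce CSV text that can be pasted into a spreadsheet.
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

example = [{
    "Component name": "auth-service", "Interface tag": "IF-01",
    "Interface type": "REST API", "Protocol": "HTTPS",
    "Comm type": "request/response", "Primary direction": "inbound",
    "Port": "8443", "Information type": "credentials (confidential)",
    "Source of discovery": "k8s service ports",
}]
print(write_inventory(example))
```

Keeping the sheet in version control next to the code makes attack-surface drift visible over time.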
How to find interfaces:
- Dockerfile export
- k8s deploy ports
- k8s service ports
- k8s ingress ports
- What other apps are on the container
- What open ports do those apps have?
- Start the container and run netstat
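As a rough stand-in for running netstat inside the container, a TCP connect probe can confirm whether a suspected port is actually listening; `is_port_open` is a hypothetical helper, a sketch rather than a scanner.

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    # connect_ex returns 0 on a successful TCP connect, i.e. something
    # is listening; non-zero means closed, filtered, or unreachable.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0
```

For real inventories, prefer netstat/ss inside the container plus an external scan, since a connect probe only sees what the network path lets through.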
Group each type of attack point into buckets based on
- risk
- external-facing
- internal-facing
- purpose
- implementation
- design
- technology
How to reduce the entry points
- Enumerate all
- All the interfaces
- all protocols
- code execution paths
- Understand trust levels required to access each entry point
- (All configs being loaded into the system)
- Pay attention to trust boundaries from which low-trust input may flow into your system.
- Follow the dataflow from entry point, through your code, to the final endpoint for that data.
- E.g. From API/UI -> App -> DB -> App2 -> UI.
- Ideally, you should have documentation that describes all the network services that are running on all the devices on your network
- network scanning tools might be helpful for identifying network services.
- By monitoring network activity, you can determine the most common attempted attack vectors against your systems over time and use that information to reduce your attack surface accordingly.
- Network monitoring also helps respond to attacks in progress and thus either block live attack traffic or minimize damage in the event of successful exploits or denial of service attacks.
Create bug bar
Creating the bug bar
When a software user files a bug report, they must assign a STRIDE category to the bug, state whether the bug is a client or server bug, and state what scope the bug affects. Users can be software developers and QA staff. STRIDE stands for:
- Spoofing
- Tampering
- Repudiation
- Information Disclosure
- Denial of Service (DoS)
- Elevation of Privilege (EoP)
- The relevant "scope" values are
- Client - Spoofed trusted UI in common/default scenario
- Client - Spoofed trusted UI in specific other scenario
- Client - Spoofed UI as part of a larger attack scenario
- Server - Spoofed specific user or computer over secure protocol
- Server - Spoofed random user or computer over secure protocol
- Client - Tampered trusted data that persists after restart
- Client - Tampered data that does not persist after restart
From there, a matrix is created that assigns a level of severity to each combination. The possible values for level of severity are:
- Critical
  - Impact across the enterprise, not just the local LOB application/resources
  - Exploitable vulnerability in a deployed production application
- Important
  - Exploitable security issue
  - Policy or standards violation
  - Affects local application or resources only
  - Risk rating = High risk
- Moderate
  - Difficult to exploit
  - Non-exploitable due to other mitigations
  - Risk rating = Medium risk
- Low
  - Bad practice
  - Non-exploitable
  - Not directly exploitable, but may aid in other attacks
  - Risk rating = Minimal risk
- None
Example entry in matrix (there can be several such entries):
- STRIDE Category: Spoofing
- Scope: Client - Spoofed trusted UI in common/default scenario
- Description: Ability for attacker to present a UI that is different from but visually identical to the UI that users must rely on to make valid trust decisions in a default/common scenario. A trust decision is defined as any time the user takes an action believing some information is being presented by a particular entity—either the system or some specific local or remote source.
- Severity Level: Important
See:
recognize when to accept risk
ENG301
- Critical:
- [E] Remote elevation of privilege by anonymous users.
- [E] Remote execution of arbitrary code by anonymous users.
- Important:
- [D] Remote denial of service that can be easily exploited by anonymous users.
- [E] Remote elevation of privilege by authenticated users.
- [E] Remote execution of arbitrary code by authenticated users.
- [I] Information disclosure that allows attackers to obtain information from anywhere in the system.
- [S] Spoofing of a specific computer or user.
- [T] Permanent modification of any user data or data used in making security decisions in a common or default scenario. Such modification persists after a restart of the system.
- Moderate:
- [D] Denial of service that requires extensive efforts by anonymous or authenticated users
- [I] Information disclosure that allows attackers to obtain information from known locations in the system. Such locations are not intended to be exposed.
- [S] Spoofing of a random computer or user.
- [T] Permanent modification of any user data or data used in making security decisions in a specific scenario. Such modification persists after a restart of the system.
- Low:
- [I] Information disclosure that exposes random data to an attacker.
- [T] Temporary modification of data in a specific scenario. Such modification does not persist after a restart of the system.
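A bug bar like the ENG301 example above is essentially a lookup table from (STRIDE category, scope) to severity. A minimal Python sketch, with keys paraphrased from the entries above (the exact key strings are an assumption, not a fixed schema):

```python
# Hypothetical bug-bar table; keys paraphrase the ENG301 entries above.
BUG_BAR = {
    ("Elevation of Privilege", "remote, anonymous users"): "Critical",
    ("Elevation of Privilege", "remote, authenticated users"): "Important",
    ("Denial of Service", "remote, easily exploited, anonymous"): "Important",
    ("Spoofing", "specific computer or user"): "Important",
    ("Spoofing", "random computer or user"): "Moderate",
    ("Information Disclosure", "random data"): "Low",
}

def severity(category: str, scope: str) -> str:
    # Unlisted combinations should be triaged, not silently defaulted.
    return BUG_BAR.get((category, scope), "Untriaged")

print(severity("Spoofing", "random computer or user"))  # → Moderate
```

Making the table data rather than prose lets the team diff it per release and enforce the "no bug above the bar" exit gate mechanically.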
Design review
Architecture and design reviews
eng211
The goals of performing an architecture and design review are to analyze application architecture and design from a security perspective, and to expose high-risk design decisions that have been made.
Organize your reviews by common application vulnerability categories and look at the areas in which mistakes are most often made and can have the most security impact.
For each vulnerability category, keep best practices in mind. Guidelines and best practices will help you discover gaps in your design or areas where mistakes are being made.
Each vulnerability category should have its own set of potential problems that you can check against. These checklists represent areas where mistakes are most often made. A checklist driven approach will ensure consistent high-quality reviews over time and will give you confidence you are achieving a high degree of coverage. As you perform your reviews, the checklists can grow based on additional issues you find and knowledge you acquire along the way.
The goals of performing an architecture and design review are to:
- Analyze application architecture and design from a security perspective.
- Expose high-risk design decisions that have been made.
- All entry points and trust boundaries are identified by the design.
- Input validation is applied whenever input is received from outside the current trust boundary.
- The design assumes that user input is malicious.
- Centralized input validation is used where appropriate.
- The input validation strategy that the application adopted is modular and consistent.
- The validation approach is to constrain, reject, and then sanitize input. Looking for known, valid, and safe input is much easier than looking for known malicious or dangerous input.
- Data is validated for type, length, format, and range.
- The design addresses potential canonicalization issues.
- Input file names and file paths are avoided where possible.
- The design addresses potential SQL injection issues.
- The design addresses potential cross-site scripting issues.
- The design does not rely on client-side validation.
- The design applies defense in depth to the input validation strategy by providing input validation across tiers.
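The "constrain, reject, and then sanitize" approach above can be sketched as an allow-list check: accept only known-good input instead of hunting for known-bad patterns. `validate_username` and its rules are illustrative assumptions.

```python
import re

# Allow-list: a username starts with a letter, then 2-31 letters,
# digits or underscores. Constrains type, length, format and range.
USERNAME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9_]{2,31}$")

def validate_username(value: str) -> str:
    # Reject anything outside the allow-list rather than trying to
    # strip out dangerous characters after the fact.
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("rejected: username fails allow-list validation")
    return value
```

The same pattern applies per field: define what valid looks like, validate at the trust boundary, and only then sanitize for the output context.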
Create a list of questions about the security of each of the following three application components:
- Deployment and infrastructure
  - Review the design of your application in relation to the target deployment environment and the associated security policies.
  - Host, Network
- Security frame
  - Review the critical areas in your application, such as authentication, authorization, input/data validation, exception management, and other areas.
  - Input validation, Authentication, ...
- Layer-by-layer
  - Finally, walk through the logical tiers of your application, and evaluate security choices within your presentation, business, and data access layers.
  - Presentation, Business, Data
Identify Security Objectives
- Confidentiality - Data can only be accessed by those with authorization
- What
- How
- Encryption - making the data or transmission of data unreadable to unauthorized users.
- Authentication - verifying that users are who they claim to be.
  - Authentication involves knowing who your users are. They need to prove who they are using:
    - something they know,
    - something they have,
    - or something they are.
- Authorization - limiting access to resources on a 'need to know' basis, generally implemented as privileges
- Integrity - The guarantee that the data is in its original state, without modification
- How
- Checksums and hashes
- Digital signatures
- Encrypted transport
- Validate
- data - that the data has not been changed inappropriately, either accidentally or deliberately.
- source - who sent the data and where the data came from. This would require a system that allows you to validate the data chain.
- What(eng150)
- Consistency
- Accuracy
- Trustworthiness
- Validity
- Availability - users are always able to access the service
  - What
    - Available
      - Server uptime
    - Secure
      - DoS resistance
    - Redundant
      - Backups
Use your security goals to:
- Filter the set of applicable design guidelines
- Guide threat modeling
- Scope and guide architecture and design reviews
- Help set code review objectives
- Guide security test planning and execution
- Guide deployment reviews
- What do you not want to happen
- What to protect
- Protect intangible assets
e.g.
- Prevent attacks from obtain data
- Meet service-level agreements
- Protect the company's credibility
ENG150
Integrity
Data pedigree is a record of the ancestry of data items as well as metrics of their estimated reliability and confidence.
Meeting integrity requirements:
- Prevent authorized users from making improper or unauthorized modifications
- Prevent unauthorized users from making modifications to data or programs
- Maintain internal and external consistency of data and programs
- Did the data actually come from the person or entity you think it did?
- How easy is it to fake data pedigree?
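The integrity mechanisms listed earlier (checksums, hashes) can be sketched with the standard library. A plain hash only catches accidental modification; an HMAC also ties the data to a shared secret, so an attacker who tampers with the data cannot recompute a valid tag. The key and messages below are illustrative.

```python
import hashlib
import hmac

def checksum(data: bytes) -> str:
    # Detects accidental modification; an attacker can recompute this.
    return hashlib.sha256(data).hexdigest()

def signed_digest(key: bytes, data: bytes) -> str:
    # HMAC-SHA256: integrity plus origin, assuming the key stays secret.
    return hmac.new(key, data, hashlib.sha256).hexdigest()

original = b"transfer 100 to account 42"
tag = signed_digest(b"shared-secret", original)

tampered = b"transfer 900 to account 66"
# Constant-time comparison avoids timing side channels.
print(hmac.compare_digest(tag, signed_digest(b"shared-secret", tampered)))  # → False
```

Digital signatures extend the same idea to parties that do not share a secret, at the cost of key management.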
Sarbanes-Oxley (commonly called SOX)
Availability
- Component redundancy
- Resistance to denial-of-service (DoS) attacks
- Fault-tolerant and failure-tolerant software
- Software patching / upgrading
- Communication Bandwidth
- Hardware maintenance
Confidentiality
Privacy = Confidentiality + Anonymity
- How much information
- How the logging system works
- How access controls are handled
It is important to control
- how an asset is stored,
- who can access an asset,
- under what circumstances.
Confidentiality is achieved by limiting information access and disclosure to only authorized users and preventing access by or disclosure to unauthorized users.
Establish security requirements
Questions to ask(eng211):
- Tangible Assets to Protect
- Are there user accounts and passwords to protect?
- Is there confidential user information (such as credit card numbers) that needs to be protected?
- Is there sensitive intellectual property that needs to be protected?
- Can this system be used as a conduit to access other corporate assets that need to be protected?
- Intangible assets to protect
- Are there corporate values that could be compromised by an attack on this system?
- Is there potential for an attack that may be embarrassing, although not otherwise damaging?
- Compliance requirements
- Are there corporate security policies that must be adhered to?
- Is there security legislation you must comply with?
- Is there privacy legislation you must comply with?
- Are there standards you must adhere to?
- Are there constraints forced upon you by your deployment environment?
- Quality of service requirements
- Are there specific availability requirements you must meet?
- Are there specific performance requirements you must meet?
Legislation in the US
- Sarbanes-Oxley (SOX): penalties for exposing or falsifying financial data.
- Gramm-Leach-Bliley Act (GLBA): protect consumers' personal financial information held by financial institutions.
- General Data Protection Regulation (GDPR, EU): personal data protection.
- Health Insurance Portability and Accountability Act (HIPAA)
- Health Information Technology for Economic and Clinical Health Act (HITECH): requirements for security and privacy of health care information.
- Payment Card Industry Data Security Standard (PCI-DSS)
- Payment Application Data Security Standard (PA-DSS)
Ask the questions:
- What specific assets need to be protected?
- What are your compliance requirements?
- What are your quality-of-service requirements?
- What data is considered confidential?
Threat modelling
Implementing the MS SDL Threat modeling tool
ENG-195
- Threat models - what threats can affect your software. Classify threats and prioritize vulnerabilities.
  - Attacker centric - anticipate what an attacker might do
  - Software centric - identify potential attacks against each element of the software design
  - Asset centric - examine the assets managed by an application (sensitive information, intellectual property, etc.)
  - Complete threat models for all functionality identified during the cost analysis phase
  - Hold a design review with your privacy subject matter expert if they have requested one, if you want confirmation that the design is compliant, or to request an exception.
- Threat model: The process of decomposing a system's architecture and identifying key structural elements and system assets, which are valuable resources to be protected:
  - system entry and exit points
  - data and control flows
  - security mechanisms
  - and potential attackers, to highlight applicable risks and associated attacks against the system.
- Attack tree: Top-down approach for decomposing risks into detailed attacks to visualize the set of all possible scenarios enabling a given risk to be realized.
- Attack vector: The path through the system that an attacker will use to carry out an attack.
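An attack tree as described above can be represented as nested AND/OR nodes, where leaves are primitive attacks marked feasible or not; a risk is realized if any full path through the tree is feasible. The node names and the dict shape below are illustrative assumptions.

```python
# Sketch of an attack tree: OR nodes need any feasible child,
# AND nodes need all children feasible, leaves carry a judgment.
def feasible(node) -> bool:
    kind = node.get("kind", "leaf")
    if kind == "leaf":
        return node["feasible"]
    children = [feasible(child) for child in node["children"]]
    return all(children) if kind == "AND" else any(children)

hijack_session = {
    "kind": "OR", "goal": "hijack user session",
    "children": [
        {"feasible": False, "goal": "steal cookie via XSS (output is encoded)"},
        {"kind": "AND", "goal": "predict session id",
         "children": [
             {"feasible": True, "goal": "observe session id format"},
             {"feasible": False, "goal": "session ids are guessable"},
         ]},
    ],
}
print(feasible(hijack_session))  # → False
```

Flipping a single leaf (say, a regression that makes ids guessable) and re-evaluating shows which mitigations actually cut attack paths.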
Purpose of threat modeling
- Produce software that is secure by design.
- Think about and discuss product security in a structured way.
- Allow the development team to predictably and effectively define security problems early in the process.
- Document and share application security knowledge.
- Get a new perspective in order to overcome “creator blindness”.
- Inform the secure development process.
- Heighten awareness of the customer’s security perspective.
Common benefits of threat modeling:
- Think beyond canned attacks
- Identify Top-N lists, attackers and Doomsday Scenarios.
- Doomsday scenarios express extreme situations that could threaten your organization or even cause it to go out of business.
- You should build your software to account for these situations and to avoid or mitigate them.
- Identify where threat agents exist relative to the architecture.
- including insiders.
- Identify components that need additional protection.
- Highlight assets, risks and flaws in your system's design.
- Determine which components are likely to be targeted by attackers and how they will be attacked.
- Put additional security in or remove the functionality.
- Determine whether business or security objectives can be met.
Threat model sheet
Should contain:
- Tag - An ID number for the threat.
- Title - Short descriptive title to sum up the issue.
- Reference to attack surface
- Description of the threat.
- Agent - what type of agent could execute this threat.
- Category - STRIDE category.
- Objective affected - What object/entity/component is affected.
- Outcome - what can happen in case this threat is successfully executed.
- This can be a list of multiple things.
- Vulnerabilities? - (saw it in a sheet on the internet)
- Controls? - (saw it in a sheet on the internet)
- Ticket - tag or reference to the ticket system used for tracking these issues.
- Status
- New - Arrived and no one has looked at it.
- Backlog - when priority has been assigned.
- Open - being investigated.
- Blocked - Waiting for external response.
- In review -
- Closed - Addressed or dropped.
- Ignored? - won't fix?
- Risk - See DREAD calculations below.
- DREAD -
- Damage - how bad would an attack be?
  - This has to be judged in the context of your app. For a server app, any sort of crash is probably a bad thing, whereas if notepad crashes non-exploitably, that's an anklebiter attack, not really a vuln.
  - Impact?
  - Rating:
    - High:
      - Loss of service availability (how many it affects is defined in 'Affected users')
      - Exposure of customer data (confidentiality)
      - False information can be sent to valid customers (integrity)
      - Loss of X% of yearly income.
    - Medium:
      - Loss of login ability <2s
      - Loss of Y% of yearly income.
    - Low:
      - Loss of Z% of yearly income.
- Reproducibility - how easy is it to reproduce the attack?
  - Rating:
    - High:
      - Easy to reproduce; can be described in less than two pages of instructions.
    - Medium:
      - Requires description of the logic, like state machines, or significant protocol knowledge
    - Low:
      - Even the admins/devs have a hard time reproducing this
- Exploitability - how much work is it to launch the attack?
  - Worst case is anonymous access, and it can be scripted.
    - Such things can be turned into worms.
  - Best case is that it requires a fairly high level of access, such as authenticating as a user, or it requires a special configuration, or maybe an information leak.
    - Maybe you have to guess a certain web page's path and name, and then launch it.
    - I've seen exploits that took a penetration tester an hour or more to set up by hand.
  - Rating:
    - High:
      - No access or privilege needed.
    - Medium:
      - You have to be a member of a small (10-15?) group.
    - Low:
      - It has to occur in a short (5 s) timing window that occurs randomly less than every 2 months.
- Affected users - how many people will be impacted?
- Rating:
- High:
- More than 15% of our customers.
- Medium:
- 1-15% of our customers
- Low:
- Less than 1% of our customers.
- Discoverability - how easy it is to discover the threat?
- perhaps the most controversial component, but good information to have.
- My advice is to not take comfort if something is low discoverability, but to get really alarmed if it is high.
- Rating:
- High:
- Something that's highly discoverable is publicly known, or very similar to something that is publicly known.
- Medium:
- Dev or admin has access to the knowledge.
- Low:
- Low discoverability is that it takes intimate knowledge of the internal workings of your app to sort out.
- This one is really risky, since what you thought was hard to sort out could get published on BUGTRAQ tomorrow.
- mitigation
- Explanation of the mitigation. Explain what the mitigation is and why it works.
- Validation?
- Investigation notes - Include things you learned while investigating.
- Entry points that are impacted? - This may be "see description" to reduce redundancy.
- Protected resources? - for which access is affected by particular mitigations. This might be in the form of "see description" to minimize redundancy.
DREAD calculations
Rate all entries:
- 1 - low
- 2 - medium
- 3 - high
Base severity (1-5) = Damage + RA_Bonus, where RA_Bonus is:
- 2: R + A > 4
- 1: R + A > 3
- 0: otherwise
Exploitability and Discoverability (ExDi) addition:
- 4: E = 3, Di = 3
- 3: E = 3, Di = 2
- 3: E = 2, Di = 3
- 2: E = 3, Di = 1
- 2: E = 1, Di = 3
- 1: E = 2, Di = 2
- 0: rest
- Base severity: =I2+IF((J2+L2)>4,2,IF((J2+L2)>3,1,0))
- ExDi: =IF(AND(R2=3,V2=3),4,IF(AND(R2=3,V2=2),3,IF(AND(R2=2,V2=3),3,IF(AND(R2=3,V2=1),2,IF(AND(R2=1,V2=3),2,IF(AND(R2=2,V2=2),1,0))))))
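The spreadsheet formulas above can be sketched in Python; this assumes each of D, R, E, A, Di is rated 1 (low) to 3 (high), as in the rating list:

```python
# DREAD severity sketch mirroring the spreadsheet formulas above:
# base severity = Damage + a bonus from Reproducibility + Affected users,
# plus an Exploitability/Discoverability (ExDi) addition.

def base_severity(damage: int, reproducibility: int, affected: int) -> int:
    ra = reproducibility + affected
    bonus = 2 if ra > 4 else 1 if ra > 3 else 0
    return damage + bonus

def exdi_addition(exploitability: int, discoverability: int) -> int:
    # Lookup table for the (E, Di) combinations listed above; all others are 0.
    table = {(3, 3): 4, (3, 2): 3, (2, 3): 3, (3, 1): 2, (1, 3): 2, (2, 2): 1}
    return table.get((exploitability, discoverability), 0)

# Worst case: damage 3, R + A = 6 gives base severity 5, ExDi addition 4.
print(base_severity(3, 3, 3))  # -> 5
print(exdi_addition(3, 3))     # -> 4
```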
Creating threat models
- 1 For each entry in the 'Attack surface' document, think through which parts of the STRIDE model are relevant for the entity
- Common STRIDE part relevant to elements/entity-types
- External entity: SR
- Spoofing
- Repudiation
- Process: STRIDE
- Spoofing - Can an attacker gain access using a false identity?
- Tampering - Can an attacker modify data as it flows through the application?
- Repudiation - Can an attacker erase proof of what happened? (Gaining deniability, or reducing the chance that someone notices something happened that should not have. Like whether or not you can tell if a lock has been picked.)
- Information disclosure - Can an attacker gain access to private or potentially injurious data?
- Denial of service - Can an attacker crash or reduce the availability of the solution?
- Escalation of privilege - Can an attacker assume/gain the identity of a privileged user?
- Data store: T(R)ID
- Dataflow: TID
- Once threats have been identified using the STRIDE model, these threats should be further investigated in order to determine all the necessary conditions that would turn them into successful attacks.
- 2 Create attack trees for relevant threat models.
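The STRIDE-per-element mapping above can be captured as a lookup table; a minimal sketch:

```python
# STRIDE categories relevant to each DFD element type, as listed above.
# S=Spoofing, T=Tampering, R=Repudiation, I=Information disclosure,
# D=Denial of service, E=Escalation of privilege.
STRIDE_BY_ELEMENT = {
    "external entity": {"S", "R"},
    "process": {"S", "T", "R", "I", "D", "E"},
    "data store": {"T", "R", "I", "D"},  # R mainly for audit-log stores
    "dataflow": {"T", "I", "D"},
}

def relevant_threats(element_type: str) -> set:
    """Return the STRIDE letters to consider for a given element type."""
    return STRIDE_BY_ELEMENT[element_type.lower()]

print(sorted(relevant_threats("Dataflow")))  # -> ['D', 'I', 'T']
```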
draft
- Create design documents
- Define and Evaluate your Assets
- Create an information flow diagram
- Define Trust Boundaries
- Identify Threat Agents
- Map Threat agents to application Entry points
- Define the Impact and Probability for each threat
- Rank Risks
Threat models
- Complete threat models for all functionality identified during the cost analysis phase.
- Ensure all threat models meet minimal threat-model quality requirements.
- All threat models and reference mitigations must be reviewed and approved by at least one developer, one tester and one program manager.
- Confirm that threat model data and associated documentation is stored using a document control system.
- The person managing threat modeling efforts should complete training beforehand.
- Any design change request (DCR) should trigger an assessment of the changes to help ensure:
  - new threats or vulnerabilities are not introduced
  - existing mitigations are not degraded.
- Create an individual work item for each vulnerability listed in threat models.
- Follow the MS NEAT security user experience guidance to improve security warnings.
- https://www.microsoft.com/security/blog/2012/10/09/necessary-explained-actionable-and-tested-neat-cards/
- https://www.sadev.co.za/content/security-hard-users-so-let-us-clean-neat-spruce
- N: Necessary – Only show messages that you need. If you can take a safe action automatically or defer the message, do that!
- E: Explained – If you do interrupt the user, explain everything to the user. EVERYTHING?! Yes, and the SPRUCE acronym will help explain what everything is.
- A: Actionable – A message should only be presented to the user if there are steps the user can take to make the right decision.
- T: Tested – A security message needs to be tested. TDD, Usability Testing, Visual Inspection, every test.
- SPRUCE:
- S: Source – Why are we showing this message? Did a website do something or a file or a user action? Tell the user.
- P: Process – Give the user the steps they need to go through to make sure they make the right decision.
- R: Risk – Explain the consequences of getting the decision wrong.
- U: Unique – If your software knows everything, do the right thing automatically.
- So if you are showing the message, it means the user has unique information that is needed to make the decision.
- Explain what information is needed (slightly similar to P).
- C: Choices – Show the user all the options and recommend the safer one.
- E: Evidence – Provide any additional information that the user may need to make the decision.
- Threat: unwelcome event or person/system from which an attack can originate.
- Bug: An implementation-level software problem that may exist in code but never be executed; relatively easy to discover and to remedy.
- Flaw: An architectural or design-level problem that can result in serious security issues and that can be much more expensive to fix than implementation-level errors.
OWASP: Open web application security project.
Input to Threat Modeling
- Architecture diagrams
- Deployment/configuration guides
- Source code
- Penetration test reports
- User guides
- Interviews; with
- Business analyst
- Tech proj lead
- Architect
- Lead developer
- Application component developer
- Build engineer
- Requirements engineer
- QA engineer
- Product support engineer
- Network specialist
- Requirements specifications
- use cases
- Actually using the running system
Steps in threat modeling
- Learn as much as possible about the target of analysis
- Discuss security issues surrounding the software.
- Determine the likelihood of compromise.
- Perform impact analysis.
- Rank risks
- Develop a mitigation strategy.
- Report findings (This looks like the tool used for rating sw issues.)
Not a threat modeling methodology:
- OCTAVE: Operationally Critical Threat, Asset and Vulnerability Evaluation.
- CVSS: Common Vulnerability Scoring System. Standardized risk rating system.
- STRIDE: Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Escalation of privilege. A risk categorization scheme used at Microsoft.
- DREAD: Damage potential, Reproducibility, Exploitability, Affected users, Discoverability. Risk rating scheme by Microsoft.
Attack resistance analysis
Assess the system's ability to withstand known types of attacks:
- Identify general categories of risks.
- Map attack patterns to each of the identified risks using attack or vulnerability checklists.
- Identify architecture elements that could be affected by these attacks.
- Determine if the controls placed around the identified elements are sufficient to thwart the corresponding attacks.
- The Seven Pernicious Kingdoms
- The 24 deadly sins of software security
- Web application sins:
  1. SQL injection
  2. Web server-related vulnerabilities (XSS, XSRF, response splitting)
  3. Web client-related vulnerabilities (XSS)
  4. Use of magic URLs, predictable cookies and hidden form fields
- Implementation sins:
  5. Buffer overruns
  6. Format string problems
  7. Integer overflows
  8. C++ catastrophes
  9. Catching exceptions
  10. Command injection
  11. Failure to handle errors correctly
  12. Information leakage
  13. Race conditions
  14. Poor usability
  15. Not updating easily
  16. Executing code with too much privilege
  17. Failure to protect stored data
  18. The sins of mobile code
- Cryptographic sins:
  19. Use of weak password-based systems
  20. Weak random numbers
  21. Using cryptography incorrectly
- Networking sins:
  22. Failing to protect network traffic
  23. Improper use of PKI, especially SSL
  24. Trusting network name resolution
- OWASP top 10
- SANS top 25
Underlying framework weakness analysis
- 1 Identify the underlying sw components used.
- 2 Then ask:
- Are there known vulnerabilities in the component version being used?
- Are there any security controls provided by the framework that are insufficient for our system?
- Is the component secured by default or must it be configured?
- Can the component be configured to be secure or must some other control be in place to use it securely?
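The first question above (known vulnerabilities in the component version in use) can be sketched as a lookup against a locally maintained advisory list. The component name and version ranges below are made-up examples; in practice this data would come from a feed such as the NVD:

```python
# Hypothetical sketch: check components against a local list of
# known-vulnerable version ranges. All advisory data here is illustrative.

KNOWN_VULNERABLE = {
    # component: list of (first_bad_version, first_fixed_version) tuples
    "examplelib": [((1, 0, 0), (1, 2, 5))],
}

def is_vulnerable(component: str, version: tuple) -> bool:
    """True if the given version falls inside a known-vulnerable range."""
    for first_bad, first_fixed in KNOWN_VULNERABLE.get(component, []):
        if first_bad <= version < first_fixed:
            return True
    return False

print(is_vulnerable("examplelib", (1, 1, 0)))  # -> True (inside bad range)
print(is_vulnerable("examplelib", (1, 2, 5)))  # -> False (first fixed version)
```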
Ambiguity analysis
- trust modeling
- Trust zones
- Data sensitivity modeling
- Threat agent modeling
When to perform threat modeling
- Requirements and use cases:
- Abuse cases
- Security requirements
- Risk analysis
- Architecture and design
- Risk analysis
- secure architecture
- Test plans
- Risk-based security tests
- Code
- Code review(tools)
- Test and test results
- Risk analysis
- Penetration testing
- feedback from the field
- Penetration testing
- Security operations
Actors of the threat modeling process
- Business stakeholder
- Business goals
- use cases
- reqs
- Software architect: Informing about SW spec, architecture and low-level design
- Spec
- High lvl arch
- Low-lvl design
- Security architect
- Spec
- High lvl arch
- Low-lvl design
- Threat modeling/ARA
- Sec std
- Sec arc
- Attack patterns
- Vuln code
- Dev. Lead
- Low-lvl design
- Code
- QA. Lead
- Running system
- Sec analyst
- Threat modeling/ARA
- Attack patterns
- Vuln code
- Known vuln
- Review code
- Pen testing
- Code
- Running system
Fundamentals of threat modeling
Eng206
- Identify and evaluate application threats and vulnerabilities.
Focus on the overall approach
- 1 Scope the model
- use scenarios
- also define what is out of scope
- Use existing documentation
- e.g. start modeling on a whiteboard
- Iterative approach: Start small then add to the model as you gain more knowledge
- also do what-if
- When you make engineering decisions, revisit the model
- Adapt the activities for your application
- Have information:
- protocols
- network set-up
- etc
- Identify security objectives
- Application overview
- Decompose your application
- Identify threats
- Identify Vulnerabilities
Your document should contain:
- Security objectives
- Key scenarios
- Protected resources
- Threat list
- Vulnerability list
Testing should happen against the identified vulnerabilities
Identify threats
- Conduct informed brainstorming activities with devs, testers, architects, security professionals and system administrators
- Identify common threats, e.g.:
- Server
- path traversal
- Denial of Service
- SQL injection
- XSS
- User
- Malware
- Brute force
- Spam
- Then identify how these could apply to your application
- Or use a goal-driven questionnaire:
- Server vulnerable to identity spoofing
- Data vulnerable to tampering
- Is sensitive information exposed in error messages?
- Examine the app layer by layer, tier by tier and feature by feature
- Server
- Threats along use cases
- Examine each use case and how someone could abuse it
- How can a client inject malicious input here
- is data written based on unvalidated input
- How could an attacker manipulate session data
- How could an attacker obtain sensitive data
- How could an attacker bypass authorization checks
- How does data flow from the front end to the back end of your application
- Which component calls which component
- What does valid data look like
- How is the data constrained
- How is data validated against expected length, range, format and type
- Where is validation performed
- What sensitive data is passed between components and across networks
- How is that secured while in transit
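The validation questions above (expected length, range, format and type) can be sketched as whitelist-style validators. The field names and rules below are illustrative, not from the source:

```python
import re

# Illustrative whitelist validation: check type, length, range and format
# before using input. The specific rules are assumptions for the example.

def validate_username(value) -> bool:
    if not isinstance(value, str):                 # type check
        return False
    if not (3 <= len(value) <= 32):                # length check
        return False
    # format check: only letters, digits and underscore
    return re.fullmatch(r"[A-Za-z0-9_]+", value) is not None

def validate_port(value) -> bool:
    return isinstance(value, int) and 1 <= value <= 65535  # type + range

print(validate_username("alice_01"))   # -> True
print(validate_username("bad name!"))  # -> False
```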
Identify Vulnerabilities
- Consider the app layer by layer
- preferably use an existing model or checklist
- STRIDE - Microsoft
- Focuses mostly on the threat end
- also useful in enumerating vulnerabilities
- CAPEC(Common Attack Pattern Enumeration and Classification)
- Comprehensive matrix of attacks
- useful for identifying vulnerabilities as well as mitigating and detecting vulnerabilities in production applications
- Dictionary of attack patterns
- ATT&CK
- Enumerates dozens of attacks under twelve major categories
- includes models for different server models
How to create an app sec threat model
ENG301
- Design
- Complete the Threat Models:
- Complete the threat models for the entire application, including all new functionality.
- File Bugs:
- File bugs for threats that have insufficient or no mitigations.
- Document Mitigations:
- Document the mitigations to the identified threats in functional and/or design specs, as appropriate.
- Conduct Threat Model Reviews:
- Conduct threat model reviews to ensure wide dissemination of the threats within the product team.
- Overview of the process
- vision - how the app can be used and abused.
- (this seems to be the objectives, requirements etc)
- Defining Guarantees - will help identify a set of baseline requirements for your application that should be kept in mind throughout its development.
- Developing Scenarios - will help identify the environments your application will be used in and the audience that will be using it.
- Understanding these environments and audiences is crucial to understanding the threats the application might face.
- Identifying Security Features and Properties - will help identify both use cases and abuse cases for various features of the application.
- model - describes the application's functionality
- Entities
- Processes (circle)
- data flows (lines)
- Function call
- Network traffic
- data stores (parallel horizontal lines)
- Database
- File
- Registry
- Shared Memory
- Queue
- Stack
- external entities (box/square)
- people
- other systems
- data feeds
- events
- notifications
- trust boundaries (dotted lines)
- User mode to kernel mode transitions
- Integrity level transitions
- Computer-to-computer communications
- Inter-process communications
- COM interfaces
- Calls to external APIs
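The model elements above can be made concrete by recording each data flow with the trust zone of its endpoints and flagging flows that cross a trust boundary. The element and zone names below are illustrative:

```python
# Minimal data-flow-diagram sketch: a flow whose endpoints sit in different
# trust zones crosses a trust boundary and deserves threat analysis.
# All names here are made up for illustration.

zones = {
    "browser": "internet",
    "web_app": "dmz",
    "database": "internal",
}

flows = [("browser", "web_app"), ("web_app", "database"), ("web_app", "web_app")]

def crosses_boundary(src: str, dst: str) -> bool:
    return zones[src] != zones[dst]

crossings = [f for f in flows if crosses_boundary(*f)]
print(crossings)  # -> [('browser', 'web_app'), ('web_app', 'database')]
```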
- threat identification - use the model to identify potential threats to the application and the assets it uses.
- For each entity/attack surface, think through what part of the STRIDE model is relevant for the entity
- Common STRIDE part relevant to elements/entity-types
- External entity: Spoofing, Repudiation
- Process: STRIDE
- Data store: T(R)ID
- Dataflow: Tampering, Information Disclosure, Denial of Service
- Once threats have been identified using the STRIDE model, these threats should be further investigated in order to determine all the necessary conditions that would turn them into successful attacks.
- Using threat trees has proven to be very successful at doing that. A high level threat is listed at the tree root, and then child nodes are added based upon what events or conditions would enable that threat.
- mitigation - for each threat, document existing mitigations and identify gaps where controls or mitigations are weak or non-existent
- validation - validate the accuracy of the model, threats and mitigations
Who is involved when?
- Developers are involved during the Model, Mitigate, and Validate phases.
- They use their detailed knowledge of coding and software implementation to discuss mitigations and end up with a better understanding of specific threats their code faces.
- Testers are involved during the Model, Identify Threats, Mitigate and Validate phases.
- They use their understanding of the application's environment to guide threat development and end up with a better understanding of how to test based on these threats.
- Program Managers are involved throughout the threat modeling process and typically drive it.
- They ensure completion and accuracy of the threat model deliverables and drive their usage throughout the software development life cycle.
- Architects are involved in the Vision phase where their background on the application domain helps identify general threats and issues.
- Security Advisors are involved mainly during the Validate phase, but also participate during the Mitigate phase.
- As a security expert, the security advisor is the authority who signs off on the threat model, threats, and corresponding mitigations.
All applications, components, and services should be threat modeled. The objective is to model the application or system as a whole, including:
- Application security-relevant features.
- Privacy-related features.
- Features where a failure would result in an application security concern.
- Features that interact with external or un-trusted processes.
threat identification
Once threats have been identified using the STRIDE model, these threats should be further investigated in order to determine all the necessary conditions that would turn them into successful attacks. Using threat trees has proven to be very successful at doing that. A high level threat is listed at the tree root, and then child nodes are added based upon what events or conditions would enable that threat.
To construct a threat tree, first start out by listing a high level threat. Next ask what specific sub-threats or conditions could enable that threat and list them as child nodes to that threat. Repeat this process for each of the new nodes. Stop when sufficient detail is provided for a threat, or when a particular threat can no longer be dissected.
e.g.
- Tampering with process
- Corrupt state
- input validation failure
- Access to memory
- Tampering with subprocess
- provide false credentials
- Failure to check call chain
- callers
- callees
- spoofing external entity
- Failure to check call chain
- Corrupt state
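The example tree above can be represented as nested dicts, which makes it easy to enumerate the leaf conditions an attacker would need (a sketch covering the first branch of the example):

```python
# Threat tree sketch: each key is a threat, each value the sub-threats or
# conditions that would enable it; leaves are concrete attacker conditions.

tree = {
    "Tampering with process": {
        "Corrupt state": {
            "input validation failure": {},
            "Access to memory": {},
        },
        "Tampering with subprocess": {
            "provide false credentials": {},
            "Failure to check call chain": {},
        },
    },
}

def leaves(node: dict) -> list:
    """Collect the leaf conditions of a threat tree, depth first."""
    out = []
    for name, children in node.items():
        out.extend(leaves(children) if children else [name])
    return out

print(leaves(tree))
# -> ['input validation failure', 'Access to memory',
#     'provide false credentials', 'Failure to check call chain']
```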
Mitigate
There are four basic mitigation methods to address threats. Listed in order of preference, they are:
- Redesign the code or application to eliminate the threat (the threat no longer applies).
  - Redesigning is a good place to start the threat mitigation process as it is the only way to truly eliminate a threat.
  - Start by asking yourself whether particular functionalities are really necessary. If not, remove them. If so, design them with security as a guiding principle.
- Implement well-understood threat mitigation techniques.
  - Choose a design known to mitigate most threats. For every problem you struggle with, chances are many thousands of other engineers have struggled with it too.
  - Talk to your security advisor;
  - Investigate what other applications have done in similar situations;
  - Use the Microsoft SDL Threat Modeling Tool and consider the recommendations made by this tool.
- Invent new threat mitigation techniques.
  - This is an inherently risky and time-consuming alternative which can easily affect other properties of an application, such as performance or scalability.
  - This alternative should primarily be used for new types of problems for which there are no pre-existing mitigation techniques, or when the application design or specific requirements prohibit the use of standard mitigation techniques.
  - Proceed with caution and ensure the work gets checked by the security advisor assigned to your project.
- Do nothing and accept the risk.
  - Consult the bug bar.
MS SDL default mitigations
- Spoofing-Authentication:
- To authenticate principals (users or machines):
- Basic authentication
- Digest authentication
- Cookie authentication
- Kerberos authentication
- Public Key Infrastructure (PKI) systems such as TLS and certificates
- IPSec
- Digitally signed packets
- To authenticate code or data:
- Digital signatures
- Message authentication codes
- Hashes
- Tampering-Integrity:
- Windows Mandatory Integrity Controls
- Access Control Lists (ACLs)
- Digital signatures
- Message Authentication Codes (MACs)
- Repudiation-Non repudiation:
- Strong Authentication
- Security logging and auditing
- Digital Signatures
- Secure time stamps
- Trusted third parties
- Information Disclosure-Confidentiality:
- Encryption
- ACLs
- Denial of Service-Availability:
- ACLs
- Filtering
- Quotas
- Authorization
- High availability design
- Elevation of Privilege-Authorization:
- ACLs
- Group or role membership
- Privilege ownership
- Permissions
- Input validation
Validating the threat model
- Validating the Model:
- To validate the threat model, you must ensure that it fully describes the application and that it accurately portrays potential threats.
- The set of diagrams composing the threat model should be signed off by development leads, test leads and program managers.
- Diagrams should adhere to certain guidelines; for example, your diagrams should not be as detailed as a flowchart, class diagram, or call graph.
- Each diagram should contain at least one trust boundary. Otherwise why are you drawing it?
- Are items in the diagram numbered? If not, it's easy to miss some as you transfer the data to other documents.
- Does data flow magically between databases? There needs to be a process in between. Does data magically appear? Data originates from external actors, not from data stores.
- Are there data sinks? Data is placed in a data store for a reason. Either the data is used by external actors or it's wasting disk space.
- Are all elements on the diagram clearly labeled? Are any data flows labeled with generic terms such as “read”, “write”, or “query”? If so, give them a more descriptive name.
- Validating the Threats:
- Ensure that for each element on the threat diagram, there is a corresponding set of threats.
- Does each process element have an associated threat? Does each data store element have at least one threat for each of T, R, I, and D?
- Adding numbers to the threat diagram can be very helpful here, especially when there are numerous threats for a given element.
- Validating the Mitigations:
- For each mitigation, note whether it is a standard or custom-made mitigation.
- For all custom-made mitigations, document why a standard mitigation was not used. This will allow effort to be focused on whether the custom-made mitigation works as intended.
- Each mitigation should be tracked with the project's issue/bug tracking system.
- It is recommended to have developers investigate the design of each mitigation, then hand it off to test, and finally close the item when the design has been correctly implemented and tested.
- Tags can be used to track issues identified by threat modeling.
- All trust boundaries should be evaluated for suitability for fuzz testing.
- Testers should parse the threat models carefully looking for areas where other such tests make sense.
- Testers should also focus on disconnects between specified behavior and actual behavior as this is often where security issues are found.
- Consider what assumptions are being made in the specifications, as well as in the product.
- Testers should create test cases to validate all documented assumptions since bad assumptions can lead to vulnerabilities.
- Once all identified threats and mitigations have been properly tested, there should be no remaining paths of attack.
- At this point, it is useful to re-check for any new threats that might not have been originally identified.
- Validating Dependencies and Assumptions:
- Throughout the threat modeling process, consider what assumptions are being made regarding external modules.
- A hypothetical assumption might be “HTTP.sys protects against cross-site scripting”. This assumption is actually false; HTTP.sys does not protect against cross-site scripting.
- A good way to validate assumptions is to talk with the owners of each module pertaining to the assumption, for example the owner of HTTP.sys.
- A good way to start the discussion is by asking for their threat models and to provide them with your assumptions.
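The per-element threat check described above (e.g. each data store needs at least one threat for each of T, R, I and D) can be automated against the STRIDE-per-element mapping. The example element and recorded threats are illustrative:

```python
# Sketch of a threat-coverage check: which STRIDE categories still lack a
# recorded threat for a given diagram element? The REQUIRED table follows
# the STRIDE-per-element mapping used earlier in these notes.

REQUIRED = {
    "process": {"S", "T", "R", "I", "D", "E"},
    "data store": {"T", "R", "I", "D"},
    "dataflow": {"T", "I", "D"},
    "external entity": {"S", "R"},
}

def missing_threats(element_type: str, recorded: set) -> set:
    """Return the STRIDE categories not yet covered for this element."""
    return REQUIRED[element_type] - recorded

# Example: a data store with only Tampering and Information-disclosure
# threats recorded still needs Repudiation and Denial-of-service threats.
print(sorted(missing_threats("data store", {"T", "I"})))  # -> ['D', 'R']
```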
Updating the threat model
Threat models will need to be updated for two main reasons: the application changes, or knowledge about a threat changes. Significant application changes are considered to occur when the attack surface changes, when the specifications or use-scenarios change, or when the code no longer matches the threat model diagram.
Threat modeling Document
Threat model documentation not only provides your team with a thorough understanding of external threats facing your application, but also provides a basis for penetration testing activity, code review and even end-user education. SDLC Final Security Reviews (FSRs) tend to go much more smoothly when there are good threat model documents available.
- Required Sections
- Threat model information
- Diagrams
- Threats and mitigations
- Dependencies
- Assumptions
- External security notes
- Optional Sections
- Scenarios
- Protected resources or assets
Threats and Mitigations
It is important to remember that threats are often permanent, and may continue to be present even if you have mitigated them. Documenting the threats that affect the application and how they have been mitigated will improve the threat modeling process in future revisions of the application. The threats and mitigations section consists of a series of tables with one threat and mitigation per table. Each of the tables should include the following items:
- An ID number for the threat.
- The name of the threat.
- The name of the element that is impacted and on what diagram(s) it appears.
- Description of the threat.
- The STRIDE type.
- Whether or not the threat is mitigated.
- Explanation of the mitigation. Explain what the mitigation is and why it works.
- Investigation notes. Include things you learned while investigating.
- Entry points that are impacted. This may be "see description" to reduce redundancy.
- Protected resources for which access is affected by particular mitigations. This might be in the form of "see description" to minimize redundancy.
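One way to keep these per-threat tables consistent is a record type with the fields listed above (a sketch; the example threat is made up):

```python
from dataclasses import dataclass

# One row of the threats-and-mitigations table, with the items listed above.

@dataclass
class ThreatEntry:
    threat_id: int
    name: str
    element: str            # impacted element and the diagram(s) it appears on
    description: str
    stride_type: str        # one of S, T, R, I, D, E
    mitigated: bool
    mitigation: str         # what the mitigation is and why it works
    investigation_notes: str = ""
    entry_points: str = "see description"
    protected_resources: str = "see description"

entry = ThreatEntry(1, "SQL injection on login", "Login process (diagram 1)",
                    "Attacker injects SQL via the username field.", "T",
                    True, "Parameterized queries; input validation.")
print(entry.mitigated)  # -> True
```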
Dependencies
The goal of the dependencies section is to enumerate your application's dependencies, explain why they are necessary, and to provide any security notes that were produced while selecting them. Dependencies should be listed when:
- A failure in the dependency can lead to a security flaw.
- There is a choice between dependencies. Explain how and why a specific dependency was chosen.
- Each entry in the dependencies section requires the following items:
- An ID for the item.
- The name of the dependency.
- An explanation of how and why the dependency was chosen.
Assumptions
The purpose of the assumptions section is to identify any security assumptions that have been made while gathering requirements and creating diagrams. Keeping track of security assumptions provides additional information for determining whether a threat has been successfully mitigated.
- Each entry in the assumptions section requires the following items:
- An ID for the item.
- The name of the assumption.
- A description of the assumption.
External Security Notes
External security notes are necessary for customers to know how to use an application safely. Use this field to record anything you think will help any external evaluator to understand the threat models.
- Each entry in the External Security Notes section requires the following items:
- An ID for the item.
- The name of the security note.
- The contents of the security note.
Note: This information should be propagated to the person or team in charge of creating the documentation for the application or service.
Scenarios
The scenarios section is an optional place to enumerate important security scenarios. The value of this section is to serve as a reminder to threat model each scenario's unique features and to help others, who have not had prior introduction to the application, understand its key functionality.
- Each entry in the scenarios section requires the following items:
- An ID for the scenario.
- A brief name for the scenario.
- A description of the scenario.
Note: Try to avoid referring to other documents in the scenarios unless those documents will be easily available to other teams. In such cases it is still preferred to include a brief description. Remember the scenarios section is optional and may be omitted.
Protected Resources or Assets
There are two ways in which this optional section can be used.
- Protected resources are aspects of the system whose protection is critical, such as the SAM or the TPM keys.
- Assets are things which an attacker has reason to steal or deny access to, such as a customer database, source code of your application, or network connectivity.
Each entry in the Protected Resources or Assets section requires the following items:
- An ID for the item.
- The name of the asset or resource.
- A description of how the asset or resource can be compromised or misused by an adversary.
Note: Remember the Protected Resources or Assets section is optional and may be omitted.
STRIDE
Emergency response
ENG301
- Consider the highest priority tasks
- consulting your Emergency Response or Crisis Management Plan;
- contacting the “on-call” marketing and engineering security team;
- and diagnosing and triaging the vulnerability.
- Then consider the high priority tasks
- creating and testing the fix;
- ensuring that the fix does not affect other features of the application;
- distributing and propagating the fix;
- and monitoring that the fix is working.
- Then the follow-on tasks would include
- determining what went wrong and changing the process so similar issues can be prevented in the future.
Perform Security and privacy risk assessment
Risk Management approach:
- Provides a holistic view of all your applications and their risk exposure.
- Categorizes and prioritizes the applications you need to protect.
- Helps focus your security budget, resources and activities on higher-risk items first; most bang for the buck.
- Ultimately reduces costs by guiding you to the most effective, well-targeted security activities and solutions.
The four phases of risk management:
- Asset identification
- List all the assets that the organization wants to protect.
- For app security that means listing all the apps that are deployed, both 1st party and 3rd party apps.
- Other assets could include: financial assets, reputation, IT infrastructure, etc.
- Data to collect:
- Application name, vendor, description/purpose, internal owner and contact information, date originally implemented, last major change or update, current version, number of users
- Technical specification - OS, code base size and location, development languages, deployed environment and dependencies (such as other apps and middleware)
- business areas and functions supported(e.g. web service, CRM, Enterprise-wide resource management, financial transactions)
- Data processed or stored - e.g. proprietary company info, payroll info, credit card info (PCI), PII, HIPAA, other info protected by law, other customer-sensitive data, other business-sensitive data.
- Interfaces and users - record the interfaces and the types of users to which the application is exposed, such as: internet-facing, intranet-facing, customer-facing, partner-facing, vendor-facing, internal/employee-facing.
- Application risk
- Risk rank, when it has been assigned.
- Risk analysis
- Identify and prioritize high-risk applications on which to conduct in-depth security assessments and risk analysis.
- Two important factors in risk analysis:
- Impact - What are the consequences if it happens?
- Probability - How likely is it to occur?
- Risk score = Impact x Probability
- Risk calculation
- Impact/probability matrix: Probability on the X-axis, Impact on the Y-axis; Low/Medium/High on both axes.
- Red(first): High/High, High/Medium
- Yellow(second): High/Low, Medium/Medium
- Green(third): Low/Low, Medium/Low.
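The scoring and bucketing above can be sketched in code. Note this is only an illustration: the numeric scale (Low=1, Medium=2, High=3) and treating impact/probability pairs symmetrically are assumptions, not part of any standard.

```ruby
# Assumed numeric scale for the Low/Medium/High levels.
LEVELS = { low: 1, medium: 2, high: 3 }.freeze

def risk_score(impact, probability)
  LEVELS.fetch(impact) * LEVELS.fetch(probability)
end

def risk_bucket(impact, probability)
  # Treat the pair as unordered so the matrix covers all combinations.
  pair = [impact, probability].map { |l| LEVELS.fetch(l) }.sort.reverse
  case pair
  when [3, 3], [3, 2] then :red     # High/High, High/Medium
  when [3, 1], [2, 2] then :yellow  # High/Low, Medium/Medium
  else :green                       # Medium/Low, Low/Low
  end
end

risk_score(:high, :medium)   # => 6
risk_bucket(:high, :medium)  # => :red
```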
- Well-formed risk statements:
- Impact - What is the impact to the business
- Asset - What are you trying to protect
- Personal info, PCI, authentication credentials, classified data, patented data, business-critical data ...
- Threat - What are you afraid of happening?
- Negative financial impact, loss of operations, regulatory or legal action, danger to life or health, violation of internal policies or principles, damage to reputation
- Probability - How likely is the threat given the controls?
- Vulnerability - How could the threat occur?
- Interfaces and users (see 'asset identification' for list)
- Architecture
- Is native code executed on the client device
- Where will the application be deployed?
- Does the application implement any kind of authentication or authorization? If yes, provide additional details.
- Is this application a plug-in or extension for another application? If yes, what apps will be using this plug-in?
- Is there connectivity with other applications?
- Mitigation - What is currently reducing the risk?
- Rank the risks.
- Tier 1: High
- Highly sensitive data with regulatory compliance requirements
- Long life span
- Internet facing
- Business critical functionality
- Tier 2: Medium
- Medium sensitivity data with no compliance requirements
- Medium-to-long lifespan
- Intranet facing
- Business important functionality
- Tier 3: Low
- Low sensitivity data
- Short lifespan
- No authentication or authorization required
- Low importance functionality
- Look at the policies for each risk tier, below.
- Risk mitigation
- Reduce risk -
- Avoid risk -
- Transfer risk - Transfer to a third party, e.g. letting your cloud provider handle the nework DoS attack floods.
- Accept risk -
- Risk monitoring
- Monitor the solutions over time.
- Ensure that changing conditions are taken into account.
- Ensure the risk remains manageable.
- Ensure that the chosen solution is still the best one.
- When you monitor risk, choose an easy metric. Numeric values work best.
Policies for risk tiers
- High
- Security champion
- Full security training curriculum
- Security design and coding standards
- Threat model
- Design review
- Code review(automated and manual analysis)
- Privacy review
- Deployment review
- Medium
- Security awareness training
- Security design and coding standards
- Threat model
- Code review(automated scanning)
- Penetration test(automated scanning)
- Deployment review(as appropriate)
- Low
- Penetration test(automated scanning)
Risk rank
- Gather inventory information about each application
- Define data criticality
- Measure application attack exposure
- Prioritize your resources
Perform attack surface analysis/reduction
Attack Surface Analysis is about mapping out what parts of a system need to be reviewed and tested for security vulnerabilities.
The point of Attack Surface Analysis is to understand the risk areas in an application, to make developers and security specialists aware of what parts of the application are open to attack, to find ways of minimizing this, and to notice when and how the Attack Surface changes and what this means from a risk perspective.
Attack Surface Analysis helps you to:
- identify what functions and what parts of the system you need to review/test for security vulnerabilities
- identify high risk areas of code that require defense-in-depth protection - what parts of the system that you need to defend
- identify when you have changed the attack surface and need to do some kind of threat assessment
The Attack Surface of an application is:
- the sum of all paths for data/commands into and out of the application
- the code that protects these paths (including resource connection and authentication, authorization, activity logging, data validation and encoding)
- all valuable data used in the application, including secrets and keys, intellectual property, critical business data, personal data and PII
- the code that protects these data (including encryption and checksums, access auditing, and data integrity and operational security controls).
You overlay this model with the different types of users - roles, privilege levels - that can access the system (whether authorized or not).
it is important to focus especially on the two extremes: unauthenticated, anonymous users and highly privileged admin users (e.g. database administrators, system administrators).
Group each type of attack point into buckets based on
- risk
- external-facing
- internal-facing
- purpose
- implementation
- design
- technology
Identifying and Mapping the Attack Surface
You can start building a baseline description of the Attack Surface in a picture and notes. Spend a few hours reviewing design and architecture documents from an attacker’s perspective. Read through the source code and identify different points of entry/exit:
- User interface (UI) forms and fields
- HTTP headers and cookies
- APIs
- Files
- Databases
- Other local storage
- Email or other kinds of messages
- Run-time arguments
- ...Your points of entry/exit
The total number of different attack points can easily add up into the thousands or more. To make this manageable, break the model into different types based on function, design and technology:
- Login/authentication entry points
- Admin interfaces
- Inquiries and search functions
- Data entry (CRUD) forms
- Business workflows
- Transactional interfaces/APIs
- Operational command and monitoring interfaces/APIs
- Interfaces with other applications/systems
- ...Your types
You also need to identify the valuable data (e.g. confidential, sensitive, regulated) in the application, by interviewing developers and users of the system, and again by reviewing the source code.
Addressing Attack surface
- Least privilege
- what
- Reduces the attack surface
- Limits capability after a successful attack
- how
- Using limited-user account context
- Removing write privileges for web app users
- Configuring the firewall to only allow HTTPS (or HTTP)
- Setting file permissions that prevent modification of web content files
- method
- Start with nothing
- Segment your application; this makes role-based privilege assignment easier
- Grant temporary privileges and revoke them upon completion
- Have stakeholder buy in
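One of the "how" items above, setting file permissions that prevent modification of web content, can be sketched in a few lines of Ruby. The file name and location are hypothetical; a real deployment would apply this to the web content directory.

```ruby
require 'tmpdir'

# Illustration only: make a web content file read-only so a compromised
# web-app account cannot modify it.
Dir.mktmpdir do |dir|
  content = File.join(dir, 'index.html')   # hypothetical web content file
  File.write(content, '<html></html>')

  # Remove write permission for everyone; owner keeps read access.
  File.chmod(0444, content)
  puts format('%o', File.stat(content).mode & 0777)  # => 444
end
```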
Concepts and technologies
Tools
- Vulnerability scans
- API
- Preprogrammed to detect known software vulnerabilities
- No findings does not mean your application is free of issues
- Penetration testing - you attack an application to identify and exploit its vulnerabilities.
- Penetration tests are performed by actual security experts who use both custom and off-the-shelf tools and techniques.
- Unlike vulnerability scanners, penetration testers can adapt to custom protocols and business logic.
- Static analysis tools
- Pros
- Cons
- Detect implementation flaws only
- Code review
- Pros
- Cons
- Time consuming
- Fatigue
- Different reviewers bring different areas of expertise.
Where does Fuzzing fit in?
OWASP ZAP
- Start the ZAP container: docker run -u zap -p 8080:8080 -p 8090:8090 -i owasp/zap2docker-stable zap-webswing.sh
- Start the container/app you want to scan.
- Browse to:
http://localhost:8080/zap/
- Enter the URL of your app in the Quick Start tab.
Practical processes
Design practical processes
Creating secure application architecture
DES311
Simplicity
-
Encapsulation - aka data hiding
-
Abstraction - All members and methods of a class should be hidden except ones that must be exposed to the code outside of the class for the class to be used as intended.
-
Modularization - a programming paradigm that means implementing the code as a set of reusable modules rather than as a single monolithic structure.
- e.g. use cryptographic modules
-
Layering - makes it simpler to implement and make changes to enterprise applications.
- From a security perspective, layering provides clear boundaries for implementing validation and authorization checks.
- Each layer can perform validation checks on data it receives from the other layers as a first line of defense in order to filter out exploitation attempts.
- Each layer can also perform authorization checks to make sure that the functionality that is requested from another layer is allowed.
-
Defense in Depth - is the practice of layering defenses and provides added protection to your system. Using this information security principle, you construct as many barriers as possible between the attacker and your business‐critical resources.
-
Principle of least privilege - you give your principal only those privileges that it needs to satisfy its requirements.
- A principal can be a user, an application, a network host, or some other type of entity that performs actions within your information system.
-
Compartmentalization - helps ensure that a breach of one component does not mean a breach of the entire system or network. As you access different components, you can require re‐authentication, or that data be revalidated.
-
secure by default - you should not depend on users to create a secure install. Instead, you should provide them with one. In general, many users do not change default settings, which has led to security issues in the past. Remember, your defaults are likely to become a standard that others will use.
-
fail secure - An application must fail securely on any error or exception, including out of memory, full disk, and so on. When you write exception handlers, you should also think about the state the application could be in at the time of the exception.
- What happens when the application is in a failure state
- what happens when a user enters invalid data?
- This means not only what the software should do, but also what it should not do, and which data needs to be cleaned up, including temporary files, sensitive data in memory or media, and information in memory dumps and debug logs.
-
Psychological Acceptability - security controls should be implemented in a way that does not adversely affect the user experience.
- Usability requires additional development to provide a polished interface and satisfactory user experience while addressing security challenges and issues. If a security control is unnecessarily intrusive, then it is not an appropriate security control.
-
Avoid security by obscurity
- Reverse engineering an application can expose some of its weaker areas, allowing an attacker to compromise the application more easily.
- Attackers can reconstruct source from a binary by using a number of different tools.
- Social engineering can yield useful information to an attacker that you don’t want them to have.
- Obscurity should never be considered to contribute significantly as a line of defense, but it could bolster otherwise well‐designed security. It should be approached as part of an overall strategy, not a primary method to prevent attacks.
-
Don't reinvent the wheel
- Reusing code can make development easier and improve security, while reinventing code incurs additional cost, increases development time, and is likely to introduce bugs.
- Custom implementations of existing standard algorithms and protocols may contain subtle bugs.
-
Protect the weakest link
- Generally, attackers will spend the least amount of effort to penetrate the system.
-
Reduce attack surfaces
-
Input validation
- All input should be considered untrustworthy and as having the potential to cause harm.
- Use whitelisting for data validation filtering - only pass data through if it is on your whitelist and drop everything else
- When you receive input, do not process it until after it is validated. You should validate all of your input data using the whitelisting technique, where you specify what valid data is like, rather than attempting to define bad data.
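A minimal whitelist validator sketch in Ruby. The username pattern here is an assumed example for illustration, not a universal rule.

```ruby
# Whitelist (allow-list) validation: accept only input matching an
# explicit pattern and reject everything else.
# \A and \z anchor the whole string (unlike ^ and $ in Ruby, which
# match per line and can be bypassed with embedded newlines).
USERNAME_RE = /\A[a-z][a-z0-9_]{2,19}\z/   # assumed username rule

def valid_username?(input)
  !!(input =~ USERNAME_RE)
end

valid_username?('alice_01')        # => true
valid_username?("alice'; DROP--")  # => false
```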
-
Auditing and logging
- support common logging formats and log the right information.
- make the amount of data we log configurable, to enable multiple levels of logging.
- should also be able to audit all the security‐related operations and provide reports.
- If the application crashes, we should log the state of the application, and any corresponding user input, which may help determine the specific reason for the crash.
- you may want to support logging off‐host through the platform framework (such as syslog).
- Information that should always be logged includes authentication, file reads and writes, and application starts and stops.
- This data provides forensic and compliance information.
- you must be careful when logging sensitive information, including user data, memory dumps, and transaction information.
- Auditing can be a difficult exercise, but it can help you track down bugs, identify attack patterns over time, and identify weaknesses in your application.
- failed attempts may help identify the attacker.
- If you don’t audit your logs, you remain oblivious to all your security compromises and will not be able to assess your current security issues.
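The points above (configurable level, logging authentication events, never logging secrets) can be sketched with Ruby's stdlib Logger. The event format and the `log_auth_attempt` helper are assumptions for illustration; destination and level would normally come from configuration.

```ruby
require 'logger'

# Configurable logging level and destination (stdout here; a real
# deployment might use a file or syslog).
logger = Logger.new($stdout, level: Logger::INFO)

# Log security-relevant events such as authentication attempts,
# but never log passwords or other secrets.
def log_auth_attempt(logger, user, success)
  logger.info("auth user=#{user} success=#{success}")
end

log_auth_attempt(logger, 'alice', false)
logger.debug('not emitted at INFO level')  # filtered out by the level
```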
-
Learn from your mistakes
- to avoid repeating your mistakes, you can perform a root‐cause analysis (RCA) for every discovered security bug.
- When bugs are submitted by testers or from the field, you should look at the bug and determine the vulnerability in your source code that caused the issue. Then, you can think of the test that you could have performed to find that issue. Once you’ve identified the issue, you can also find it during code review. By using code reviews and regression testing, you should be able to remove the bug so that users never see it again.
Enhancing security
Attack surface reduction
Reduction
- Turn features off that are only used by a low percentage of users.
- Off by default.
- Don't install by default
- When considering which features should be on by default, use the 80/20 rule: if 80% of your users do not use the feature, turn it off or don't install it by default.
- Make enabling it a conscious choice.
- This way, an exploit of a typically disabled feature will most likely affect a smaller percentage of your user base.
- manage how users obtain access to restricted features.
- Code
- Disable code, by default, that is only rarely used.
- Remove code that is no longer used.
- Don't give apps/code higher access privileges than required.
- Reduce entry points accessibility to untrusted users.
- Once you understand that data flow, determine the types of users who can gain access to the entry point.
- Are you allowing anonymous users or authenticated users?
- Are you allowing anyone to call your API, or is it locked down in some way?
- If it’s possible to raise the authorization bar, do so by only allowing access to certain types of users and roles.
- apply the Least Privilege principle to reduce the privileges of your running code.
- Disable unused network services
- Allow only valid traffic through the network perimeter
- Encrypt network traffic
- Reduce network visibility
- Separate network into segments
- you should disable any unused network services, especially ones that are facing untrusted networks, such as the Internet
- Any service that is not actively used should be disabled.
- If it is not possible to disable an unused network service, access to it should be blocked by other means, such as a firewall.
- Use encryption where possible
Reduce Attack Surface at the OS level
- Take a minimalist approach
- Create a spreadsheet of necessary vs. unnecessary software packages or libraries.
- Disable unnecessary software packages or libraries.
- Create a spreadsheet of all software and code libraries installed on each system.
- Determine whether each software package or library is necessary for the system to function properly and remove those that are unnecessary.
- For each remaining software package, examine features that can be disabled, and disable the features that are unnecessary.
- Strictly limit user accounts and disable or rename default accounts.
- Disable all unused accounts
- Limit user account privileges of the remaining accounts to the minimum necessary for job functions. Perform this task both at the operating system level and for each application that manages user accounts.
- Configure all access control lists to provide minimum access for the system to function properly.
- Limit access to the minimum necessary for job functions. Perform this task both at the operating system level and for each application that implements access controls.
- Establish strong password policies for the OS and all installed applications.
- Use a packet filter or firewall to restrict access and isolate the machine on the network.
- Keep the system up-to-date with the latest operating system, server, and other software patches.
- Review OS settings that can improve system security.
- Consider using a hardening guide or tool appropriate for your operating system.
- Document all configuration settings required to set up each system, with an emphasis on security-sensitive features.
- Ensure that proper system auditing and log file management is in place.
- Review all the logging options available by both applications and the operating system and configure them as appropriate for your environment.
- Avoid installing software development and debugging tools on the server.
- Install anti-virus and other security software as appropriate.
- Ensure that the server is physically secure.
Reduce attack surface of web servers
- install only the modules or services necessary for your application
- use appropriate file and directory permissions to strictly control access to web content directories,
- Ensure that the server is physically secure.
- Disable directory browsing
- review web server settings that can improve platform security.
- Remove default, demo, backup, temporary, and other directories not appropriate for a production server.
- Remove, rename, or restrict IP address access to administrative directories.
- Disable or reconfigure error reporting features so that users never see detailed error messages,
- disable or block HTTP methods not needed for your application.
- Modify server headers to not reveal server platform and version.
- Review script interpreter and application framework settings to ensure that proper limits and security settings are in place.
- Consider using a hardening guide or tool appropriate for your web server and application framework.
Reduce attack surfaces for database servers
- Remove or disable unnecessary database features or services.
- Strictly limit user accounts and disable or rename default accounts.
- Use a packet filter or firewall to tightly restrict access to database ports.
- Remove any demo, testing, training, and all other databases not necessary for the web application.
- Carefully configure user roles and permissions to strictly limit access for web application accounts.
- Never use DBA, root, or system accounts for general database access.
- Consider using a hardening guide or tool appropriate for your database platform.
- Disable stored procedures that are not required for the application.
- Ensure that the server is physically secure.
Reduce attack surfaces at the network level
- Whitelist in the firewall rather than blacklist.
- Prevent unnecessary access to internal services from untrusted networks
- A server not accessible from untrusted networks should not be discoverable from untrusted networks
- Do not expose information about your infrastructure through public records
- Domain name system records
- Information on your web sites
- Information in e-mail headers
- Any other side channels
- To reduce attack surface using network segmentation, group hosts that legitimately need to communicate with each other into segments, and then isolate the segments from each other as much as possible.
- Sometimes, hosts in one segment will need to communicate with hosts in another segment.
- In such a situation, firewalls or other controls should be used to make sure that only valid network traffic can pass from one network segment to another, in a very similar way as firewalls are used to prevent malicious traffic from untrusted networks from entering the organization’s networks.
- it is also possible to use network authentication services to only allow authenticated and authorized hosts to access networks.
- Network authentication services can also be used to only allow authenticated and authorized hosts to have any network access at all and can be used to further reduce network attack surface.
Deep packet inspection
-
Deep packet inspection looks at the contents of connections to identify the protocols being used and possibly the types of data being transmitted.
- instead of trying to block all protocols that you are not using, configure deep packet inspection to only allow the protocols that are legitimately used by your organization.
-
Identify protocols
-
Block unused protocols
-
Allow legitimate protocols
Malware scanning
- Inspect network traffic for known malware signatures
- Scan e-mail attachments
- Enable frequent and automatic signature updates
- Are often not very effective against custom targeted attacks
- Even with frequent updates, malware scanners are often not very effective against custom targeted attacks against your organization, because such attacks often employ malware and exploits that have been scrambled to avoid detection by malware scanners.
AI enabled attack recognition
- Classify network traffic to detect anomalies
- Algorithms applied to normal traffic to train an artificial intelligence
- AI used to monitor live traffic and generate alerts or block traffic
- Commonly employed to identify DoS attacks
Coding practices
Coding practices with Databases
COD141
-
Give each app its own user account
-
Each user must have a unique account.
-
Handle the default accounts, securely.
- Views and stored procedures
- Data masking
- column-level security
- row-level security
- Network isolation
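The data-masking item above can be illustrated with a small helper that reveals only the last four digits of a card number. This is a sketch: real deployments usually do masking in the database layer (views, column-level security), and the `mask_pan` name is hypothetical.

```ruby
# Data masking: show only the last four digits of a primary account
# number (PAN); everything else is replaced with '*'.
def mask_pan(pan)
  digits = pan.delete('^0-9')               # strip spaces and dashes
  '*' * (digits.length - 4) + digits[-4, 4] # keep only the last four
end

mask_pan('4111 1111 1111 1234')  # => "************1234"
```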
-
Database hardening
- Always follow best practices for account access, passwords, and permissions.
- Keep both the operating system and database application up to date with the latest security fixes.
- Disable all unnecessary or unused services.
- Remove any demo or example database tables, code, or other objects that might be installed by default.
- Disable sensitive stored procedures that may allow for command execution.
- Ensure that both the operating system and database application have proper audit controls and threat detection facilities.
-
Transparent encryption - Transparent data encryption (TDE)
- Automatically provides at-rest encryption for your data. It’s called transparent because once set up, your applications don’t have to deal with encrypting or decrypting data—the database takes care of everything.
-
Cloud databases
- Cloud databases can take advantage of robust identity and access management (IAM) features
- Cloud platforms often provide extensive facilities for managing encryption, encryption keys, and other secrets
- The virtualized nature of cloud infrastructure and easily configured virtual private networks allows for easy database isolation
- Encrypted tunnels and other secure communications are often an intrinsic part of a cloud infrastructure that extends to the database
- Cloud platforms providers work hard to ensure compliance with various regulatory security and privacy standards
-
Database Authentication and Access Control
- The primary threat to a DBMS is unauthorized access. Proper database authentication and access control is required. Things to consider are:
- How does the database platform handle authentication?
- Which authentication methods and protocols will you make available?
- Which users and roles require database access?
- What default accounts does the DBMS create?
-
Database Privileges and Limiting Data Access
-
Concealing Data and Data Security
-
Protecting Your Database through Hardening and Network Isolation
- Databases are often the primary target for network intruders. Protect the database from any unauthorized type of network access by Network Isolation and Database hardening.
-
Encryption and Cloud Databases
Securing Ruby scripting
COD266
- General
- Validate command-line parameters
- Use quotation marks correctly
- Discuss techniques for handling return codes and exceptions
- Apply umask to set default file permissions
- Canonicalize paths to identify the correct files
- Identify dangerous functions to avoid
- Apply techniques for preventing or mitigating injection vulnerabilities
- Recognize that regular expressions must be handled carefully
- Describe techniques to protect sensitive data
- Secure temporary files
- Create a separate directory and secure it with filesystem permissions. Use this directory for temporary files.
- Generate (pseudo-)random strings and use them for temporary file or directory names to assure that it is difficult for an attacker to predict the names of the temporary files.
- Remove the temporary files before the script exits.
- Add programmatic checks to the script to make sure that only the intended temporary files are deleted.
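The temp-file steps above can be sketched with Ruby's stdlib. `Dir.mktmpdir` creates a private directory (mode 0700) and removes it when the block exits; `SecureRandom` provides unpredictable names. The prefix and file contents are illustrative.

```ruby
require 'tmpdir'
require 'securerandom'

# A private scratch directory with unpredictable file names and
# automatic cleanup on exit.
Dir.mktmpdir('myscript-') do |dir|
  path = File.join(dir, "work-#{SecureRandom.hex(8)}.tmp")
  File.write(path, 'scratch data')
  # ... use the file ...
end # the directory and its contents are removed here
```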
- Canonicalizing file paths
- Canonicalization is a technical term for writing out file names and paths in full without using any links, periods, or extra slashes. If a file path is not canonicalized, then an attacker might use special characters to access resources outside of what is allowed by the application.
- Canonicalizing file paths before validating them helps prevent unauthorized access.
- Once a file name or path has been converted to its canonical form, it can be reliably validated.
- Once you canonicalize the path, you must continue to use the canonicalized value, and NOT the value originally provided by the user.
- If the script follows malicious symlinks, it might contain path traversal or privilege escalation vulnerabilities.
- By using relative paths or alternative character encodings, operating systems can address the same filesystem object via many different strings.
- While convenient, this functionality has been historically abused by attackers to gain unauthorized access to systems by passing to applications paths to objects that they should not be able to access. This vulnerability is usually called path traversal.
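A canonicalize-then-validate sketch using `File.realpath`, which resolves symlinks, `.` and `..` (and raises if the path does not exist). Using the system temp directory as `BASE_DIR` and the `safe_path` helper name are assumptions; a real app would use its own data directory.

```ruby
require 'tmpdir'

# Hypothetical allowed base directory, itself canonicalized up front.
BASE_DIR = File.realpath(Dir.tmpdir)

def safe_path(untrusted)
  # Canonicalize first; raises Errno::ENOENT for nonexistent paths.
  real = File.realpath(File.join(BASE_DIR, untrusted))
  unless real == BASE_DIR || real.start_with?(BASE_DIR + File::SEPARATOR)
    raise ArgumentError, 'path traversal attempt'
  end
  real # keep using the canonical value, not the user-supplied string
end
```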
- Format string vulnerabilities
- do not place untrusted data into format strings—use static format strings instead.
- An attacker that can control the contents of a format string might be able to cause a Denial of Service condition, information leaks and/or change the value of script variables.
- Search for code that uses formatted input-output functions; that is, for printf and sprintf functions.
- Make a spreadsheet referencing these uses including the line numbers so that this code is easy to find in later code review.
- Review each piece of this code and rewrite it to not use sprintf or printf.
- If this is not a realistic option, then at least rewrite the code to use static format strings that do not include untrusted data.
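In Ruby terms, the safe pattern looks like the sketch below: the format specification is a constant and untrusted data appears only as arguments. The `greet` helper is hypothetical.

```ruby
# Safe: static format string, untrusted data passed as an argument only.
def greet(untrusted_name)
  format('Hello, %s!', untrusted_name)
end

# Dangerous pattern to avoid (untrusted data AS the format string):
#   format(untrusted_input, something)

greet('%s%s%s')  # => "Hello, %s%s%s!" - the payload stays plain data
```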
- SQL Injection
- SQL Injection (SQLi) vulnerabilities are created whenever applications concatenate untrusted input with SQL syntax to form SQL queries.
- All input should be validated for length, range, type, and format before being processed by the application as the first line of defense.
- While strong input validation will prevent most exploit code from being used by the application altogether, input validation should be used as a defense-in-depth measure, not as a complete defense by itself.
- validate input to filter out metacharacters from untrusted data.
- to use functions that explicitly separate command names and arguments to prevent concatenation of malicious data into command structures.
- query = client.prepare("SELECT id,username FROM users WHERE username=?")
- results = query.execute(username)
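To see why concatenation is dangerous, the sketch below builds the query text only (no database involved, and the table and column names are illustrative): a classic payload turns the WHERE clause into a tautology, whereas a prepared statement like the one above keeps the payload as literal data.

```ruby
# Vulnerable pattern: untrusted input interpolated into SQL text.
def unsafe_query(username)
  "SELECT id FROM users WHERE username = '#{username}'"
end

payload = "' OR '1'='1"
unsafe_query(payload)
# => "SELECT id FROM users WHERE username = '' OR '1'='1'"
# The WHERE clause now matches every row in the table.
```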
- Command injection
- is one of the most dangerous and common vulnerability types. It is easy to find and exploit; its severity cannot be overstated.
- CWE-78
- Validating input is a defense-in-depth measure that makes it more difficult to pass exploit code to the application.
- Certain functions can be used to prevent concatenating untrusted data into names of shell commands to be executed. Using these functions is the main method for preventing command injection vulnerabilities.
- It is not enough to just use such functions—they must be used to separate untrusted data from any string that is used to determine what commands are being executed, and other arguments.
- Use built-in functions and libs instead of shell commands, where possible.
- Validate all input
- Use functions that separate command names and arguments.
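In Ruby, the multi-argument form of `system()` is one such function: it passes arguments straight to the program with no shell in between, so metacharacters stay literal data. The hostile file name below is purely illustrative.

```ruby
require 'shellwords'

user_file = 'report.txt; rm -rf /'   # hostile input, illustrative only

# Dangerous: system("ls #{user_file}") hands the whole string to a shell.
# Safer: command name and arguments are explicitly separated.
system('ls', user_file)  # looks for a file literally named "report.txt; rm -rf /"

# If building a shell string really is unavoidable, escape the data first:
Shellwords.escape(user_file)
```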
- Code injections.
- CWE-94
- Code Injection gives the attacker full control of the application.
- To prevent Code Injection vulnerabilities, validate all input and avoid using dangerous functions, such as eval().
- Do not use 'eval()'
- The danger of using these functions is that, under certain conditions, an attacker might be able to supply malicious code that will then be executed as a part of the application.
- Using encoding or escaping as an application security control is notoriously challenging.
- Preventing concatenation using functions that explicitly separate syntax and data is always superior to encoding.
- A poorly written encoding routine is better than nothing, but it is better not to concatenate data into sensitive contexts than to encode with weak routines.
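One way to avoid `eval()` on an untrusted operation name is to dispatch through an explicit allow-list, as sketched below. The operation names and the `run_operation` helper are hypothetical.

```ruby
# Explicit allow-list of operations; nothing outside it can ever run.
OPERATIONS = {
  'add'      => ->(a, b) { a + b },
  'multiply' => ->(a, b) { a * b },
}.freeze

def run_operation(name, a, b)
  op = OPERATIONS.fetch(name) { raise ArgumentError, "unknown operation: #{name}" }
  op.call(a, b)
end

run_operation('add', 2, 3)               # => 5
# run_operation('system("id")', 1, 2)    # raises ArgumentError, nothing executed
```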
- RegEx DoS
- Regular expression denial of service attacks result in the vulnerable application taking up a lot of CPU time and/or memory and potentially causing the system to become unresponsive.
- To prevent regular expression denial of service attacks caused by malformed data passed to RegEx, validate the length of data before passing it to regular expressions. It is of course better to write regular expressions that are not vulnerable to DoS, but there is no simple way to identify all such regular expressions. An expression that might not be vulnerable on one engine will be vulnerable on another.
- One clear thing to avoid, however, is grouping with repetition. Grouping with repetition looks like (regex1)regex2, where regex1 is any regular expression pattern that matches strings of variable length, and regex2 is any wildcard or repetition operator.
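A sketch of the length-check defense, using an email pattern with no nested repetition groups. The maximum length and the pattern itself are assumptions for illustration.

```ruby
MAX_INPUT = 256                          # assumed limit
SAFE_EMAIL_RE = /\A[^@\s]+@[^@\s]+\z/    # no grouping with repetition

def looks_like_email?(input)
  return false if input.length > MAX_INPUT  # reject oversized input first
  !!(input =~ SAFE_EMAIL_RE)
end

looks_like_email?('user@example.com')  # => true
looks_like_email?('a' * 10_000)        # => false, rejected by length alone
```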
- Protect sensitive data in transit
- To protect sensitive data in transit, developers should ensure that sensitive data is protected end to end during transmission by using Transport Layer Security (TLS)-encrypted communications.
- Use only the latest TLS version
- Use only known, valid certificates.
Perform security code review
ENG312
-
- identify the objectives to determine the scope and focus of the review
- Find out what areas are relevant for the code under review:
- See also: https://owasp.org/www-project-top-ten/
- Injection - Untrusted data is sent to an interpreter as part of a command or query. DBs
- Authentication - allowing assuming other users id or compromise passwords, keys or session tokens.
- Exposing sensitive data.
- Access control - Restrictions on what authenticated users are allowed to do are often not properly enforced.
- Security misconfiguration - commonly a result of insecure default configurations, incomplete or ad hoc configurations
- Cross-site scripting(XSS) - occur whenever an application includes untrusted data in a new web page without proper validation or escaping, or updates an existing web page with user-supplied data using a browser API that can create HTML or JavaScript.
- Insecure deserialization.
- Using components with known vulnerabilities - (hopefully this is addressed with Twistlock and WhiteSource)
- Insufficient logging and monitoring - Allows attackers to get further before being detected.
- identify the scope of the code review
- This identifies what is relevant for the review.
- Make sure that you know what you are looking for and what you are not looking for.
- E.g. only look for SQL injection if you are using SQL DBs.
- Consider the types of problems
- Consider vulnerability types related
- If you have a threat model then find the threats that are relevant to the code under review.
- a preliminary scan
- Finding the first vulnerabilities, using static analysis tools.
- then reviewing hotspots closely to find additional vulnerabilities.
- Update 'Attack Surfaces' list, if relevant
- Update 'Threat models' list, if relevant
- Review for common security issues
- use the code review objectives developed in step one to drive the review and help you find vulnerabilities more effectively.
- first focus on the hotspots discovered in the preliminary scan in step two.
- Look at the threat model to see if there are any code vulnerabilities.
- Use data flow analysis to trace data from source to sink in your application, and then analyze the trust boundaries.
- The starting point for data flow analysis is to enumerate all the sources of input. (This you should have found in the attack surface step)
- Input sources can include public interfaces, users, APIs, user interfaces, and trusted and shared databases. Input sources can also include open sockets, files, and pipes.
- Use control flow analysis to review code execution paths.
- Step through the logic of the code
- Review for vulnerabilities that are unique to the application’s design
- Consider what is unique about your application and what types of vulnerabilities may exist.
- If you have a threat model, you can review for unique vulnerabilities based on the documented threats as well as mitigations in the design or implementation.
- Look for custom security code, code that uses cryptography, and review your authentication and authorization models.
- Unlike other areas of the product, a functional bug here is very likely to result in a security vulnerability.
- If you’ve written code to solve a security problem, any mistake in that code is likely to cause security issues.
- Post-review activities include:
- Prioritize vulnerabilities
- Learn from mistakes
- Update your best practices as needed
- identify the scope of the code review.
- Always break your code review into manageable chunks.
- Always remember it is more effective to review code iteratively.
  - Review your code continuously, as often as possible, and make code reviews a part of the development process.
- Setting time limits so you can move on makes a security code review much more effective.
- Ensure that you review only for security during the security code review.
  - Attempting a standard functional code review simultaneously won't be as effective; you should focus on security.
- In addition to creating the list of inspection questions, identify patterns of problems that occur repeatedly in your code.
  - It is a good idea to conduct training relevant to the types of vulnerabilities that are discovered repeatedly, to make sure that all team members can effectively prevent these types of issues in the future.
- Create coding standards to address these problems, and make sure that you update them regularly.
  - Your development team can use these standards to prevent problems in the code before they occur.
Preventing XSS
COD361
You can mitigate XSS with two steps:
- validating input
- Bear in mind that there is no safe way to place untrusted data in some web application contexts, such as directly within script tags or Hypertext Markup Language (HTML) comments.
- To validate input, use whitelist validation to only allow matching data.
- Blacklist validation can only be used as a supplementary measure to detect specific evasion techniques.
- encoding output correctly for the context.
- To encode output for the correct context,
- first identify the context that untrusted data will appear in,
- then use trusted encoding libraries to convert special characters in output to their appropriate safe representations.
There are several types of XSS vulnerabilities, such as Reflected, Stored and DOM-based, but the mitigations for these types are the same – validating input and encoding data correctly for the output context in which it will appear.
One of the greatest threats affecting HTML5 applications is cross-site scripting (XSS). Cross-site scripting is a widespread type of injection attack that exploits the trust of a web application in order to harm users. In an XSS attack, an attacker injects scripts and other malicious content into pages that others can view.
This attack might occur if an application fails to filter user input and properly encode the application’s output. Using a cross-site scripting attack, an attacker can access and modify the structure, appearance, and behavior of browser content. It might also transparently redirect the user to a completely different site that contains similar, but malicious content.
With cross-site scripting, an attacker can execute client-side scripts, perform actions on a site on the client’s behalf, and access sensitive session and cookie information. An attacker can also access private client information and spy on actions that users perform on your web site. What makes cross-site scripting particularly dangerous in the context of HTML5-capable browsers is that it allows an attacker to inject HTML5 code into any web application or modify existing HTML5 code, taking advantage of HTML5’s advanced features and potentially bypassing other security measures.
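A minimal sketch of output encoding for the HTML body context, using Python's html module (render_comment is a hypothetical helper):

```python
import html

def render_comment(user_input):
    # Encode for the HTML body context: <, >, &, and quotes become
    # entities, so injected markup renders as inert text.
    return "<p>" + html.escape(user_input, quote=True) + "</p>"

payload = "<script>alert(1)</script>"
assert render_comment(payload) == "<p>&lt;script&gt;alert(1)&lt;/script&gt;</p>"
```

Note that this only covers the HTML body context; attribute, JavaScript, and URL contexts each need their own encoding rules.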
Preventing cross-site request forgery(CSRF)
COD361
Mitigating CSRF vulnerabilities is accomplished by employing one of several common programming patterns
- most notably the synchronizer token pattern
- The synchronizer token pattern means storing a unique token in the user session on the server-side and placing this token in pages displayed to the user.
- When valid requests are sent by the user, the token is included in the requests sent by the user’s browser and compared to the value stored in the server-side session.
- If the values match, that means that the request is legitimate. If the values do not match, it means that the request is not valid and should be considered a potential attack.
- One alternative to the synchronizer token pattern is the double-submit cookie pattern.
- The double-submit cookie pattern is an alternative to the synchronizer token pattern that is used for situations in which storing a token in the server-side session is for some reason impractical.
- Such situations are rare in practice.
- The pattern itself relies on storing a token in a browser cookie and sending this value in request parameters.
- The server then validates that the value in the cookie and the value in the request parameter are the same.
- If they are not the same, then the request is forged and should be considered an attack.
The tokens used in both the synchronizer pattern and the double-submit cookie pattern should be cryptographically strong so that attackers cannot guess them.
A vulnerability called cross-site request forgery (CSRF) causes a user to submit a malicious request under their own authenticated user context. With CSRF, rather than trying to steal your session token, attackers attempt to trick your browser into performing some action for them. The additional interactivity provided by HTML5 applications provides additional venues for attackers to attempt to generate requests on your behalf within the application. A cross-site request forgery attack involves an attacker directing the victim's browser to send a request to a site where the victim is currently logged in, in order to perform an action on the site on behalf of the victim. One common attack vector is to embed the exploit URL in HTML content, such as the source attribute of an image tag.
For example, an attacker could redirect your browser to a form-submission page and change the email address on your home-banking account to an address under their control. Cross-site request forgery attacks result from the stateless nature of the protocol used by web applications, Hypertext Transfer Protocol (HTTP).
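The synchronizer token pattern can be sketched as follows; the session dict and helper names are hypothetical stand-ins for a real framework's session and template machinery:

```python
import hmac
import secrets

# Server side: generate a cryptographically strong token and store it
# in the user's session (a plain dict stands in for the session here).
session = {"csrf_token": secrets.token_urlsafe(32)}

def render_form(session):
    # Embed the token in the page as a hidden field.
    return ('<input type="hidden" name="csrf_token" value="%s">'
            % session["csrf_token"])

def is_valid_request(session, submitted_token):
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(session["csrf_token"], submitted_token)

assert is_valid_request(session, session["csrf_token"]) is True
assert is_valid_request(session, secrets.token_urlsafe(32)) is False
```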
Preventing Clickjacking
COD361
To prevent clickjacking attacks, you should use sandboxing with iframes, and implement Content Security Policy (CSP) to define available features.
You should also use the X-Frame-Options HTTP header to indicate to browsers not to display your content in frames.
A clickjacking attack, sometimes called a UI redress attack, hijacks user clicks on a web page.
In clickjacking, an attacker overlays a transparent button, link, or other web element over the original button or link on a web page.
This overlay intercepts any clicks intended for the original user interface. The interactivity features provided by HTML5 allow attackers to craft stealthier clickjacking attacks. With an innocent click, a user may unknowingly change privacy or security settings, initiate a financial transaction, enable a webcam or microphone, or send a malicious post or message to spread malware.
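A sketch of the two response headers involved (the helper name is hypothetical, and the header values are a policy choice):

```python
def anti_clickjacking_headers():
    return {
        # Legacy header understood by older browsers:
        "X-Frame-Options": "DENY",
        # Modern CSP equivalent; use 'self' instead of 'none' to allow
        # same-origin framing (the SAMEORIGIN analogue).
        "Content-Security-Policy": "frame-ancestors 'none'",
    }

headers = anti_clickjacking_headers()
assert headers["X-Frame-Options"] == "DENY"
```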
Securing HTML5 data
COD363
HTML5 Form Tampering:
Securing HTML5 forms
- You should use a form ID attribute only if it is specifically required for your code, and never allow user input to include JavaScript or HTML5 form elements.
- Consider using sandboxed iframes or shadow roots to isolate user-submitted content.
- You should also design your site to create a visual distinction between the site content and user-submitted content.
- Never rely on client-side validation for security. Consider it only a user convenience.
Securing HTML5 local data storage
SOP - Same Origin Policy.
Features:
- Quotas to mitigate abuse of local storage space and bandwidth
- Same-Origin Policy (SOP) to limit file access to a single domain
- A sandboxed file system to prevent access to arbitrary local files
- Restricting files with executable extensions and not setting the execute bit (if applicable to the system)
Local storage security can be compromised by:
- web browser bugs or poor implementations
- man-in-the-middle (MitM) attacks
- Address Resolution Protocol (ARP) spoofing
- Domain Name System (DNS) spoofing
- cross-site scripting
- cross-site request forgery (CSRF) attacks
- multiple separate applications on the same domain, and local file modification

If local storage is compromised to include malicious code, that code can persist in the user's browser, and it can be difficult for the web application to detect it.
Best practices to mitigate local storage attacks include:
- Securing the entire application against possible cross-site scripting and cross-site request forgery attacks is vital.
- Use local storage only for public, non-confidential information, and never store sensitive, private, or authentication-related data in local storage.
- Use cookies with the HttpOnly and Secure flags set for session tokens; never use local storage.
- Treat all data retrieved from local storage as untrusted user input, and always validate it.
- Consider same-origin policies and use Transport Layer Security (TLS) as an additional layer of server authentication and Securing HTML5 Data.
- Don't use local storage with applications that store content from multiple users on the same domain.
- Isolate locally stored static content by using a unique domain separate from the main application.
- Regularly refresh stored content with trusted content from the server, and clear offline storage values that are no longer needed.
- Ask for user approval before storing data locally and consider providing a method for users to clear all local storage for your application.
HTML5 History API
The History API works with session history and browsing contexts. Web developers previously had the ability to navigate the user through the browser history via JavaScript.
Now, with the History API, HTML5 allows manipulation of history entries, including the ability to add new entries and overwrite existing ones. Although the History API is limited by same-origin policy, if a site is vulnerable to cross-site scripting, an attacker could potentially perform a cross-site-request-forgery attack by inserting specific URLs into the history stack. Then, with just a click of the back button, the user may unknowingly perform an action on behalf of the attacker. There is also potential for embarrassing the user by placing URLs that the user hasn’t visited into their history.
HTML5 Geolocation API
The Geolocation API allows applications to query the user’s physical location. Although any website can approximate the user’s location based on the IP address, the Geolocation API provides precise location tracking if the user agent can provide it. For example, a mobile device that contains GPS hardware can be used for location tracking. The specification enables one-time access to the current location, continuous access to the current location, and access to previously cached locations.
HTML5 Drag and drop API
HTML5 elements can be turned into draggable elements by adding the draggable=“true” attribute. Text selections, images, and hyperlinks are all draggable by default. The greatest security concern with the Drag and Drop API is the potential for users to be tricked into performing actions on behalf of an attacker. Through clickjacking or simple persuasion, an attacker might be able to get a user to reveal private information in remote iframes or inject malicious content into the current page. Always set a specific acceptable Multipurpose Internet Mail Extensions (MIME) type for the copy, move, and link values of the dropzone attribute. Use the dragenter, dragover, and dragleave events to provide the user with clear, visual feedback of drag and drop operations. Never trust the origin of dropped data, and always validate it.
HTML5 Built-in security features
COD362
HTML5 Same-Origin Policy (SOP)
Same-Origin Policy sets security boundaries based on the protocol scheme, host name, and port of origin.
- Same origin: e.g. https://example.com and https://example.com/page - same scheme, host, and port.
- Different origin:
- http://example.com - uses http, instead of https.
- https://api.example.com - uses the 'api.' sub-domain
- https://example.com:800 - uses a different port.
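A rough origin-comparison helper illustrating these rules (the function names are hypothetical; real browsers implement SOP internally):

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url):
    # An origin is the (scheme, host, port) triple; fill in the default
    # port when the URL does not name one explicitly.
    parts = urlsplit(url)
    port = parts.port or DEFAULT_PORTS.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

def same_origin(a, b):
    return origin(a) == origin(b)

assert same_origin("https://example.com/a", "https://example.com/b")
assert not same_origin("https://example.com", "http://example.com")
assert not same_origin("https://example.com", "https://api.example.com")
assert not same_origin("https://example.com", "https://example.com:800")
```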
One exception to same-origin policy (SOP) is when a page includes scripts from other origins, such as a content delivery network (CDN) or hosted library.
For example, to improve performance you might want to include, using the script tag, jQuery libraries from googleapis.com, CloudFlare, or directly from code.jquery.com.
Normally, SOP would prevent these scripts from interacting with the web page, which would not allow the scripts to perform any useful functions. For that reason, any script that you include directly from a web page will run within the same security context, as if it had come from the same origin, giving it full access to all content on the page. Because included scripts bypass all SOP restrictions, any script you include must come from a trusted and reliable source. On the other hand, scripts you include via iframes still have full isolation via SOP. These scripts cannot access anything on the originating page, although they do have full access to the content within the iframe.
Beware:
Although same-origin policy (SOP) is an important security mechanism, as we have seen with included scripts, there are some limitations. Inconsistent, limited, or buggy web browser implementations leave gaps in coverage, and the complexity and embedded nature of HTML5 content leaves much room for the future discovery of vulnerabilities. Furthermore, same-origin policies are ineffective against cross-site scripting (XSS) attacks originating from the same domain, and against Domain Name System (DNS) spoofing attacks. Finally, poor development practices can easily negate the benefits of same-origin policies.
HTML5 Content Security Policy (CSP)
COD362
CSP provides finer-grained control over web content from any origin than SOP does.
CSP, based on a least-privilege whitelist approach,
- helps detect and prevent cross-site scripting attacks
- mitigates clickjacking attacks
- helps to enforce HTTPS connections for all content.
CSP allows for setting origin policies for the
- XMLHttpRequest
- WebSocket
- EventSource
- Fetch APIs

It also allows setting iframe sandbox policies to allow or restrict form submission and scripts, and to allow or restrict treating iframe content as coming from the same origin and loading content from the containing page into an iframe.
CSP by default blocks the eval function, inline scripts, inline Cascading Style Sheets (CSS) styles, and data URIs, all of which are common cross-site scripting vectors.
You can configure it to report policy violations back to the server to identify potential attacks.
Because verbose whitelists can get complicated and therefore prone to errors, and the unsafe-inline setting can be too permissive, another option is to use a nonce or hash so that only inline scripts that include the nonce or hash will execute.
You can use the strict-dynamic setting so that trust from any script with a nonce or hash will propagate to any child scripts launched from it.
HTML5 Content Security Policy (CSP)
- Use Transport Layer Security (TLS) whenever possible to prevent tampering of CSP headers.
- Do not disable protections by using any of the allowsEval, allowsInlineScript, or allowsInlineStyle directives.
- Be as specific as possible with origins for each resource type; never use a single asterisk to match all values.
- Always provide a default source directive to ensure that all policies have a default setting if one is not specified; set the default source to none to start with the most restrictive default policy and add per-resource policies as needed.
- You should never include unsafe-inline or data: as origins because both allow cross-site scripting code to be included in the document itself.
- When using violation reports, use nonces to verify their authenticity.
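A sketch of building a nonce-based policy header along these lines (the directive choices here are illustrative, not a recommended production policy):

```python
import secrets

def csp_header():
    # Generate a fresh nonce per response; only inline scripts carrying
    # this nonce will execute under the policy.
    nonce = secrets.token_urlsafe(16)
    policy = "; ".join([
        "default-src 'none'",  # most restrictive default
        f"script-src 'nonce-{nonce}' 'strict-dynamic'",
        "img-src 'self'",
        "style-src 'self'",
    ])
    return nonce, {"Content-Security-Policy": policy}

nonce, headers = csp_header()
assert f"'nonce-{nonce}'" in headers["Content-Security-Policy"]
assert headers["Content-Security-Policy"].startswith("default-src 'none'")
```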
Cross-Origin Resource Sharing (CORS)
Cross-Origin Resource Sharing (CORS) allows the browser to make HTTP requests to access resources across different domains, which would normally be blocked by same-origin policy.
- Check the Origin, Host, and Referer headers, but be aware that headers can be spoofed, so always implement further access controls, as necessary.
- For requests that might modify data, will include cookies, or have other significance, require a nonce field to protect from Cross-Site Request Forgery (CSRF) attacks.
- Never allow GET or OPTIONS requests to modify or otherwise affect data.
- Add the Access-Control-Allow-Origin headers on a per-resource basis, rather than for the entire domain.
- Only use Access-Control-Allow-Origin: * for publicly accessible static resources that do not include sensitive information or modify data; never use the wildcard (*) with internal or intranet sites.
- Finally, when limiting access to specific origins, use a whitelist for validation; do not just echo back the submitted origin in the response headers.
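A sketch of whitelist-based origin validation (ALLOWED_ORIGINS and the helper name are hypothetical):

```python
# Validate the Origin header against a whitelist; never echo back an
# arbitrary submitted origin.
ALLOWED_ORIGINS = {"https://app.example.com", "https://admin.example.com"}

def cors_headers(request_origin):
    if request_origin in ALLOWED_ORIGINS:
        # Per-resource, specific origin -- not a wildcard.
        return {"Access-Control-Allow-Origin": request_origin,
                "Vary": "Origin"}
    # Unknown origin: send no CORS headers, so the browser blocks access.
    return {}

assert cors_headers("https://app.example.com")["Access-Control-Allow-Origin"] == "https://app.example.com"
assert cors_headers("https://evil.example") == {}
```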
IFrame best practices
- Use the sandbox attribute with iframes for untrusted content, even content that originates from your own site. Enable only the minimal number of required features for each iframe.
- Avoid using the seamless attribute for untrusted content: it not only makes the frame look as if it were part of the containing document, but also largely treats the content as if it were part of the parent document.
- Never use allow-scripts together with allow-same-origin when the embedded document has the same origin as the parent document. Doing so allows the script to remove the sandbox attribute.
- Use the iframe srcdoc attribute to treat inline Hypertext Markup Language (HTML) as if it were untrusted iframe content.
- When serving content intended for use only in an iframe, use the text/html-sandboxed Multipurpose Internet Mail Extensions (MIME) type, which tells browsers not to render the content outside of an iframe.
- Note that sandboxed iframes disable scripts in embedded content by default. Many sites use JavaScript to break out of frames to prevent clickjacking attacks, but because JavaScript is no longer the recommended solution for breaking out of frames, you should use the X-Frame-Options HTTP header for content that should never appear in a frame.
scratchpad
fast overview
OWASP Top 10
- A1: 2017-Injection
- A2: 2017-Broken Authentication
- A3: 2017-Sensitive Data Exposure
- A4: 2017-XML External Entities(XXE)
- A5: 2017-Broken Access Control
- A6: 2017-Security Misconfiguration
- A7: 2017-Cross-Site Scripting(XSS)
- A8: 2017-Insecure Deserialization
- A9: 2017-Using Components with Known Vulnerabilities
- A10: 2017-Insufficient Logging and Monitoring
CWE/SANS Top 25
PA-DSS 14
- Do not retain full track data, card verification code, or PIN block data
- Protect stored cardholder data
- Provide secure authentication features
- Log payment application activity
- Develop secure payment applications
- Protect wireless transmissions
- Test payment applications to address vulnerabilities and maintain payment application updates
- Facilitate secure network implementation
- Do not allow cardholder data to be stored on a server connected to the internet
- Facilitate secure remote access to payment applications
- Encrypt sensitive traffic over public networks
- Encrypt all non-console administrative access
- Maintain a PA-DSS implementation guide for customers, resellers, and integrators
- Assign PA-DSS responsibilities for personnel and maintain training programs for personnel, customers, resellers, and integrators
CSA top 10 big security and privacy challenges
- Secure computations
- Security best practices
- Secure data storage and transaction logs
- Endpoint input validation/filtering
- Real-time security monitoring
- Scalable and composable privacy-preserving data mining and analytics
- Cryptographically enforced data-centric security
- Granular access control
- Granular audits
- Data provenance
What is this? (These look like the OWASP ASVS verification requirement categories.)
- Architecture, design, and threat modeling
- Authentication verification requirements
- Session management verification requirements
- Access control verification requirements
- Malicious input handling verification requirements
- Output encoding/escaping
- Cryptography at rest verification requirements
- Error handling and logging verification requirements
- Data protection verification requirements
- Communication security verification requirements
- HTTP security configuration verification requirements
- Security configuration verification requirements
- Malicious controls verification requirements
- Internal security verification requirements
- Business logic verification requirements
- Files and resources verification requirements
- Mobile verification requirements
- Web services verification requirements
- Configuration
BSIMM
- Governance
- Strategy and metrics
- Compliance and policy
- Training
- Intelligence
- Attack models
- Security features and design
- Standards and requirements
- SSDL Touchpoints
- Architecture analysis
- Code review
- Security testing
- Deployment
- Penetration testing
- Software environment
- Configuration management
- Vulnerability management
Security best practices
- Input validation
  - Document the data flow
  - Keep it centralized
  - Keep it balanced: security vs. user experience (e.g. don't reject input if you could handle it automatically, like removing extra spaces)
  - Use a library
  - Order of validation:
    - canonicalization
    - whitelist validation
    - encoding
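The validation order above (canonicalize, then whitelist) might be sketched like this; the username rules are hypothetical:

```python
import re
import unicodedata

USERNAME_RE = re.compile(r"[a-z0-9_]{3,16}")  # hypothetical whitelist

def validate_username(raw):
    # Order of validation: canonicalize first, then whitelist-match.
    canonical = unicodedata.normalize("NFKC", raw).strip().lower()
    if not USERNAME_RE.fullmatch(canonical):
        raise ValueError("invalid username")
    return canonical

assert validate_username("  Alice_01 ") == "alice_01"
```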
- Least privilege
  - What it does:
    - Reduces the attack surface
    - Limits capability after a successful attack
  - How:
    - Using a limited-user account context
    - Removing write privileges for web app users
    - Configuring the firewall to only allow HTTPS (or HTTP)
    - Setting file permissions that prevent modification of web content files
  - Method:
    - Start with nothing
    - Segment your application (e.g. by role)
    - Grant temporary privileges and revoke them upon completion
    - Have stakeholder buy-in
- Defense in depth, e.g.:
  - Implement a web application firewall
  - Implement web server and other platform protections
  - Harden the server's OS
  - Properly validate all application input
  - Set database constraints to ensure proper data formats
  - Create audit logs to track application operations
- Cryptography
  - In transit
  - At rest
  - Integrity
  - Confidentiality
- Install anti-virus and other security software as appropriate
- Consider using a hardening guide or tool appropriate for your app/OS
- Ensure that the servers are physically secure
- Disable or rename default accounts
- Establish strong password policies
- Keep the app/system up to date
- Ensure that proper auditing and log file management is in place
- Set proper file and directory access rights
- Regularly audit the full system configuration
- Use software to perform regular vulnerability scans
- Manage system configuration settings with version control software
- Deploy intrusion detection systems to identify any overlooked misconfigurations
- Monitor search engines to identify possible information leaks
- Utilize log analysis or event management software to identify unusual system activity
Logging for:
- Recording security incidents and policy violations
- Maintaining evidence for legal proceedings
- Gathering information on app errors.
- Detecting and alerting to possible intrusions
- Measuring application performance
- Maintaining an audit log for investigation and forensics
-
Ensure proper log content
- Sanitize output
- Invalid characters, neutralize executable content, markup, excessive length.
- Ensure integrity
- Least privilege
- RO archives
- Checksums and signatures
- transmit with encryption
- Maintain credibility (for legal purposes)
- Establish a process
- synchronize time
- correct time zones
- verify logging
- Analyze logs
- correlate
- use SEIM - Security Event and Incident Management (system/solution)
- Sanitize output
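The output-sanitization step can be sketched as follows (the length cap and helper name are assumptions):

```python
MAX_FIELD_LEN = 200  # assumed cap on logged user-supplied fields

def sanitize_for_log(value):
    # Neutralize CR/LF so attackers cannot forge extra log lines,
    # replace other non-printable characters, and bound the length.
    cleaned = "".join(ch if ch.isprintable() else " " for ch in value)
    return cleaned[:MAX_FIELD_LEN]

assert sanitize_for_log("user\r\nFORGED ENTRY") == "user  FORGED ENTRY"
assert len(sanitize_for_log("x" * 1000)) == 200
```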
Cryptography
https://www.thesslstore.com/blog/difference-encryption-hashing-salting/
- Encoding - makes it easier to store, transmit, or read binary data.
  - Does not provide information security
  - Only provides a textual representation of binary information (e.g. Base64)
  - Provides only trivial obfuscation
- Encryption - protects the confidentiality of information.
  - Does not detect availability loss, since no receipt is provided
- Hashing - allows you to verify the integrity of information, i.e. that it is unaltered.
  - Takes a variable-length input and generates a unique fixed-length output, called a hash value
  - Different hashing algorithms create different hash values:
    - SHA-1 - 160 bits (Secure Hash Algorithm)
    - SHA-224 - 224 bits
    - SHA-256 - 256 bits (SHA-224 through SHA-512 make up the SHA-2 family)
    - SHA-384 - 384 bits
    - SHA-512 - 512 bits
- Digital signatures - created using both hashing and encryption. Should be used to verify the authenticity of an information source.
  - Authentic channel:
    - Integrity of the message
    - Authenticity of the sender
    - Authenticity of the message
  - Availability aspect:
    - Send a receipt
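The distinction between encoding and hashing can be demonstrated with Python's standard library:

```python
import base64
import hashlib

data = b"secret payload"

# Encoding is reversible and provides no security:
encoded = base64.b64encode(data)
assert base64.b64decode(encoded) == data

# Hashing is one-way and fixed-length, regardless of input size:
assert len(hashlib.sha256(b"a").digest()) == 32        # 256 bits
assert len(hashlib.sha256(b"a" * 10000).digest()) == 32
assert hashlib.sha256(b"a").digest() != hashlib.sha256(b"b").digest()
```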
There are three types of protected communication channel:
- Authentic - tamper resistant
- Confidential - resistant to disclosure
  - HTTPS - only confidential, not secure
- Secure - resistant to both tampering and disclosure

Tampering is an attack against integrity, authenticity, or availability.
Hashing
Salting a hash
https://www.thesslstore.com/blog/difference-encryption-hashing-salting/
Salting is a concept that typically pertains to password hashing. Essentially, it’s a unique value that can be added to the end of the password to create a different hash value.
The idea is that by adding a salt to the end of a password and then hashing it, you’ve essentially complicated the password cracking process.
- Say the password I want to salt looks like this:
- 7X57CKG72JVNSSS9
- Your salt is just the word SALT
- Before hashing, you add SALT to the end of the data. So, it would look like this:
- 7X57CKG72JVNSSS9SALT
Now, if a brute-force attacker knows your salt, it's essentially worthless: they can just add it to the end of every password variation they're attempting and eventually find it. That is why the salt for each password should be different - to protect against rainbow table attacks.
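A toy version of the appended-salt scheme described above; note that real systems should use a random per-password salt and a deliberately slow key-derivation function such as PBKDF2, bcrypt, scrypt, or Argon2:

```python
import hashlib
import os

def naive_salted_hash(password, salt):
    # The scheme described above: append the salt, then hash.
    return hashlib.sha256((password + salt).encode()).hexdigest()

# Different salts make identical passwords hash differently, which
# defeats precomputed rainbow tables:
assert (naive_salted_hash("7X57CKG72JVNSSS9", "SALT1")
        != naive_salted_hash("7X57CKG72JVNSSS9", "SALT2"))

# Better practice: random per-password salt plus a slow KDF (PBKDF2):
salt = os.urandom(16)
dk = hashlib.pbkdf2_hmac("sha256", b"7X57CKG72JVNSSS9", salt, 100_000)
assert len(dk) == 32
```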
Whole disk encryption
- The disk is effectively not protected while the machine is in sleep mode (the encryption key stays in memory)
- Keep passwords separate from the machine
- Keep key tokens separate from the machine
- Still encrypt sensitive files on storage, so they can't be sent off the machine and thereby compromised
Database encryption
- HW-level disk encryption
- Whole-disk encryption
- DBMS-based encryption
  - Can leave data exposed to database queries
- Server-side application-based encryption
  - Can leave unencrypted information in memory
- Client-side application-based encryption
  - The server does not know the data is encrypted
  - Might create key distribution problems
- Disk or file encryption can leave backups and exported data unsecured
Ciphers
- Symmetric ciphers
  - The same secret key is used for both encryption and decryption
  - Block or stream
  - Good: faster
  - Bad: private key distribution
- Asymmetric ciphers
  - Public/private key pairs
  - Make it easy for many people to share information, each using their own key pair
- Ciphers in general
  - Longer keys are usually more secure, but take longer to encrypt/decrypt
  - With block ciphers, you want your key length to match the internal cipher block size
  - FIPS-140
  - Two different ciphers (RC4, AES) with the same key length will not produce the same strength of encryption
  - Different ciphers have different performance characteristics
  - Some ciphers are designed for parallel processing
  - HW acceleration has a vast impact on performance
- Caesar cipher - shifts each character x positions to the left or right in the alphabet
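A Caesar cipher is simple enough to sketch directly:

```python
def caesar(text, shift):
    # Shift each letter `shift` positions through the alphabet,
    # wrapping around; non-letters pass through unchanged.
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)
    return "".join(result)

assert caesar("Attack at dawn", 3) == "Dwwdfn dw gdzq"
assert caesar(caesar("Attack at dawn", 3), -3) == "Attack at dawn"
```

Decryption is just encryption with the negated shift, which also makes the cipher trivially breakable by trying all 25 shifts.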
Randomness
The rest is from DES203
Randomness applies to any number of tasks, including generating keys and making two identical messages encrypted with identical keys result in two completely different encrypted messages, or ciphertexts. These two completely different ciphertexts decrypt to the same value, or plaintext.
For cryptography to be effective, you need a source of random numbers to use as input to cryptographic processes. The numbers in a sequence of random numbers are uniform and independent. That is, they are statistically uniformly distributed, and the value of one number in the sequence has no dependence on its predecessor and no influence on its successor.
-
Entropy - is a measure of the disorder (randomness) in a system.
-
RNG - Random Number Generator.
-
PRNG - Pseudo Random Number Generator.
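OpenSSL exposes a cryptographically strong PRNG (seeded from OS entropy) on the command line; a quick sketch:

```shell
# Draw 16 cryptographically strong random bytes, hex encoded (32 hex chars).
openssl rand -hex 16
```

Each invocation produces an independent value, so two draws will (with overwhelming probability) differ.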
-
Hash - hash functions take arbitrarily-sized input and produce a shorter, fixed-length output. There are many types of hash functions used throughout computing, but, whenever we talk about hash functions in this course, we are referring to cryptographically secure hash functions.
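The fixed-length-output and avalanche properties can be seen directly with openssl; the inputs are illustrative:

```shell
# SHA-256 always produces a 256-bit (64 hex char) digest.
printf 'hello' | openssl dgst -sha256 | awk '{print $NF}'
# -> 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824

# A one-character change to the input completely changes the digest.
printf 'hellp' | openssl dgst -sha256 | awk '{print $NF}'
```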
-
Ciphers - perform encryption and decryption.
-
Encoders - transform information from one representational format to another; most cryptographic systems use them to convert binary information into textual representations.
Certificates
-
Create self-signed x509 certificates with Subject Alternative Names
-
How to generate a self-signed SSL certificate for an IP address
Of certs and keys
-
TODO
- I'm guessing the server certificate must contain the address the client sees in the 'from' address
- so if the server responds through a proxy, then it would be the proxy's address that should be embedded in the certificate.
- Where does the docker push/pull then look up the CA? And how?
- What about kubectl: is the CA looked up on the client or on the k8s server?
- I guess this is why the certificate can be generated on any machine
-
x509 - the standard that defines the format of public key certificates.
-
CA - Certificate Authority, which is a trusted entity that issues SSL certificates.
-
ca.crt - contains
- name of the server and subdomains
- the server's public key
- signature of the CA
-
certificate - A digital certificate certifies the ownership of a public key by the named subject of the certificate.
- Cert auth definition
- The format of these certificates is specified by the X.509 or EMV standard.
- 2.4 - pre-requisite: TLS certificates
- used as proof that the public key really belongs to the site.
- contains
- the embedded public key
- name of the website and subdomains
- signature of the CA
-
Client certificate - used by the server to verify that the client is a valid client.
- Usually client certificates are created in the background
-
crt - certificate
-
CSR - Certificate Signing Request.
-
PKI - Public Key Infrastructure.
-
SSL - Secure Sockets Layer.
-
Suffixes
- .cert .cer .crt - A .pem (or rarely .der) formatted file with a different extension, one that is recognized by Windows Explorer as a certificate, which .pem is not.
- .crl - A certificate revocation list.
- Certificate Authorities produce these as a way to de-authorize certificates before expiration.
- You can sometimes download them from CA websites.
- .csr - This is a Certificate Signing Request.
- Some applications can generate these for submission to certificate-authorities.
- The actual format is PKCS10 which is defined in RFC 2986.
- It includes some/all of the key details of the requested certificate such as subject, organization, state, whatnot, as well as the public key of the certificate to get signed.
- These get signed by the CA and a certificate is returned.
- The returned certificate is the public certificate (which includes the public key but not the private key), which itself can be in a couple of formats.
- .key - This is a (usually) PEM formatted file containing just the private-key of a specific certificate and is merely a conventional name and not a standardized one.
- In Apache installs, this frequently resides in /etc/ssl/private.
- The rights on these files are very important, and some programs will refuse to load these certificates if they are set wrong.
- .pem - Privacy Enhanced Mail; a container format that may include just the public certificate, or an entire certificate chain including public key, private key, and root certificates.
- Defined in RFC 1422 (part of a series from 1421 through 1424). With Apache installs it often contains just the public certificate (CA certificate files in /etc/ssl/certs).
- Confusingly, it may also encode a CSR, as the PKCS10 format can be translated into PEM.
- The name is from Privacy Enhanced Mail (PEM), a failed method for secure email, but the container format it used lives on as a base64 translation of the x509 ASN.1 keys.
- .pkcs12 .pfx .p12 - This is a password-protected container format that contains both public and private certificate pairs.
- Originally defined by RSA in the Public-Key Cryptography Standards (abbreviated PKCS), the "12" variant was originally enhanced by Microsoft, and later submitted as RFC 7292.
- Unlike .pem files, this container is fully encrypted.
- Openssl can turn this into a .pem file with both public and private keys: openssl pkcs12 -in file-to-convert.p12 -out converted-file.pem -nodes
-
TLS - Transport Layer Security.
-
TLS Certificates
-
openssl x509 -text -noout -in ~/.k8s_certs/minkube_isrock/ca.crt
- Dump certificate
-
openssl req -text -noout -verify -in ca.csr
- dump certificate Signing request
The Most Common OpenSSL Commands
- TLS Basics
- Certificate authority
- Key ceremony
- Public key certificate
- Public key infrastructure
- X.509
Generate certs and keys
-
Create the root certificate
- openssl genrsa -out ca.key 2048
- openssl req -x509 -new -key ca.key -out ca.crt -days 10000
- -x509 - Output an x509 structure instead of a certificate request
- I think this is the root certificate
-
Create the server certificate
- openssl genrsa -out server_cert.key 2048
- Generate the private key
- openssl req -new -key server_cert.key -out cert.csr
- Generate the certificate signing request
- does this also generate a public key?
- openssl x509 -req -in cert.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out cert.crt -days 100
- This is the creation of the server certificate, this is being done by the CA
Flow
(Nan20, 2.4)
-
The process of getting a certificate
- create a key-pair
- generate Certificate Signing Request(CSR), with the public key inside
- Send the CSR to the Certificate Authority(CA)
- CA validates the information
- CA signs and issues a certificate to you
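The steps above can be sketched with openssl; a self-signing step stands in for the CA's validation and signing, and all names are illustrative:

```shell
# 1. Create a key pair (the private key file also yields the public key).
openssl genrsa -out server.key 2048

# 2. Generate a CSR carrying the public key and identity details.
openssl req -new -key server.key -subj '/CN=demo.example.local' -out server.csr

# 3-5. A real CA would validate the CSR and return a signed certificate;
# here we self-sign to complete the sketch.
openssl x509 -req -in server.csr -signkey server.key -days 30 -out server.crt

# Inspect the subject of the issued certificate.
openssl x509 -noout -subject -in server.crt
```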
-
Client Connects to server
-
CA stuff?
- the server sends the server certificate (signed with the root certificate)
- optionally the server certificate can also have IP addresses or server names in it
-
Symmetric key stuff
- The server generates an asymmetric key pair and sends the public part to the client
- The client generates a symmetric key
- the client encrypts the symmetric key using the server's public key
- the client sends the encrypted symmetric key
- The server decrypts the symmetric key package using the private key of its asymmetric key pair
-
All communication between client and server will now use the shared symmetric key.
What are certificates used for
2.4 - Pre-requisite: TLS certificates
- client: initiating the request
- server: responding
Generate certificates
- login to the haproxy host
- mkdir ha_proxy_certs
- cd ha_proxy_certs
- vi haproxy_cert.cnf
- openssl req -newkey rsa:2048 -keyout haproxy_cert.key -out haproxy_cert.csr -config haproxy_cert.cnf -nodes
- Generate the key and the csr
- run on the host
- openssl x509 -req -extensions v3_req -extfile haproxy_cert.cnf -in haproxy_cert.csr -CA ~/.minikube/certs/ca.pem -CAkey ~/.minikube/certs/ca-key.pem -CAcreateserial -out haproxy_cert.crt -days 100
- This is the creation of the server certificate, this is being done by the CA
- The -extensions v3_req -extfile haproxy_cert.cnf options are required on the CA side to copy the 'subjectAltName' from the .csr to the .crt
- cat haproxy_cert.crt haproxy_cert.key ~/.minikube/ca.crt > haproxy_cert.pem
- chmod 600 haproxy_cert.pem
- sudo cp haproxy_cert.pem /etc/ssl/certs/
[req]
default_bits = 2048
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no
[req_distinguished_name]
countryName = VX
stateOrProvinceName = N/A
localityName = N/A
organizationName = Self-signed certificate
commonName = admin
[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
IP.1 = 192.168.1.144
Create a root certificate
The root certificate also seems to contain a public key.
Steps:
- mkdir ~/ca_dir
- cd ~/ca_dir
openssl genrsa -out rootCAKey.pem 2048
- Generates a private key.
- genrsa - generate an RSA private key
- -out - output filename
- 2048 - how many bits
- why does it only generate a private key?
- the RSA private key file contains the modulus and exponents, so the public key can always be derived from it
openssl rsa -in rootCAKey.pem -pubout
- show the public key.
openssl req -x509 -sha256 -new -nodes -key rootCAKey.pem -days 3650 -out rootCACert.pem
- req - used for creating and processing certificate requests in PKCS#10 format.
- It can also create self-signed certificates.
- -x509 - tells OpenSSL to create a self-signed certificate instead of a certificate request.
- -sha256 - widely used algorithm.
- -new - creates a new certificate request.
- -nodes - prevents the encryption of the output key.
- Without this option, you would be prompted to enter a passphrase for securing the private key.
- -key rootCAKey.pem - specifies the file to read the private key from.
- -days 3650 - sets the length of time for which the certificate is valid.
- -out rootCACert.pem - specifies the output filename.
- req - used for creating and processing certificate requests in PKCS#10 format.
openssl x509 -text -in rootCACert.pem
- look for 'CA:TRUE' to make sure it is a Certificate Authority
- x509 - displaying and managing X.509 certificate
- -in rootCACert.pem - specifies the input file
- -text - tells OpenSSL to output the certificate in text form
Create a certificate for a server on an IP address
These steps generate a certificate request for a server. The request then needs to be signed by a CA
- mkdir -p ~/req_cert_dir
- cd ~/req_cert_dir
openssl genrsa -out registryServerKey.pem 2048
- genrsa - generate an RSA private key
- -out - output filename
- 2048 - how many bits
openssl req -new -extensions v3_req -config registryServerCertReq.config -key registryServerKey.pem -out registryServerCertReq.csr -sha256
- req: This is a subcommand of OpenSSL used for creating and processing certificate requests in PKCS#10 format.
- -new: This option creates a new certificate request.
- The -extensions v3_req option, together with the -config file, is what gets the 'subjectAltName' section included in the CSR.
- -key registryServerKey.pem: This option specifies the file to read the private key from.
- -sha256: This option sets the hash algorithm used to sign the certificate request.
- -out registryServerCertReq.csr: The -out option specifies the output filename.
- -config registryServerCertReq.config: This option specifies the configuration file to use.
- This file can contain additional options and values to be used when creating the CSR.
openssl req -in registryServerCertReq.csr -noout -text
- dump the certificate signing request to verify its contents
registryServerCertReq.config
[req]
default_bits = 2048
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no
[req_distinguished_name]
countryName = VX
stateOrProvinceName = N/A
localityName = N/A
organizationName = Self-signed certificate
commonName = admin
[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
IP.1 = 192.168.10.96
- distinguished_name = req_distinguished_name - This specifies the section containing the distinguished name fields to be used in the certificate.
- x509_extensions = v3_req - This specifies the section containing X.509 v3 extensions to be used in the certificate.
- prompt = no - This disables interactive prompting for the distinguished name fields
failure:
openssl req -new -extensions v3_req -config registryServerCertReq.config -key registryServerKey.pem -out registryServerCertReq.csr -sha256
Error: No objects specified in config file
Error making certificate request
Problem: distinguished_nae = req_distinguished_name
distinguished_nae should be distinguished_name
Sign server certificate with CA
-
mkdir -p ~/sign_cert_dir
-
cd ~/sign_cert_dir
-
vi v3.ext
-
openssl x509 -req -sha256 -in ~/req_cert_dir/registryServerCertReq.csr -CA ~/ca_dir/rootCACert.pem -CAkey ~/ca_dir/rootCAKey.pem -CAcreateserial -out registryServerCert.pem -days 365 -extensions v3_req -extfile v3.ext
-
openssl x509 -in registryServerCert.pem -text
output:
Certificate request self-signature ok
subject=C = VX, ST = N/A, L = N/A, O = Self-signed certificate, CN = admin
v3.ext
[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
IP.1 = 192.168.1.196
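A compact end-to-end run of the CA and signing steps above, followed by a chain verification with openssl verify; all names and subjects are illustrative:

```shell
# Root CA key and self-signed CA certificate.
openssl genrsa -out rootCAKey.pem 2048
openssl req -x509 -sha256 -new -nodes -key rootCAKey.pem -days 3650 \
    -subj '/CN=demo-root-ca' -out rootCACert.pem

# Server key and CSR.
openssl genrsa -out serverKey.pem 2048
openssl req -new -key serverKey.pem -subj '/CN=demo-server' -out server.csr

# CA signs the CSR, producing the server certificate.
openssl x509 -req -sha256 -in server.csr -CA rootCACert.pem \
    -CAkey rootCAKey.pem -CAcreateserial -out serverCert.pem -days 365

# Verify the server certificate chains to the root CA.
openssl verify -CAfile rootCACert.pem serverCert.pem
```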
create a server certificate and sign it with the CA
- mkdir -p ~/req_cert_dir
- cd ~/req_cert_dir
- vi registryServerCertReq.config
- openssl req -newkey rsa:2048 -keyout registryServerKey.key -out registryServerCert.csr -config registryServerCertReq.config -nodes
- Generate the key and the csr
- run on the host
- openssl x509 -req -extensions v3_req -extfile registryServerCertReq.config -in registryServerCert.csr -CA ~/.minikube/certs/ca.pem -CAkey ~/.minikube/certs/ca-key.pem -CAcreateserial -out registryServerCert.crt -days 100
- This is the creation of the server certificate, this is being done by the CA
- The -extensions v3_req -extfile registryServerCertReq.config options are required on the CA side to copy the 'subjectAltName' from the .csr to the .crt
DES203 on certificates
From DES203
Public key distribution presents a challenge because it requires a means of distributing public keys that provides a high degree of trust in the validity of the origin of the public key.
The two most common means of distributing verified public keys are out-of-band distribution of public key fingerprints, and digital certificates.
A public key fingerprint is a short sequence of characters used to authenticate a public key. It is usually a hash of the public key and associated identity information and is created by the key owner.
The fingerprint is distributed by out-of-band methods; that is, it is distributed by a method separate and distinct from the key distribution method.
Asymmetric key-based cryptographic solutions are often used to provide e-mail security. The sender of an e-mail uploads their public key to one of several widely recognized public key servers. They will then include a fingerprint of their public key in the e-mail’s signature, so anyone wanting to verify their public key has its fingerprint readily available.
If the recipient does not trust that fingerprint, they can use other means to verify it, such as a phone call or instant messaging (IM). These other means are considered out-of-band verification methods – they are separate from the distribution method of the key and are independent of the key itself.
A digital certificate is an electronic document used to prove ownership of a public and private key pair. The certificate is usually created by a trusted third party based on information submitted by the key owner.
The information contained within the digital certificate typically includes: the public key, the owner’s verified identity, the issuer’s identity, an expiration date, and an issuer-created digital signature that validates the integrity of the certificate and the authenticity of its issuer.
The distribution method is usually context-specific. For example, the main certificates of widely recognized issuers are often included with an operating system and/or browser.
-
Registrars are companies that verify identities and sign digital certificates.
-
Certificate Signing Request (CSR)
Process for generating a certificate:
- Generate private/public key pair
- Begin CSR process
- Provide public key
- Details such as organization, country, etc.
- For companies: always use an e-mail address that is monitored and will not be removed. This is where you will receive expiration date information.
Types of certificates:
-
Single server: tied to a single fully qualified domain name, such as
www.example.com
- NOT SSL.
-
Unified Communication(UC)/Subject Alternative Name(SAN)
- tied to multiple fully qualified domain names within a single domain name, such as www.example.com, www2.example.com, and secure.example.com. A registrar typically limits the number of names attached to a single certificate to between 20 and 30.
-
Wildcard:
- tied to a base fully qualified domain name. For example, *.example.com would match all names that end in example.com.
-
EV - Extended Validation
- a variation on single server, Unified Communications, and wildcard certificates,
- where "extended validation" indicates that the registrar has done extensive verification of the identity of the certificate owner and warrants that the certificate guarantees that you are accessing a resource (such as a website) under the control of that owner.
-
Personal
- asserts the owner’s identity.
- There are many types and uses for personal certificates.
- The two most common ones are for e-mail signing and encrypting, and for smart cards such as employee badges, national identity cards, and credit cards.
-
Software-Signing
- used to digitally sign software. These are usually issued by operating system vendors, such as Apple and Microsoft.
-
Hardware Identity:
- installed on electronic devices by the manufacturer.
- They usually contain make, model, and serial number information.
- They may also contain other manufacturer-specific information, such as place and date of manufacture, and device-specific information, such as the device’s media access control (MAC) address.
- As these certificates are signed by the manufacturer, they are effective against counterfeiting.
-
Certificate signing:
- allows the holder to issue and sign certificates within a given scope.
- For example, if the scope on the signing certificate is example.com, example.net, example.org, and example.info, then the signing certificate can be used to create and sign certificates for any of these domains.
-
Root:
- a special signing certificate that allows its holder to sign certificates for any scope.
- Although anyone can create a root certificate, for the certificate to be considered valid, it must be installed on all systems that need to recognize it.
- Root certificates are installed by operating system and browser vendors, who rigorously validate and tightly control which root certificates they accept and install.
-
Self-Signed
- is signed by its creator; that is, it does not have a recognized root certificate as its signer.
- Self-signed certificates are useful in small organizations that want to avoid the expense and complexity of establishing an in-house facility to issue and manage unimportant certificates, such as those used for software testing or internal host identification.
-
PKI - Public Key Infrastructure.
- supports issuance, maintenance, and revocation of digital certificates.
- PKI has many components, including people, processes, and technology.
- has some significant security flaws. It is built on trust, and you can count on that trust being violated.
- Despite its flaws, there is no replacement for our current PKI.
- Security flaws
- User failure
- Software bugs
- Protocol attacks
- Compromise of root certificates
-
CA - Certificate authority.
- creates digital certificates
- maintains a database of all certificates it has issued along with the status of those certificates.
- The status is listed as either "good" or "revoked".
- The revoked status includes expired certificates, certificates revoked due to a revocation request, and any other status that would indicate an invalid certificate.
-
RA - Registration Authority.
- validates the identity of the certificate requester.
- requests certificates from the CA and distributes them to the requester.
- processes revocation requests
- informs the CA of validated requests.
Certificate revocation
- CRL - Certificate Revocation List
- OCSP - Online Certificate Status Protocol.
- OCSP recognizes three status values:
- "good" indicates that the certificate is believed to be valid;
- "revoked" indicates that the certificate is not valid;
- "unknown" indicates that the queried CA has no information regarding the certificate
- usually means that the wrong CA was queried for the certificate's status.
Message Integrity Cryptographic Functions
DES205
| | Key | Authenticity | Integrity | Non-Repudiation |
|---|---|---|---|---|
| Hash | No key | | x | |
| Message Authentication Code | Symmetric | x | x | |
| Digital signature | Asymmetric | x | x | x |
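The MAC row of the table can be sketched with openssl's HMAC support; the key and message are illustrative:

```shell
# HMAC-SHA256: only holders of the shared symmetric key can produce or
# verify the tag, giving integrity plus authenticity -- but not
# non-repudiation, since both sides hold the same key.
printf 'transfer 100 to alice' | openssl dgst -sha256 -hmac 'shared-secret' | awk '{print $NF}'
```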
Software Development Process
Microsoft SDL
Microsoft SDL in Waterfall
- Training
- Core security training
- Requirements
- Establish security requirements
- Create quality gates/bug bars
- Security and privacy risk assessment
- Design
- Establish design requirements
- Perform attack surface analysis/reduction
- Threat modeling
- Implementation
- Use approved tools
- Deprecate unsafe functions
- Static analysis
- Verification
- Perform dynamic analysis
- Perform fuzz testing
- Attack surface review
- Release
- Create an incident response plan
- Conduct final security review
- Release archive
- Response
- Execute incident response plan
Microsoft SDL-Agile
There are three categories:
- Every-Sprint Practices - Critical security practices that should be performed within every release
- Creating threat models
- performing static code analysis
- Updating threat models
- Fixing issues identified by code analysis tools
- Encoding all un-trusted web output.
- FSR
- all 'every-sprint' requirements have been completed.
- At least one requirement from each sub-bucket list has been completed
- No security bug that ranks higher than the designated sprint bug bar is open
- Bucket Practices - security practices that are completed on a regular basis and are normally spread across the project lifetime
- Creating bug bars
- Conducting attack surface reviews
- Only one requirement from each bucket per sprint
- Product teams decide what tasks to address
- No requirement can be ignored
- Verification task
- Interface Fuzzing
- file fuzz testing
- Attack surface analysis
- Binary analysis
- Design review
- Conduct a privacy review
- Review crypto design
- Assembly naming and APTCA
- User Account Control
- planning
- Create privacy support documents
- Update security response contacts
- Update network down plan
- Define/Update security bug bar
- One-Time Practices - security practices that are completed once at the start of an agile project
- Establishing security requirements
- Performing security and privacy risk assessments
- Identify privacy expert
- Identify security expert
- Using the latest compiler
Special areas of consideration include:
- Security education
- At least one security course per year
- Tooling and automation
- Threat modeling
- Fuzz testing
- Bug dense and "at-risk" code
- e.g. legacy code
- Exceptions
- Final security reviews
My notes: One-Time practices
- Core security training
- Establish security requirements
- Create bug bar
- Perform Security and privacy risk assessment
- Identify privacy expert
- Identify security expert
- Perform attack surface analysis/reduction
- Create Threat models
- Implement approved security tools
- Using the latest compiler
Attack surface analysis
See: https://owasp.org/www-project-cheat-sheets/cheatsheets/Attack_Surface_Analysis_Cheat_Sheet.html
Attack Surface Analysis is about mapping out what parts of a system need to be reviewed and tested for security vulnerabilities.
The point of Attack Surface Analysis is to understand the risk areas in an application, to make developers and security specialists aware of what parts of the application are open to attack, to find ways of minimizing this, and to notice when and how the Attack Surface changes and what this means from a risk perspective.
Attack Surface Analysis helps you to:
- identify what functions and what parts of the system you need to review/test for security vulnerabilities
- identify high risk areas of code that require defense-in-depth protection - what parts of the system that you need to defend
- identify when you have changed the attack surface and need to do some kind of threat assessment
The Attack Surface of an application is:
- the sum of all paths for data/commands into and out of the application
- the code that protects these paths (including resource connection and authentication, authorization, activity logging, data validation and encoding)
- all valuable data used in the application, including secrets and keys, intellectual property, critical business data, personal data and PII
- the code that protects these data (including encryption and checksums, access auditing, and data integrity and operational security controls).
You overlay this model with the different types of users - roles, privilege levels - that can access the system (whether authorized or not).
It is important to focus especially on the two extremes: unauthenticated, anonymous users and highly privileged admin users (e.g. database administrators, system administrators).
Group each type of attack point into buckets based on
- risk
- external-facing
- internal-facing
- purpose
- implementation
- design
- technology
Identifying and Mapping the Attack Surface
You can start building a baseline description of the Attack Surface in a picture and notes. Spend a few hours reviewing design and architecture documents from an attacker’s perspective. Read through the source code and identify different points of entry/exit:
- User interface (UI) forms and fields
- HTTP headers and cookies
- APIs
- Files
- Databases
- Other local storage
- Email or other kinds of messages
- Run-time arguments
- ...Your points of entry/exit
The total number of different attack points can easily add up into the thousands or more. To make this manageable, break the model into different types based on function, design and technology:
- Login/authentication entry points
- Admin interfaces
- Inquiries and search functions
- Data entry (CRUD) forms
- Business workflows
- Transactional interfaces/APIs
- Operational command and monitoring interfaces/APIs
- Interfaces with other applications/systems
- ...Your types
You also need to identify the valuable data (e.g. confidential, sensitive, regulated) in the application, by interviewing developers and users of the system, and again by reviewing the source code.
Creating threat models
- Create design documents
- Define and Evaluate your Assets
- Create an information flow diagram
- Define Trust Boundaries
- Identify Threat Agents
- Map Threat agents to application Entry points
- Define the Impact and Probability for each threat
- Rank Risks
Define the Impact and Probability for each threat:
DREAD is about evaluating each existing vulnerability using a mathematical formula to retrieve the vulnerability's corresponding risk. The DREAD formula is divided into 5 main categories:
-
Damage - how bad would an attack be?
-
Reproducibility - how easy is it to reproduce the attack?
-
Exploitability - how much work is it to launch the attack?
-
Affected users - how many people will be impacted?
-
Discoverability - how easy is it to discover the threat?
-
DREAD formula is:
- Risk Value = (Damage + Affected users) x (Reproducibility + Exploitability + Discoverability).
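As shell arithmetic, with illustrative category scores of 1-3 each (so the maximum risk value is (3+3) x (3+3+3) = 54, matching the 01-54 ranking ranges):

```shell
# DREAD scoring sketch; the per-category scores below are made up.
damage=3; reproducibility=2; exploitability=2; affected=3; discoverability=2

risk=$(( (damage + affected) * (reproducibility + exploitability + discoverability) ))
echo "Risk value: $risk"    # (3+3) * (2+2+2) = 36
```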
Rank Risks
Using a risk matrix, rank risks from most severe to least severe based on Means, Motive & Opportunity. Below is a sample risk matrix table; depending on your risk approach you can define a different risk ranking matrix:
- Risk Value 01 to 12 → Risk Level: Notice
- Risk Value 13 to 18 → Risk Level: Low
- Risk Value 19 to 36 → Risk Level: Medium
- Risk Value 37 to 54 → Risk Level: High
SW security vs network security
- SW security
- Proprietary protocols
- Network security
- Firewalls (FW)
- Intrusion detection systems
Requirements stage
- ID security and compliance objectives
- Establish security standards
- Perform risk assessments
- Consider threat paths / risk profile
- Look into legal requirements.
Project inception
- Identify security champions
- Identify privacy champions
- Establish bug bar
Cost analysis
- risk assessment etc.
Establish security requirements
Legislation in the US
- Sarbanes Oxley(SOX): penalties for exposing or falsifying financial data.
- Gramm-Leach-Bliley Act (GLBA): protects consumers' personal financial information held by financial institutions.
- General Data Protection Regulation(GDPR): personal data protection.
- Health Insurance Portability and Accountability Act (HIPAA)
- Health Information Technology for Economic and Clinical Health Act (HITECH): requirements for security and privacy of health care info.
- Payment Card Industry Data Security Standard (PCI-DSS)
- Payment Application Data Security Standard (PA-DSS)
Ask the questions:
- What specific assets need to be protected?
- What are your compliance requirements?
- What are your quality-of-service requirements?
- What data is considered confidential?
Architecture risk analysis and remediation
Vulnerability focus vs. strategic risk-management approach
Vulnerability focus:
- Typically just automated vulnerability scanning.
- Often looks for a pre-determined set of common vulnerabilities, but fails to address unique vulnerabilities.
- Does not systematically prioritize security efforts.
- Does not provide strategic insight to improve your overall application security posture.
Risk Management approach:
- Provides a holistic view of all your applications and their risk exposure.
- Categorizes and prioritizes the applications you need to protect.
- Helps focus your security budget, resources and activities on higher-risk items first - most bang for the buck.
- Ultimately reduces costs by guiding you to the most effective, well-targeted security activities and solutions.
The four phases of risk management:
- Asset identification
- List all the assets that the organization wants to protect.
- For app security that means listing all the apps that are deployed, both 1st party and 3rd party apps.
- Other assets could include: financial assets, reputation, IT infrastructure etc.
- Data to collect:
- Application name, vendor, description/purpose, internal owner and contact information, date originally implemented, last major change or update, current version, number of users
- Technical specification - OS, code base size and location, development languages, deployed environment and dependencies (such as other apps and middleware)
- Business areas and functions supported (e.g. web service, CRM, enterprise-wide resource management, financial transactions)
- Data processed or stored - e.g. proprietary company info, payroll info, credit card info (PCI), PII, HIPAA, other info protected by law, other customer-sensitive data, other business-sensitive data.
- Interfaces and users - record the interfaces and types of users to which the application is exposed, such as: internet-facing, intranet-facing, customer-facing, partner-facing, vendor-facing, internal/employee-facing.
- Application risk
- Risk rank, when it has been assigned.
- Risk analysis
- Identify and prioritize high-risk applications on which to conduct in-depth security assessments and risk analysis.
- Two important factors in risk analysis:
- Impact - What are the consequences if it happens?
- Probability - How likely is it to occur?
- Risk score = Impact x Probability
- Risk calculation
- Impact/probability matrix: Probability on the X-axis, Impact on Y-axis. Low/Medium/High on both axis.
- Red(first): High/High, High/Medium
- Yellow(second): High/Low, Medium/Medium
- Green(third): Low/Low, Medium/Low.
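The score and matrix above can be sketched in a few lines. This is a minimal illustration, not an official formula: the numeric mapping (Low=1, Medium=2, High=3) and the band thresholds are assumptions chosen to reproduce the matrix cells listed above; cells the notes don't list (e.g. Low impact / High probability) fall out of the same thresholds.

```python
# Sketch of the impact/probability risk matrix. The Low/Medium/High
# numeric mapping and the band thresholds are illustrative assumptions.
LEVELS = {"Low": 1, "Medium": 2, "High": 3}

def risk_score(impact: str, probability: str) -> int:
    """Risk score = Impact x Probability."""
    return LEVELS[impact] * LEVELS[probability]

def risk_band(impact: str, probability: str) -> str:
    """Map a score onto the Red/Yellow/Green bands of the matrix."""
    score = risk_score(impact, probability)
    if score >= 6:      # High/High = 9, High/Medium = 6
        return "Red"
    if score >= 3:      # High/Low = 3, Medium/Medium = 4
        return "Yellow"
    return "Green"      # Low/Low = 1, Medium/Low = 2
```

A numeric score like this also supports the later advice under risk monitoring: "choose an easy metric, numeric values work best."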
- Well-formed risk statements:
- Impact - What is the impact to the business
- Asset - What are you trying to protect
- Personal info, PCI, authentication credentials, classified data, patented data, business-critical data ...
- Threat - What are you afraid of happening?
- Negative financial impact, loss of operations, regulatory or legal action, danger to life or health, violation of internal policies or principles, damage to reputation
- Probability - How likely is the threat given the controls?
- Vulnerability - How could the threat occur?
- Interfaces and users (see 'asset identification' for list)
- Architecture
- Is native code executed on the client device?
- Where will the application be deployed?
- Does the application implement any kind of authentication or authorization? If yes, provide additional details.
- Is this application a plug-in or extension for another application? If yes, what apps will be using this plug-in?
- Is there connectivity with other applications?
- Mitigation - What is currently reducing the risk?
- Rank the risks.
- Tier 1: High
- Highly sensitive data with regulatory compliance requirements
- Long life span
- Internet facing
- Business critical functionality
- Tier 2: Medium
- Medium sensitivity data with no compliance requirements
- Medium-to-long lifespan
- Intranet facing
- Business important functionality
- Tier 3: Low
- Low sensitivity data
- Short lifespan
- No authentication or authorization required
- Low importance functionality
- Look at the policies for each risk tier, below.
- Risk mitigation
- Reduce risk - Apply controls that lower the probability or the impact.
- Avoid risk - Eliminate the risky feature, asset or activity altogether.
- Transfer risk - Transfer it to a third party, e.g. letting your cloud provider handle network DoS attack floods.
- Accept risk - Acknowledge the risk and accept the potential consequences.
- Risk monitoring
- Monitor the solutions over time.
- Ensure that changing conditions are taken into account.
- Ensure the risk remains manageable.
- Ensure that the chosen solution is still the best one.
- When you monitor risk, choose an easy metric. Numeric values work best.
Policies for risk tiers
- High
- Security champion
- Full security training curriculum
- Security design and coding standards
- Threat model
- Design review
- Code review (automated and manual analysis)
- Privacy review
- Deployment review
- Medium
- Security awareness training
- Security design and coding standards
- Threat model
- Code review(automated scanning)
- Penetration test(automated scanning)
- Deployment review(as appropriate)
- Low
- Penetration test(automated scanning)
Risk rank
- Gather inventory information about each application
- Define data criticality
- Measure application attack exposure
- Prioritize your resources
TCO
Total cost of ownership (TCO):
- Patch management
- Downtime due to breach
- Exposure through software backdoors
- Loss of trust (by customers)
- Potential liabilities related to regulations.
Perimeter threats:
- Sources:
- Blackbox attacks (script kiddies)
- Researchers
- Types
- Worms and other network attacks
- Defenses
- Firewalls and intrusion detection systems
- Investigate the logs
- Track intruders when they do breach the perimeter.
Internal threats:
- Sources:
- users
- clueless/careless insider
- Social engineering
- Viruses through infected laptops
- Stolen equipment and the loss of stored sensitive data
- malicious insider
- theft
- sabotage
- espionage
- malicious code
- infected laptop brought to work
- browsing malicious sites within the company
- activating e-mail attachments.
- compromised vendors
Understanding risks: Processes:
- High password complexity may result in individuals relying on writing passwords on sticky notes to remember them.
- Laptops may contain cached sensitive information
- What forensic logs and monitoring are in place (are they being used)?
- Is there a centralized administration model (does any individual have full access to everything)?
- Who watches the watchers?
Risk remediation
- Inter-agent secure communication
- Access control
- encryption
- Identity management
- User administration
- Provision/deprovision accounts
- workflow automation
- Delegated administration
- Password synchronization
- Self-service password reset
- Access control
- Policy-based access control
- Enterprise/legacy single sign on(SSO)
- Web SSO
- Reduced sign on[amount of times you have to login?]
- Directory services
- Identity repository
- Metadata replication/synchronization
- Data-in-Transit security
- Privacy - protect against information exposure
- Integrity - protect against modification
- TLS, IPSec
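The data-in-transit bullets above (privacy and integrity, via TLS) can be sketched with Python's standard `ssl` module. This is a minimal illustration of a secure client-side TLS configuration, not a complete networking example; `make_client_context` is a name invented here.

```python
import ssl

# Sketch: a client-side TLS context that keeps certificate verification
# and hostname checking enabled, protecting data in transit against
# both exposure (privacy) and modification (integrity).
def make_client_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()            # CERT_REQUIRED + hostname check
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions
    return ctx
```

In use you would wrap a socket with `ctx.wrap_socket(sock, server_hostname="example.com")`; the key point is that the secure verification behavior comes from the default context and is never switched off.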
High availability and recovery techniques
-
Continuous availability
- No downtime
- Requires a great deal of redundancy
- Can compare results of redundant calculations to ensure integrity of results
- Much more difficult to scale; new systems must be built for new projects
- Very costly to build and maintain
-
High availability
- Minimal downtime
- Clustering used to prevent problems caused by hardware and software failures
- Clustering is a method of distributed computing where a host can run the same services from many different nodes or machines
- If one node fails, another can pick up where it failed
- Monitoring these clusters can show when one is going to fail, allowing the system to switch to a new cluster and avoid downtime
- Usually scalable for rapidly growing systems
- Requires high initial expenditure.
-
Recovery
- Data backups and archives
- Disk mirrors
- Basic recovery tools
-
Host lockdown
- bastille
- SBScan
- Titan - hardening tool for Solaris, works with other Unix-based systems
-
Security management and administration
- Technologies applicable
- Security patch analysis tools
- Update tools
- Principles involved
- Auditing and logging
- Change control
- Configuration
-
Secure remote admin
- Technologies applicable
- VPN
- Firewalls
- IDS
- Cryptography
- Principles involved
- Input validation
- Auditing and logging
- Secure the weakest link
- Defense in depth
-
Inter-agent secure communication
- Technologies applicable
- Cryptography
- Access control
- Principles involved
- Least privilege, compartmentalization
- Input validation
- Fail secure
- No security by obscurity
-
Identity management
- Technologies applicable
- Authentication methods
- Biometrics
- Secure devices or tokens
- Access control
- Principles involved
- Tunable security levels
- Compartmentalization
- Auditing and logging
-
Data-in-transit security
- Technologies applicable
- Cryptography and PKI
- VPN
- Authentication methods
- Principles involved
- Don't reinvent the wheel
- Backwards compatibility issues
- No security by obscurity
-
HA and recovery techniques
- Technologies applicable
- Security patch analysis tools
- Update tools
- Principles involved
- Fail secure
- Auditing and logging
- Secure by default
-
Host lockdown
- Technologies applicable
- Hardening and lockdown tools
- Firewalls
- IDS
- Intrusion response systems
- Virus throttling
- Principles involved
- Secure by default
- Least exposure
- Defense in depth
- Secure the weakest link
- Policy compliance
Design Stage
-
Design review - with a security advisor for any portion of the project that requires one.
-
Relying Party Suite(RPS) SDK
-
Use and follow User Account Control(UAC) best practices.
-
SDL Firewall rules and requirements
-
If the project has a Privacy Impact rating of P1, identify a compliant design based on the concepts, scenarios and rules in the Microsoft Privacy Guidelines for developing products and services.
-
SDL Security recommendations:
- Functional and design specifications
- Security architecture document
- Default attack surface/least privileges [reduce attack surfaces]
- Secure default installation
- Defense-in-depth
- Remove Outdated functionality
- Security review all sample source code
- Migration of unmanaged code to managed code [bringing legacy code up to standard]
- Keep up with security issues in the industry
- Educate developers on unsafe functions and coding patterns
- Be careful with error messages
- Ensure appropriate logging is enabled for forensics
- For online services: perform page flow integrity checking
- Integration-points security design review
- Use strong log-out and session management.
-
SDL Privacy recommendations:
- P1
- Complete detailed privacy analysis in the SDL privacy questionnaire
- Hold a design review with your privacy subject matter expert.
- P2
- compliant design based on concepts, scenarios and rules in the Microsoft Privacy Guidelines for developing products and services
- Complete detailed privacy analysis in the SDL privacy questionnaire
- Use FxCop to enforce design guidelines in managed code
-
Threat models - what threats can affect your software. Classify threats and prioritize vulnerabilities.
- Attacker centric - anticipate what an attacker might do
- Software centric - identify potential attacks against each element of the software design
- Asset centric - examine the assets managed by an application (sensitive information, intellectual property, etc.)
- Complete threat models for all functionality identified during the cost analysis phase
- Hold a design review with your privacy subject matter expert if your privacy subject matter expert has requested one, you want confirmation that the design is compliant, or you want to request an exception.
-
Application security principles:
- Attack surface reduction - turn off services that you don't use.
- Secure defaults - don't ship insecure defaults; configure to the most secure settings.
- e.g. Use encrypted instead of un-encrypted communication.
- Least privilege
- All software can and will be compromised
- Apps should be designed using the minimal set of privileges required to function correctly.
- If higher privileges are needed, elevate at that point and drop back down as soon as they are no longer needed.
- thus limit the impact of a malicious user compromising the application
- Defense in depth
- layering a series of defenses to form a more comprehensive defense posture.
- If one layer fails the others are still there.
- Compartmentalization
- Trust boundaries to isolate internal components from one another
- To access different data/areas, re-authentication or ? can be required
- breach of one component does not provide access to all other components.
- Policy compliance
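The "secure defaults" principle in the list above can be sketched as configuration whose out-of-the-box values are already the hardened ones, so weakening security requires an explicit opt-out. This is an illustrative sketch only; the class and field names are invented for the example.

```python
from dataclasses import dataclass

# Illustrative "secure defaults": the zero-configuration object is the
# hardened one, and weakening it requires an explicit override.
@dataclass
class ServiceConfig:
    encrypt_transport: bool = True     # encrypted, not plaintext, by default
    debug_endpoints: bool = False      # attack-surface reduction: off unless needed
    admin_interface: bool = False      # least exposure
    session_timeout_minutes: int = 15  # conservative session lifetime

def insecure_overrides(cfg: ServiceConfig) -> list[str]:
    """List every field the operator weakened relative to the secure default."""
    defaults = ServiceConfig()
    return [name for name in vars(defaults)
            if getattr(cfg, name) != getattr(defaults, name)]
```

A helper like `insecure_overrides` also supports the policy-compliance principle: deviations from the secure baseline are enumerable and can be logged or reviewed.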
-
Ensure that data are backed up as needed. And ensure that the data can be restored in a timely manner.
Threat models
-
Complete threat models for all functionality identified during the cost analysis phase
-
Ensure all threat models meet minimal threat model quality requirements
-
All threat models and referenced mitigations must be reviewed and approved by at least one developer, one tester and one program manager.
-
Confirm that threat model data and associated documentation are stored using a document control system.
-
The person managing the threat modeling effort should complete training beforehand.
-
Any design change requests (DCRs) should trigger an assessment of the changes to help ensure
- new threats or vulnerabilities are not introduced
- existing mitigations are not degraded.
-
Create an individual work item for each vulnerability listed in threat models
-
Follow the Microsoft NEAT security user experience guidance to improve security warnings
- https://www.microsoft.com/security/blog/2012/10/09/necessary-explained-actionable-and-tested-neat-cards/
- https://www.sadev.co.za/content/security-hard-users-so-let-us-clean-neat-spruce
- N: Necessary – Only show messages that you need. If you can take a safe action automatically or defer the message, do that!
- E: Explained – If you do interrupt the user, explain everything to the user. EVERYTHING?! Yes, and the SPRUCE acronym will help explain what everything is.
- A: Actionable – A message should only be presented to the user if there are steps the user can take to make the right decision.
- T: Tested – A security message needs to be tested. TDD, Usability Testing, Visual Inspection, every test.
- SPRUCE:
- S: Source – Why are we showing this message? Did a website do something or a file or a user action? Tell the user.
- P: Process – Give the user the steps they need to go through to make sure they make the right decision.
- R: Risk – Explain the consequences of getting the decision wrong.
- U: Unique – If your software knows everything, do the right thing automatically.
- So if you are showing the message, it means the user has unique information that is needed to make the decision.
- Explain what information is needed (slightly similar to P).
- C: Choices – Show the user all the options and recommend the safer one.
- E: Evidence – Provide any additional information that the user may need to make the decision.
-
Threat: an unwelcome event, or a person/system from which an attack can originate.
-
Threat model: The process of decomposing a system's architecture and identifying key structural elements and system assets, which are valuable resources to be protected
- system entry and out points
- data and control flows
- security mechanisms
- and potential attackers to highlight applicable risks and associated attacks against the system.
-
Bug: An implementation-level software problem that may exist in code but never be executed; relatively easy to discover and to remedy.
- Flaw: An architectural or design-level problem that can result in serious security issues and that can be much more expensive to fix than implementation-level errors.
Common benefits of threat modeling:
- Think beyond canned attacks
- Identify Top-N lists, attackers and Doomsday Scenarios.
- Doomsday scenarios express extreme situations that could threaten your organization or even cause it to go out of business.
- You should build your software to account for these situations and to avoid or mitigate them.
- Identify where threat agents exist relative to the architecture.
- including insiders.
- Identify components that need additional protection.
- Highlight assets, risks and flaws in your system's design.
- Determine which components are likely to be targeted by attackers and how they will be attacked.
- Put additional security in or remove the functionality.
- Determine whether business or security objectives can be met.
OWASP: Open web application security project.
- Risk = Probability x Impact
- Attack tree: Top-down approach for decomposing risks into detailed attacks to visualize the set of all possible scenarios enabling a given risk to be realized.
- Attack Vector: The path through the system that an attacker will use to carry out an attack.
Input to Threat Modeling
- Architecture diagrams
- Deployment/configuration guides
- Source code
- Penetration test reports
- Users guides
- Interviews; with
- Business analyst
- Tech proj lead
- Architect
- Lead developer
- Application component developer
- Build engineer
- Requirements engineer
- QA engineer
- Product support engineer
- Network specialist
- Requirements specifications
- use cases
- Actually using the running system
Step in threat modeling
- Learn as much as possible about the target of analysis
- Discuss security issues surrounding the software.
- Determine the likelihood of compromise.
- Perform impact analysis.
- Rank risks
- Develop a mitigation strategy.
- Report findings (This looks like the tool used for rating sw issues.)
Not a threat modeling methodology:
- OCTAVE: Operationally Critical Threat, Asset and Vulnerability Evaluation.
- CVSS: Common Vulnerability Scoring System. Standardized risk rating system.
- STRIDE: Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Escalation of privilege. A risk categorization scheme used at Microsoft.
- DREAD: Damage potential, Reproducibility, Exploitability, Affected users, Discoverability. Risk rating scheme by Microsoft.
Attack resistance analysis
Assess the system's ability to withstand known types of attacks:
1. Identify general categories of risks.
2. Map attack patterns to each of the identified risks, using attack or vulnerability checklists.
3. Identify architecture elements that could be affected by these attacks.
4. Determine if the controls placed around the identified elements are sufficient to thwart the corresponding attacks.
- The Seven Pernicious Kingdoms
- Input Validation and Representation
- API Abuse
- Security Features
- Time and State
- Error Handling
- Code Quality
- Encapsulation
- Environment
- http://www.datamation.com/secu/article.php/11076_3686291_6/Secure-Programming-the-Seven-Pernicious-Kingdoms.htm
- The 24 deadly sins of software security
- Web application sins
- 1. SQL injection
- 2. Web server-related vulnerabilities (XSS, XSRF, response splitting)
- 3. Web client-related vulnerabilities (XSS)
- 4. Use of Magic URLs, predictable cookies and hidden form fields
- Implementation sins
- 5. Buffer overruns
- 6. Format string problems
- 7. Integer overflows
- 8. C++ catastrophes
- 9. Catching exceptions
- 10. Command injection
- 11. Failure to handle errors correctly
- 12. Information leakage
- 13. Race conditions
- 14. Poor usability
- 15. Not updating easily
- 16. Executing code with too much privilege
- 17. Failure to protect stored data
- 18. The sins of mobile code
- Cryptographic sins
- 19. Use of weak password-based systems
- 20. Weak random numbers
- 21. Using cryptography incorrectly
- Networking sins
- 22. Failing to protect network traffic
- 23. Improper use of PKI, especially SSL
- 24. Trusting network name resolution
- OWASP top 10
- A1-Injection
- A2-Broken authentication and session management
- A3- XSS
- A4 - Insecure direct object references
- A5 - Security misconfiguration
- A6 - Sensitive data exposure
- A7 - Missing function level access control
- A8 - Cross-site request forgery (CSRF)
- A9 - Using components with known vulnerabilities
- A10 - Unvalidated redirects and forwards
- https://www.owasp.org/index.php/Top_10_2013-Top_10
- https://www.owasp.org/index.php/OWASP_Top_Ten_Cheat_Sheet
- SANS Top 25
Underlying framework weakness analysis
- Identify the underlying sw components used.
- Then ask:
- Are there known vulnerabilities in the component version being used?
- Are there any security controls provided by the framework that are insufficient for our system?
- Is the component secured by default or must it be configured?
- Can the component be configured to be secure or must some other control be in place to use it securely?
Ambiguity analysis
- trust modeling
- Trust zones
- Data sensitivity modeling
- Threat agent modeling
When to perform threat modeling
- Requirements and use cases:
- Abuse cases
- Security requirements
- Risk analysis
- Architecture and design
- Risk analysis
- secure architecture
- Test plans
- Risk-based security tests
- Code
- Code review(tools)
- Test and test results
- Risk analysis
- Penetration testing
- feedback from the field
- Penetration testing
- Security operations
Actors of the threat modeling process
- Business stakeholder
- Business goals
- use cases
- reqs
- Software architect: Informs about software spec, architecture and low-level design
- Spec
- High lvl arch
- Low-lvl design
- Security architect
- Spec
- High lvl arch
- Low-lvl design
- Threat modeling/ARA
- Sec std
- Sec arc
- Attack patterns
- Vuln code
- Dev. Lead
- Low-lvl design
- Code
- QA. Lead
- Running system
- Sec analyst
- Threat modeling/ARA
- Attack patterns
- Vuln code
- Known vuln
- Review code
- Pen testing
- Code
- Running system
Fundamentals of threat modeling
Eng206 0800
- Identify and evaluate application threats and vulnerabilities.
Focus on the overall approach
-
- Scope the model
- use scenarios
- also define what is out of scope
- Use existing documentation
- e.g. start modeling on a whiteboard
- Iterative approach: Start small then add to the model as you gain more knowledge
- also do what-if
- When you make engineering decisions, revisit the model
- Adapt the activities for your application
- Have information:
- protocols
- network set-up
- etc
- Identify security objectives
- Application overview
- Decompose your application
- Identify threats
- Identify vulnerabilities
Your document should contain:
- Security objectives
- Key scenarios
- Protected resources
- Threat list
- Vulnerability list
Test should happen against vulnerabilities
Identify Security Objectives
-
Confidentiality - Data can only be accessed by those with authorization
- Encryption
- Authentication
- Authorization
-
Integrity - The guarantee that the data is in its original state, without modification
- Checksums and hashes
- Digital signatures
- Encrypted transport
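The "checksums and hashes" mitigation above can be sketched with a keyed hash. A plain hash only detects accidental modification; an HMAC (shown here as an assumed design choice, using Python's standard `hmac` module) also stops an attacker without the key from forging a matching tag.

```python
import hashlib
import hmac

# Integrity sketch: an HMAC-SHA-256 tag over the data. Any modification
# of the data, or a tag computed with a different key, fails verification.
def tag(data: bytes, key: bytes) -> str:
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify(data: bytes, key: bytes, expected: str) -> bool:
    # compare_digest avoids timing side channels in the comparison
    return hmac.compare_digest(tag(data, key), expected)
```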
-
Availability - Users are always able to access the service
- Server uptime
- Backups
- DoS resistance
-
What do you not want to happen.
-
What to protect
-
Compliance requirements
-
Quality requirements
-
Protect intangible assets
e.g.
- Prevent attackers from obtaining data
- Meet service-level agreements
- Protect the company's credibility
Create Application overview
- Draw the end-to-end deployment scenario
- Identify roles
- Identify key usage scenarios
- Identify technologies
- Identify application security mechanisms
- Draw the end-to-end deployment scenario
- Draw servers
- connections
- protocols
- Application stacks
- Clearly mark what is internet, intranet, single server etc
- list ports
- Authentications being used
- Identify
- Application roles, duties and functions
- Identify who can do what within your application
- What can users do
- What privileged groups and roles exist?
- Who can perform sensitive functions?
- What is supposed to happen
- What is not supposed to happen
- Identify key usage scenarios
- What are the important features of your application? What does it do?
- Identify the applications main functionality and usage
- With a primary focus on the Create, Read, Update and Delete functionality
- Also look at several scenarios happening simultaneously
- Identify which scenarios are out of scope
- Identify technologies
- List the technologies and key features of the sw and platforms that you use
- Operating systems
- Web server software
- Application frameworks
- Database server software
- Development languages
- Identify application security mechanisms
- Identify any key information that you know about your application's security mechanisms
- Input and data validation
- Authentication
- Authorization
- Configuration management
- Session management
- Cryptography
- Sensitive data handling
- Parameter manipulation
- Exception management
- Auditing and logging
Decompose your application
- Trust boundaries
- indicate where trust levels change
- e.g. elevation required
- Start by defining the app outer boundaries
- Identify access control points or key places where access requires additional privileges or role membership
- Identify boundaries from a dataflow perspective
- For each subsystem, consider whether you trust the upstream data flow or use case
- If not, consider how the flow/input might be validated, authenticated and authorized.
- Example of boundaries
- Web server and db server
- firewall
- What other apps have access to the database?
- data flows
- Trace the app's data as it flows through the app
- Start at the highest level
- What data is used where
- Entry points
- Entry points are also attack points
- App entry points
- Internal entry points
- exit points
- Prioritize the exit points where your application writes data that includes client input or data from untrusted sources such as shared databases
Identify threats
- Conduct informed brainstorming activities with developers, testers, architects, security professionals and system administrators
- Identify common threats, e.g.:
- Server
- path traversal
- Denial of Service
- SQL injection
- XSS
- User
- Malware
- Brute force
- Spam
- Then identify how these could apply to your application
- Or use a goal-driven questionnaire, e.g.:
- Is the server vulnerable to identity spoofing?
- Is the data vulnerable to tampering?
- Is sensitive information exposed in error messages?
- Examine the app layer by layer, tier by tier and feature by feature
- Threats along use cases
- Examine each use case and how someone could abuse it
- How can a client inject malicious input here
- is data written based on unvalidated input
- How could an attacker manipulate session data
- How could an attacker obtain sensitive data
- How could an attacker bypass authorization checks
- How does data flow from the front end to the back end of your application
- Which component calls which component
- What does valid data look like
- How is the data constrained
- How is data validated against expected length, range, format and type
- Where is validation performed
- What sensitive data is passed between components and across networks
- How is that secured while in transit
Identify vulnerabilities
- Consider the app layer by layer
- preferably use an existing model or checklist
- STRIDE - Microsoft
- Focuses mostly on the threat end
- also useful in enumerating vulnerabilities
- CAPEC(Common Attack Pattern Enumeration and Classification)
- Comprehensive matrix of attacks
- useful for identifying vulnerabilities as well as mitigating and detecting vulnerabilities in production applications
- Dictionary of attack patterns
- ATT&CK
- Enumerates dozens of attacks under twelve major categories
- includes models for different server models
Implementation stage
- Security
- Ensure it is secure in the default installation
- Provide information on security changes when users change the configuration of the app.
- Identify information
- Establish a plan for user-facing security documentation.
- Gather comments and feedback concerning challenges users faced when securing prior versions.
- Make information about secure configurations available
- Use secure methods to access database stores
- No EXEC in stored procedures
- No XML entity resolution
- Use safe integer arithmetic
- Reflection and relay attacks
- Coding part:
- Minimal Standard Annotation Language(SAL) code annotations
- Use HeapSetInformation
- Adopt appropriate coding techniques and methodologies.
- Review and adopt recommended development tools
- Define, document and communicate best practices and policies
- Identify any long-lived pointer in application code
- Fix code flagged by /W4 compiler warnings
- Do not use global exception handlers
- Restrict database permissions
- NULL out freed memory pointer in new code
- Defend against "ClickJacking" attacks
- Use TLS encryption securely
- Privacy
- Create deployment guides on how to protect user privacy using privacy controls or features.
- Create content to train users
- Best practices
- Watch out for hard dependencies
- Minimum code generation suites and libraries
- SDL required compiler and linker flags
- Use code analysis tools
- Fix issues identified by code analysis tools for managed code.
- Do not use "ad-hoc" SQL queries.
- Enable Address Space Layout Randomization(ASLR) for unmanaged or native code.
Code analysis
- Static
- Pro:
- Easily scales to review large amounts of code
- Objective and unbiased
- Mature technology
- Easier to debug identified issues
- Cons
- Language specific
- Produces false positives and false negatives
- Detects implementation flaws only
- Cannot detect business logic flaws.
- Binary
- Pro:
- Easily scales to review large amounts of code
- More objective and unbiased than human reviewers
- More accurate than static analysis tools
- Cons
- Less mature
- Difficult to fix bugs because developers must interpret compiled code
- Might require multiple tools
- Produces false positives and false negatives
- Detects implementation flaws only
- Dynamic
Code review
-
Ensure Cached secrets cannot be disclosed.
-
Do not include private data/information in code.
- passwords
- oauth tokens
- ...
-
CWE-78 - command injections
- Invoke commands
-
CWE-94 - code injections
- Validate all input
- do not use 'eval()'
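The two CWE bullets above can be sketched together. For CWE-78, passing an argument vector keeps the shell out of the loop, so metacharacters in user input are never interpreted; for CWE-94, `ast.literal_eval` accepts only plain Python literals where `eval()` would execute arbitrary code. The function names here are invented for illustration.

```python
import ast
import subprocess

# CWE-78 sketch: invoke the command with an argv list (no shell), so
# ';', '&&', '|' etc. in user input are passed as literal text.
def run_echo(user_input: str) -> str:
    result = subprocess.run(["echo", user_input],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

# CWE-94 sketch: parse untrusted literal input with ast.literal_eval,
# which rejects anything that is not a plain literal, instead of eval().
def parse_literal(text: str):
    return ast.literal_eval(text)
```

For example, `run_echo("hi; rm -rf /")` just prints the whole string back; nothing is deleted because no shell ever parses the `;`.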
-
CWE-89 - SQL injection.
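A minimal CWE-89 mitigation sketch using the standard `sqlite3` module: the query text stays fixed and the user-supplied value travels as a bound parameter, so it can never terminate the string literal and inject SQL. The table and helper names are invented for the example.

```python
import sqlite3

# Parameterized query: the '?' placeholder binds the value, so input
# like "alice' OR '1'='1" is matched as a literal name, not as SQL.
def find_user(conn: sqlite3.Connection, name: str):
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (name,))
    return cur.fetchall()

def demo() -> sqlite3.Connection:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES ('alice')")
    return conn
```

This is also the concrete form of the later implementation-stage rule "do not use ad-hoc SQL queries": never build the statement by string concatenation.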
-
Weak permissions
-
Canonicalization - convert the path to its 'true' path
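A sketch of why canonicalization matters: resolve the path first, then check containment, so `..` segments, symlinks and mixed separators cannot smuggle the path outside the allowed base directory. `safe_join` is an illustrative helper name, not a standard API.

```python
import os

# Canonicalize-then-check: realpath resolves '..' and symlinks to the
# 'true' path before the containment test, defeating path traversal.
def safe_join(base: str, user_path: str) -> str:
    base_real = os.path.realpath(base)
    target = os.path.realpath(os.path.join(base_real, user_path))
    if os.path.commonpath([base_real, target]) != base_real:
        raise ValueError(f"path escapes base directory: {user_path!r}")
    return target
```

Checking the raw string (e.g. with `startswith`) before canonicalization would pass `"files/../../etc/passwd"`; checking after canonicalization rejects it.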
-
Privilege Escalation
- Inappropriately stored credentials
- Information Disclosure
- Information leakage
- hard-coded credentials
- Weak access controls
- Weak filesystem permissions
- Remote command execution
- Command injection
-
Denial of service in scripts
- Not cleaning up in the filesystem, e.g. after error handling
- Deleting data leading to DoS
- rm -rf "$tmpfilenameprefix"* (where the var is empty)
- Allowing attacker to delete data leading to DoS
- Resource exhaustion
- Excessive resource usage
- Disk exhaustion - script uses up all the disk space preventing other processes from writing data.
- Memory Exhaustion - can cause other processes to crash.
- CPU Exhaustion - takes up all the CPU time, which usually significantly degrades performance for the entire system.
- Socket exhaustion - all network sockets are open, so no new connections can be made.
- Bandwidth Exhaustion
- Downloading large files
- lots of log info going out?
- File descriptor exhaustion -
-
RegEx DoS
- E.g.
- validate the length of data before passing it to RegEx
- Avoid grouping with repetition
- ([0-9]+)+
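Both mitigations above can be sketched together: cap the input length before the regex ever runs, and use a pattern without nested repetition groups like `([0-9]+)+` (whose catastrophic backtracking is what makes ReDoS possible). The length limit and function name are illustrative assumptions.

```python
import re

MAX_LEN = 64                        # assumed limit; tune per field
DIGITS = re.compile(r"[0-9]+\Z")    # no nested repetition groups

# ReDoS-resistant validation: length check first, simple pattern second.
def is_digit_string(s: str) -> bool:
    if len(s) > MAX_LEN:            # reject oversized input before matching
        return False
    return DIGITS.match(s) is not None
```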
Fault injection
- Compile time
- Run time - e.g. fuzzing.
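Run-time fault injection can be sketched as a toy fuzzing harness: feed random byte strings to a target and record every input that raises an unhandled exception. Real fuzzers (coverage-guided, mutation-based) are far more sophisticated; this only illustrates the loop. `fragile_parser` is a deliberately buggy example target invented here.

```python
import random

# Toy fuzzer: random inputs in, crashing inputs out.
def fuzz(target, runs: int = 200, seed: int = 1) -> list[bytes]:
    rng = random.Random(seed)       # seeded for reproducible runs
    crashes = []
    for _ in range(runs):
        data = rng.randbytes(rng.randint(0, 8))
        try:
            target(data)
        except Exception:           # an unhandled exception is a finding
            crashes.append(data)
    return crashes

def fragile_parser(data: bytes) -> int:
    # Example bug: no empty-input check, so data[0] raises IndexError.
    return data[0]
```

Each crashing input then becomes a reproducer for a bug report, and later a vulnerability regression test.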
Security misconfiguration
- Unpatched security flaws
- Leaving default, backup or other unused files in the app
- Retaining default accounts and default passwords
- Enabling lenient file and directory permissions
- Leaving unnecessary and unused services enabled
- Keeping administrative or dev features accessible to anyone
- Revealing technical or infrastructure details in error messages.
Verification (stage)
- Security and privacy testing
- How well are Confidentiality, Integrity and Availability maintained, with regards to both the software itself and the data processed?
- Identify code vulnerabilities
- Security requirements are:
- File fuzzing on code where input that crosses a trust boundary is processed by parsing code.
- Remote Procedure Call(RPC) endpoints must be tested for security and other issues with an RPC fuzzing tool
- Define and document the security bug bar for the product.
- Final security review
- Perform Application Verifier tests
- Perform network fuzzing tests
- Perform binary analysis with BinScope tool if obfuscated binaries are shipped
- Perform penetration testing against the application
- Develop and use vulnerability regression tests
- For online services and LOB applications, conduct flow testing
- Ensure that authentication cannot be bypassed by directly connecting to backend resources
- For online services and LOB applications, conduct replay testing
- For online services and LOB applications, conduct input validation testing on scenarios and variants
- Perform secure code review.
- Privacy
- For P1 and P2 include privacy testing in your master test plan.
- Security push(improve security)
- Accomplish
- Uncover changes in code with security implications that might have occurred during development
- Improve security in legacy code
- Identify and remediate any remaining vulnerabilities.
- The goal is to find security vulnerabilities, not to fix them
- Address any security issues found in security push after security push is done.
- Secure software cannot be developed with a security push alone
- Review and update Threat models
- Review all bugs that affect security against the security bug bar
- Review and update the privacy questionnaire form
- Recommendations
- Security code reviews
- Identify development and testing owners
- Include and prioritize all sample code
- Re-evaluate the attack surface of the software
- Consider important component code reviews
- Review the security documentation plan
- Focus the entire project team on the push
- When evaluating the importance or severity rating of code:
- Critical code
- Important code
- Moderate code
- When prepping for a push
- Allocate time and resources in the schedule
- Allocate resources appropriately
- Communicate security push information and resources
- Well-defined criteria to determine completeness
- What can be done in earlier stages to keep down the time spent in the push
- Keep threat models up to date
- Perform penetration testing based on threat models.
- Accurately track and document attack surfaces and any changes to them
- Complete security code reviews for high severity code
- Identify and document development and testing contacts
- Ensure all legacy code meets current security standards
- Validate the security documentation plan
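The fuzzing activities above (file and network fuzzing of parsing code behind trust boundaries) can be sketched with a toy byte-mutation step. This is an illustrative sketch, not any SDL tool; `mutate` and its parameters are invented for the example:

```python
import random

def mutate(seed, n_flips=8, rng=None):
    """Toy fuzzing step: XOR a few random bytes of a known-good input.

    A real fuzzing campaign feeds thousands of such mutants to the
    parser under test and watches for crashes, hangs, or sanitizer hits.
    """
    rng = rng or random.Random()
    data = bytearray(seed)
    for _ in range(n_flips):
        i = rng.randrange(len(data))
        data[i] ^= rng.randrange(1, 256)  # non-zero XOR guarantees a change
    return bytes(data)
```

Each call produces one mutant of the seed; length is preserved so file-format headers stay roughly plausible while field contents get corrupted.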
Deployment phase
- Attack surface reduction
- Secure defaults
Public release privacy testing
- Review and update the Privacy companion form
- Complete the privacy disclosure
- Create talking points as suggested by the privacy advisor
- Review deployment guidance for enterprise programs and conduct a legal review of the deployment guide
- Create "quick text" for support teams to address anticipated user questions.
Planning
- How to handle zero-day exploits.
- Provide contact information for security incident response personnel
- Create and document a sustainable model for responding with immediate security patches outside service packs
- Develop a policy for security response for components released outside the regular product release schedule.
- Disable tracking and debugging in applications prior to deployment.
For Privacy
- Identify responsible personnel and add contact information
- Identify additional development and QA project team resources
- Be prepared to follow the SDL privacy escalation response framework if an incident occurs
Final security review (FSR)
- Incident response team
- Key contact information
- Privacy escalation framework
- Provide all required information
- Security advisor signs off as completed or provides a list of required changes
- A score of B or above must be attained
- Repeat privacy review efforts for any open issues identified
- Privacy advisor signs off as completed or provides a list of required changes
Pre-release
- Incident response plan
- Key contact information
- Privacy escalation framework
Least privileges in deployment
- Deploy applications using the minimal privileges needed for the application to function.
e.g. using a new user instead of root to run a service.
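The root-vs-service-user example above can be sketched in code. A minimal Unix-only sketch, assuming a hypothetical unprivileged account named `appsvc` already exists:

```python
import os
import pwd

def drop_privileges(username="appsvc"):
    """If running as root, switch to an unprivileged service account.

    Order matters: supplementary groups and the GID must be changed
    while still root, because setuid() gives up the right to do so.
    """
    if os.getuid() != 0:
        return  # already unprivileged; nothing to do
    pw = pwd.getpwnam(username)  # hypothetical account name
    os.setgroups([])             # drop supplementary groups
    os.setgid(pw.pw_gid)         # group first...
    os.setuid(pw.pw_uid)         # ...then user; this is irreversible
```

A service would typically bind any privileged ports first, then call this before handling untrusted input.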
Example of OS hardening:
- Only install what is necessary for your purpose.
- Strictly limit user accounts and disable or rename default accounts.
- Establish strong password policies for the OS and all installed applications.
- Use a packet filter or firewall to restrict access and isolate the machine on the network
- Keep the system up-to-date with the latest operating system, web server, database and other SW patches.
- Set file and directory permissions to the least necessary to run the required applications.
- Review OS settings that can improve system security.
- Ensure that proper system auditing and log file management is in place.
- Avoid installing software development and debugging tools on the server.
- Install anti-virus and other security software as appropriate
- Consider using a hardening guide or tool appropriate for your operating system.
- Ensure that the server is physically secure.
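One of the checks above ("set file and directory permissions to the least necessary") can be automated with a small audit script. A sketch; the function name is illustrative:

```python
import os
import stat

def find_world_writable(root):
    """Walk a directory tree and list world-writable files and directories,
    a common finding in OS hardening reviews (symlinks are skipped)."""
    findings = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue  # unreadable entry; a real audit would log this
            if stat.S_ISLNK(mode):
                continue
            if mode & stat.S_IWOTH:  # "other" write bit set
                findings.append(path)
    return findings
```

Running this periodically against application and configuration directories catches permission regressions introduced by deployments.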
Example of improving the security of web servers:
- Install only modules/services necessary for your application
- Use appropriate file and directory permissions.
- Disable directory browsing
- Review server settings that can improve platform security
- Remove default, demo, backup, temporary and other directories not appropriate for a production server.
- Remove, rename, or restrict IP address access to administrative directories.
- Disable or reconfigure error reporting so users never see detailed error messages
- Disable or block HTTP methods not needed for your app.
- Modify server headers so they do not reveal the server platform or version.
- Review script interpreter and app framework settings to ensure that proper limits and security settings are in place.
- Consider using a hardening guide or tool appropriate for your web server and app framework.
- Ensure that the server is physically secure.
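Two of the web-server items above (blocking HTTP methods the app does not need, and not leaking platform details in headers) can be illustrated at the application layer with a plain-WSGI sketch. The allowed-method set is a hypothetical example, not a recommendation for every app:

```python
ALLOWED_METHODS = {"GET", "HEAD", "POST"}  # assumed: only what this app needs

def hardened_app(environ, start_response):
    """Minimal WSGI app: reject HTTP methods the application does not use,
    and send only generic headers (no server platform or version)."""
    method = environ.get("REQUEST_METHOD", "")
    if method not in ALLOWED_METHODS:
        start_response("405 Method Not Allowed",
                       [("Allow", ", ".join(sorted(ALLOWED_METHODS))),
                        ("Content-Type", "text/plain")])
        return [b"method not allowed"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]
```

In practice the same policy is usually also enforced in the web server configuration itself, so the restriction holds even if the application is bypassed.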
Example of improving the security of database servers
- Remove or disable unnecessary database features or services
- Strictly limit user accounts and disable or rename default accounts
- Use a packet filter or firewall to tightly restrict access to database ports
- Remove any demo, testing, training and all other databases not necessary for the web application
- Carefully configure user roles and permissions to strictly limit access for web application accounts. Never use DBA, root or system accounts for general db access.
- Disable stored procedures that are not required for the application
- Consider using a hardening guide or tool appropriate for your database server.
- Ensure that the server is physically secure.
Incident response plan
- Incident response team members
- Including emergency contact information
- Roles and responsibilities
- Communication policies
- Process for validating incidents and developing patches
- Provisions to ensure compliance
Security patching process
- Platform and app update procedures and anticipated delays
- Document the process for deploying security patches and updates on each service. Include anticipated delays
- Non-technical procedure
- How to contact customers
- How to handle extended delays during the patch approval process.
- App security bug bar
- Define what severity must go into the field when.
- High severity - now, Low severity - later
- Third-party code and services used by applications
- Document all third-party code, libraries and services used by the app.
- Document procedures for deploying patches for those components.
- Alternative patch delivery methods
- In case traditional delivery methods cannot be used.
- E.g. highly critical patches that need to go out right now.
- Escalation paths
- Document support and escalation paths, including contact info and procedures for escalation.
- Availability of on-call support resources
- Document the expected availability of on-call support resources required for each type of patch, based on severity.
Response
Microsoft SDL Optimization Model
- LOB - Line of Business
- TAM - Threat Analysis and Modeling Tool
Capability areas
- Training, policy and Organization Capabilities
- Requirements and Design
- Implementation
- Verification
- Release and Response
Maturity levels
- Basic
- Standardized
- Advanced
- Dynamic
Steps
- Identify
- Self-assess
- Implement
LOB (8.34-)
- Secure design
- Authentication
- Authorization
- Asset handling
- Auditing
- Secure comm
- Secure coding
- Integer overflow/underflow
- Input validation and handling
- Regulatory compliance
- SOX
- HIPAA
- GLBA
- PCI
App Risk level
- High
- Design review
- Pen test
- Code review
- Priv review
- Threat model
- Deployment review
- Medium
- Code review
- Priv review
- Threat model
- Deployment review
- Low
- Threat model
- Deployment review
Bug Bar
- Critical
- Impact across the enterprise, not just the local LOB application/resources
- Exploitable vulnerability in deployed production application
- Important
- Exploitable security issue
- Policy or standards violation
- Affects local application or resources only
- Risk rating = High Risk
- Moderate
- Difficult to exploit
- Non-exploitable due to other mitigations
- Risk rating = Medium risk
- Low
- Bad practice
- Non-exploitable
- Not directly exploitable, but may aid in other attacks
- Risk rating = minimal risk
Post-release
- Regular host-level verification of patch management
- Compliance
- Network scanning
- Host scanning
- Responding to hot-fixes, service pack releases and more
Training
- Core security training
Requirements
- Establish security requirements
- Create Quality gates/Bug bars
- Security and privacy risk assessment
Design
- Establish design requirements
- Analyze attack surface
- Threat modeling
Implementation
- Use approved tools
- Deprecate unsafe functions
- Static analysis
Verification
- Dynamic analysis
- Fuzz testing
- Attack Surface review
Release
- Incident response plan
- Final security review
- Release archive
Response
- Execute incident response plan
Implementing the MS SDL Threat modeling tool
SDL model
- LOB - Line of Business
- TAM - Threat Analysis and Modeling Tool
How to create application security design requirements
eng211
- ASM - application security maturity
Tools and Technology Investments:
- Version control systems
- Source code scanning tools
- Defect management control systems
- Test automation systems
- Web application vulnerability scanning
- Application-layer security mitigation, e.g., web application firewall
People and Process Investments:
- Secure SDLC activities for development teams
- Staff training (technical & awareness)
- Internal "Red Teams"
- Third-party security reviews (at code and as-built layers)
- Application security audit procedures for all application types
- Integration with IRM, compliance, and governance initiatives
A successful approach to security engineering involves:
- Identifying security objectives
- Understanding the security objectives for your application early helps guide threat modeling, code design and development, code reviews, and testing.
- Knowing the threats to your application
- Using an iterative approach
- Some activities should be performed multiple times during the development process to maximize application security.
Key Security Activities:
- Identify security objectives
- Apply security design guidelines
- Conduct security architecture and design reviews
- Create threat models
- Perform security code reviews and penetration tests
- Conduct security deployment reviews
Security Goals and Objectives
- Availability
- Integrity
- Confidentiality
Use your security goals to:
- Filter the set of applicable design guidelines
- Guide threat modeling
- Scope and guide architecture and design reviews
- Help set code review objectives
- Guide security test planning and execution
- Guide deployment reviews
Questions to ask:
- Tangible Assets to Protect
- Are there user accounts and passwords to protect?
- Is there confidential user information (such as credit card numbers) that needs to be protected?
- Is there sensitive intellectual property that needs to be protected?
- Can this system be used as a conduit to access other corporate assets that need to be protected?
- Intangible assets to protect
- Are there corporate values that could be compromised by an attack on this system?
- Is there potential for an attack that may be embarrassing, although not otherwise damaging?
- Compliance requirements
- Are there corporate security policies that must be adhered to?
- Is there security legislation you must comply with?
- Is there privacy legislation you must comply with?
- Are there standards you must adhere to?
- Are there constraints forced upon you by your deployment environment?
- Quality of service requirements
- Are there specific availability requirements you must meet?
- Are there specific performance requirements you must meet?
Example : Matrix approach
| Roles/Assets | User creation | Permission modification | Asset creation | Asset removal | Asset read |
| --- | --- | --- | --- | --- | --- |
| Admin | x | x | | | |
| Content Creator | | | x | x | x |
| Reader | | | | | x |
| Anonymous | | | | | x |
- Original requirement: Administrator must be able to add users and change user permissions through an administrator interface.
- C – Protect user names and permissions from disclosure to a non-administrator.
- I – Restrict a non-administrator from adding users and changing user permissions.
- A – Application is available so that the administrator is able to add users and change user permissions.
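The matrix above maps directly onto a simple role-to-permission lookup. Role and action names are transcribed from the example; the function itself is an illustrative sketch:

```python
# Role -> allowed actions, transcribed from the example matrix.
PERMISSIONS = {
    "Admin": {"user_creation", "permission_modification"},
    "Content Creator": {"asset_creation", "asset_removal", "asset_read"},
    "Reader": {"asset_read"},
    "Anonymous": {"asset_read"},
}

def is_allowed(role, action):
    """Deny by default: unknown roles and unlisted actions are refused."""
    return action in PERMISSIONS.get(role, set())
```

Encoding the matrix as data keeps the authorization check in one place, which also supports the "centralized" and "separation of privileges" guidance later in these notes.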
Design guidelines
Guidelines must be:
- Actionable - Associated with a vulnerability that can be mitigated through the guideline
- Relevant - Associated with a vulnerability that is known to affect real applications
- Impactful - Represents key engineering decisions that will have wide-ranging impact; design mistakes can have a cascading impact on your development life cycle
Potential problems due to bad design, with best practices:
- Input/Data validation
- e.g: Insertion of malicious strings in UI or public APIs
- Results can range from information disclosure to elevation of privilege and arbitrary code execution
- Best practices:
- Do not trust input;
- consider centralized input validation.
- Do not rely on client-side validation.
- Be careful with canonicalization issues.
- Constrain, reject, and sanitize input.
- Validate for type, length, format, and range.
- Authentication
- e.g: Identity spoofing, password cracking, elevation of privileges, and unauthorized access.
- Best practices:
- Use strong passwords.
- Support password expiration periods and account disablement.
- Do not store credentials (use one-way hashes with salt).
- Encrypt communication channels to protect authentication tokens.
- Authorization
- e.g: Access to confidential or restricted data, data tampering, and execution of unauthorized operations.
- Best practices:
- Use least-privileged accounts.
- Consider authorization granularity.
- Enforce separation of privileges.
- Restrict user access to system-level resources.
- Configuration management
- e.g: Unauthorized access to administration interfaces, ability to update configuration data, and unauthorized access to user accounts and account profiles.
- Best practices:
- Use least-privileged process and service accounts.
- Do not store credentials in clear text.
- Use strong authentication and authorization on administration interfaces.
- Secure the communication channel for remote administration.
- Sensitive data
- e.g: Confidential information disclosure and data tampering.
- Best practices:
- Avoid storing secrets.
- Encrypt sensitive data over the wire.
- Secure the communication channel.
- Provide strong access controls for sensitive data stores.
- Cryptography
- e.g: Access to confidential data or account credentials, or both.
- Best practices:
- Do not develop your own.
- Use proven and tested platform features.
- Keep unencrypted data close to the algorithm.
- Use the right algorithm and key size.
- Avoid key management (use DPAPI).
- Cycle your keys periodically.
- Exception management
- e.g: Denial of service and disclosure of sensitive system-level details.
- Best practices:
- Use structured exception handling.
- Do not reveal sensitive application implementation details.
- Do not log private data such as passwords.
- Consider a centralized exception-management framework.
- Auditing and logging
- e.g: Failure to spot the signs of intrusion, inability to prove a user's actions, and difficulties in problem diagnosis.
- Best practices:
- Identify malicious behavior.
- Know what good traffic looks like.
- Audit and log activity through all of the application tiers.
- Secure access to log files.
- Back up and regularly analyze log files
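The input validation practices above ("constrain, reject, and sanitize"; "validate for type, length, format, and range") can be sketched as a whitelist validator. The username policy here is an assumed example, not a universal rule:

```python
import re

# Assumed policy: 3-20 characters, letters, digits, underscore only.
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,20}")

def validate_username(raw):
    """Constrain-and-reject validation: accept only known-good input
    (whitelist) instead of trying to enumerate dangerous input."""
    if not isinstance(raw, str) or USERNAME_RE.fullmatch(raw) is None:
        raise ValueError("invalid username")
    return raw
```

Raising on anything outside the whitelist is what makes this a server-side control; client-side checks can only duplicate it for usability, never replace it.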
Architecture and design reviews
The goals of performing an architecture and design review are to analyze application architecture and design from a security perspective, and to expose high-risk design decisions that have been made.
Organize your reviews by common application vulnerability categories and look at the areas in which mistakes are most often made and can have the most security impact.
For each vulnerability category, keep best practices in mind. Guidelines and best practices will help you discover gaps in your design or areas where mistakes are being made.
Each vulnerability category should have its own set of potential problems that you can check against. These checklists represent areas where mistakes are most often made. A checklist driven approach will ensure consistent high-quality reviews over time and will give you confidence you are achieving a high degree of coverage. As you perform your reviews, the checklists can grow based on additional issues you find and knowledge you acquire along the way.
The goals of performing an architecture and design review are to:
- Analyze application architecture and design from a security perspective.
- Expose high-risk design decisions that have been made.
Example review checklist for input validation:
- All entry points and trust boundaries are identified by the design.
- Input validation is applied whenever input is received from outside the current trust boundary.
- The design assumes that user input is malicious.
- Centralized input validation is used where appropriate.
- The input validation strategy that the application adopted is modular and consistent.
- The validation approach is to constrain, reject, and then sanitize input. Looking for known, valid, and safe input is much easier than looking for known malicious or dangerous input.
- Data is validated for type, length, format, and range.
- The design addresses potential canonicalization issues.
- Input file names and file paths are avoided where possible.
- The design addresses potential SQL injection issues.
- The design addresses potential cross-site scripting issues.
- The design does not rely on client-side validation.
- The design applies defense in depth to the input validation strategy by providing input validation across tiers.
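The SQL injection item in the checklist above is usually addressed with parameterized queries. A sketch using the stdlib `sqlite3` module; the table and column names are illustrative:

```python
import sqlite3

def find_user(conn, username):
    """Parameterized query: the driver treats `username` strictly as data,
    so attacker-supplied quotes cannot change the SQL statement."""
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()

# Minimal in-memory demo schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
```

The classic `' OR '1'='1` payload simply fails to match any row here, because it is compared as a literal string rather than spliced into the SQL text.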
Create a list of questions about the security of each of the following three application components:
- Deployment and infrastructure (host, network)
  - Review the design of your application in relation to the target deployment environment and the associated security policies.
- Security frame (input validation, authentication, ...)
  - Review the critical areas in your application, such as authentication, authorization, input/data validation, exception management, and other areas.
- Layer-by-layer (presentation, business, data)
  - Walk through the logical tiers of your application, and evaluate security choices within your presentation, business, and data access layers.
Security testing fundamentals
tst101
Threat model
Three main approaches to threat modelling:
- Attack-centric - anticipate what an attacker might do, and derive risks based on that perspective
- Software-centric - identify potential attacks against each element of the software design.
- Service
- Asset-centric - examine the assets managed by an application, such as sensitive information, or intellectual property, and derive risks based on potential threats to those assets.
- Sensitive data
Tools
- Vulnerability scans
- API
- Preprogrammed for known software
- No findings does not mean your application is free of issues
- Penetration testing - you attack an application to identify and exploit its vulnerabilities.
- Penetration tests are performed by actual security experts who use both custom and off-the-shelf tools and techniques.
- Unlike vulnerability scanners, penetration testers can adapt to custom protocols and business logic.
- Static analysis tools
- Pros
- Cons
- Detect implementation flaws only
- Code review
- Pros
- Cons
- Time consuming
- Fatigue
- Different reviewers, different areas of expertise.
Fundamentals of secure architecture
DES101
Please visit the NIST homepage for the most current versions of the NIST 800-53 controls referenced in this course https://csrc.nist.gov
Produce design specifications that:
- Support organization’s security architecture
- Describe security functionality
- Use security controls
- Explain security functions, mechanisms, and services
- Formal policy
- The SA-17(1) Formal Policy Model is a relevant NIST 800-53 control, which requires developers to produce a security development process that integrates a formal policy model that describes the organizational security policy.
- Describes security-defined elements of the organizational security policy
- Proves the formal policy model enforces these elements
Security-relevant components are any hardware, software, or firmware vulnerable to malicious attacks. (I assume this is what the threat model covers?)
TLS - Top-Level Specification (in this SA-17 context; not Transport Layer Security)
The purpose of formal correspondence is to:
- Produce a formal top-level specification
- Demonstrate consistency with the formal policy model
- Describe security-relevant components
- Explain why any security-relevant components are not addressed
The purpose of Informal correspondence is to:
- Implement exceptions, error messages, and effects
- Demonstrate the informal top-level specification is consistent with the formal policy model
- Describe security relevant components in top-level specification
- Informally demonstrate or explain why security-relevant components are not addressed
SA-17(5) Conceptually Simple Design is the relevant NIST 800-53 control. It requires:
- Conceptual design created by company
- Design using simple protection mechanisms
- Internal structure with regard for these mechanisms
Testing
Testing should be structured based on the organization’s testing criteria. These criteria use test cases, reports, and both static and automated testing, including testing tools for the application, system, network resources, and hardware. SA-17(6) Structure for Testing is a relevant NIST 800-53 control.
- Penetration Testing – Network and Hardware: use Wireshark.
- SAST – Application testing: using Fortify static analyzer.
- DAST – Application testing: There is Checkmarx for this.
- UAT testing: the company should use Black Duck
How to structure testing...
- Structure testing based on security-relevant components
- Facilitate testing using Agile testing methodology
Least Privilege is all about controlling access of network access and the resources of the organization. It is making sure that only people who need access have that access and SA-17(7) Structure for Least Privilege is a relevant NIST 800-53 control.
There should be a Structure for Least Privilege, a "Least Privilege" policy, or an Acceptable Use Policy (AUP) in place, and it should be used when assigning permissions.
Use the Principle of Least Privilege (POLP). It states that every program and every user of the system should operate using the least set of privileges necessary to complete the job.
SA-17(7): Structure for Least Privilege Least Privilege requires organizations to:
- Implement control over network access and resources
- Establish the Acceptable Use Policy (AUP) with employees
Secure cloud instances
DES216
The Cloud Security Alliance (CSA) recently conducted research to identify the top threats to cloud computing.
These threats include:
- Data Breaches
- Insufficient Identity, Credential and Access Management
- Insecure Interfaces and APIs
- System Vulnerabilities
- Account Hijacking
- Malicious Insiders
- Advanced Persistent Threats
- Data Loss
- Insufficient Due Diligence
- Abuse and Nefarious Use of Cloud Services
- Denial of Service
- Shared Technology Issues
Let’s take a look now at the web-based threats and vulnerabilities most relevant to AWS Application Developers. The threats include:
- Sensitive Data Exposure
- Broken Authentication
- Insufficient Identity and Access Management
- Account Hijacking
- Reliance on Untrusted Components
- Advanced Persistent Threats
- Abuse and Nefarious Use of Cloud Services
- Denial of Service Attacks
- Shared Technology Issues
Top threats to user privacy include:
- Leaking sensitive information stored in the web browser
- Allowing others to intercept unencrypted web traffic
- Exposing sensitive data, such as passwords, through minor security breaches
- Revealing user identities or other personal information
What factors should we consider when storing data? Strong encryption is a fundamental and vital part of protecting sensitive data. In terms of storing data, there are two key considerations—using symmetric encryption algorithms such as AES to securely store data, and requiring a specific key—or password—to retrieve the data; and using iterative one-way hashing algorithms such as PBKDF2 or bcrypt to store hashes used to verify user passwords.
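The password-hashing consideration above (iterative one-way hashing such as PBKDF2) can be sketched with the stdlib primitive. The iteration count is an assumed example value; pick it for your own hardware and threat model:

```python
import hashlib
import hmac
import os

def hash_password(password, iterations=600_000):
    """Salted, iterated one-way hash (PBKDF2-HMAC-SHA256) for password storage.
    Store all three returned values; the salt is not secret."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

Storing the iteration count next to each hash lets you raise it over time and re-hash passwords as users log in.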
Key Management: Best Practices
When using symmetric encryption, follow best practices for protecting the encryption keys:
- Do not hard-code encryption keys in your application
- Plan file system permissions carefully to protect files that contain encryption keys
- Store encryption keys outside of the web content directories
- Build the application to support periodic key changes and establish a regular schedule for changing keys
- Do not include encryption keys in backups
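The key-management practices above (no hard-coded keys, keys outside web content directories, rotation-friendly) can be followed with a small loader. The environment variable name and path layout are assumptions for the sketch:

```python
import os

def load_encryption_key(env_var="APP_KEY_FILE"):
    """Read the current key from a file outside the web root, with the
    location supplied via configuration rather than hard-coded.

    Pointing the variable at a new file is how keys get rotated; the key
    file should also be excluded from backups per the guidance above.
    """
    path = os.environ.get(env_var)
    if not path:
        raise RuntimeError(f"{env_var} is not set; refusing to run without a key")
    with open(path, "rb") as f:
        return f.read()
```

Failing loudly when the key is missing is deliberate: silently falling back to a built-in default would reintroduce the hard-coded-key problem.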
Protecting Sensitive Data in Transit
TLS
web - Preventing Data Leakage
- Encrypt Communications Channels
- Always send user credentials and session tokens over secure encrypted channels, even for private internal communications
- Implement HTTP Strict Transport Security (HSTS) at the Web server to instruct compliant browsers to only communicate over Transport Layer Security (TLS) to your application
- Do Not Disclose Information Via URLs
- Although content sent via Transport Layer Security (TLS) is encrypted, the URL is not
- Never reveal sensitive information via URLs
- Use Trusted Certificates
- For private intranet applications, always use certificates signed by an organization certificate authority
- For public applications, always use certificates signed by a recognized and trusted certificate authority for public applications
- Never use self-signed certificates, not even for internal applications
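The HSTS item above amounts to adding one response header. A small helper sketch; the one-year `max-age` is an assumed example value:

```python
def add_hsts_header(headers, max_age=31536000, include_subdomains=True):
    """Append a Strict-Transport-Security header so compliant browsers
    refuse plain-HTTP connections to this host after the first visit."""
    value = f"max-age={max_age}"
    if include_subdomains:
        value += "; includeSubDomains"
    return list(headers) + [("Strict-Transport-Security", value)]
```

The header only takes effect when served over TLS, so it complements rather than replaces redirecting HTTP to HTTPS.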
Risks of Weak Authentication
- Weak authentication is a gateway to system control.
- Consequences of weak authentication can include:
- Unauthorized modification of device settings
- Disruption of service
- Access to critical system controls
- Elevation of Privilege vulnerabilities may result in an attacker gaining unauthorized access to data or programs
Require Multiple Levels of Authentication To avoid the risks of weak authentication, follow these best practices:
- Require multiple authenticators
- Define roles: Create anonymous, normal, privileged, and administrative areas and identify which ones require a proven user identity
- Perform security checks on both the client and server: Ensure that any authentication checks performed on the client side are duplicated on the server side
- Protect communication channels: Ensure that all communication channels are protected
Sources of Untrusted Inputs
All input is untrustworthy and has the potential to cause harm. You should be skeptical of every
interface, including GUI, API, file system, network, and other third-party software. If you accept input, do not process it until it is validated. Check for the following when validating input:
- Look for valid input instead of guessing what bad data may look like - use a whitelist instead of a blacklist
- You can improve system performance by moving the primary responsibility for strict input validation to the trust boundary
Input can come from many untrusted sources, such as:
- Command-line parameters
- User files
- User input
- Data from external components or systems
- Databases
- Network components
- Libraries or services
- Third-party data repositories
Internal Directory and File Paths
- Applications often store files and other resources in specific locations in a file system. Unless absolutely necessary, these file paths should never be exposed.
User Names of Other Users
- Application user names should never be exposed unless required. Doing so gives malicious users valid account names that they can use to compromise the application.
Abuse and Nefarious Use of Cloud Computing
Highly available cloud resources are now commonplace. The conveniences these services provide to users include ease of access, relative anonymity, and robust sharing capabilities. However, these same features also result in vulnerabilities that are very attractive to hackers. Common attack scenarios can include:
- Botnets
- Distributed denial of service (DDoS) attacks
- Downloadable malware
- Hijacking of network traffic
- Theft of credentials
- Spam messages
- Automated click fraud
- Hosting malicious content
To mitigate these threats:
- Ensure that rigorous registration requirements are in place
- Monitor network communications for potential exploits
- Implement strong authentication
Insecure Application Programming Interfaces (APIs)
Application programming interfaces (APIs) from cloud service providers may have security vulnerabilities including:
- Easily exploited API keys used by web and cloud services to identify third-party applications
- Anonymous access, or reusable tokens or passwords
- Allowing clear-text authentication or transmission of content
- Rigid access controls that cannot be easily customized
- Difficult-to-monitor logs and exception reports, which reduce the likelihood that problems will be detected before they become exploits
Shared Technology Issues
The cloud computing infrastructure is shared among many users. Hackers constantly seek to penetrate and exploit systems within this shared framework.
This calls for a “defense-in-depth” approach to security that relies on both you and your Cloud Service Provider (CSP) playing active roles with clearly defined responsibilities. In many service models, the CSP is assigned primary responsibility for the outer layers of security, which often include the following:
- Data Center Security (Protecting physical locations of cloud servers)
- Network Security (Perimeter firewalls, intrusion detection systems, and TLS)
- Infrastructure Security (RAID, dedicated routers, and backup)
However, keep in mind that you, and not the CSP, are ultimately responsible for ensuring that your organization’s security requirements are met.
Securing network access
DES214
- access requester - is the device soliciting access to one or more resources on the network. This is typically a user's computer, but it can be any device attempting to access network resources.
- Policy Enforcement Point - is the component within the network that enforces the network access controls. This can be a programmable switch, a router, a firewall, or any device in the network that allows or denies access to resources.
- Policy Decision Point - is the component tasked with deciding what access is to be granted to the access requester based on some information.
Access control
DES217
- Information access control
- Logical access control - enforces the restrictions on the resources that users are permitted to access.
- Separation of duties
- It is good to have duties separated so that separate entities are assigned to carry out, approve, and monitor an action.
- For example, a common scenario for separation of duties involves financial transactions.
- To reduce the risk of financial fraud, two people may be required to approve transactions of certain sizes or types.
- Organizations also frequently employ maker/checker processes to ensure that developers who write code have their code reviewed by another person before it goes live. This helps reduce errors and helps to catch potentially malicious behavior.
- Access control to program source code libraries
- if proprietary business rules are revealed by a disclosure of source libraries, this can have an adverse impact on business and can provide an advantage to competitors.
- Access to all such source libraries should be restricted based on a need-to-know and need-to-access basis.
- Authorized development team members must access source libraries through the version control systems.
- If developers change teams, permissions should be updated according to the new privilege needs.
- A properly implemented change control mechanism ensures that all change requests are initiated by relevant business personnel and approved by the appropriate authorities before any change is made in the application. Once the business requirement for change is processed and approved, the application change process must also follow a logical and secure path.
- All changes should be made in the development environment by authorized development team members using the version control system. All access and changes to code made must be audited and monitored.
- Unique IDs for tasks
- E.g. a single-use credit card.
- A unique, task-based ID lessens the likelihood that a particular user account will be attacked because the attacker cannot track a specific user.
- Must be random, so it can't be predicted.
- Sensitive system isolation
- Common examples of such critical systems are database servers, centralized log servers, or encryption key escrow servers.
- Isolation of sensitive systems must be physical, as well as digital.
- Operational procedures
- specify particular actions that implement policies, standards, and guidelines.
- Procedures specify what actions people must take as part of the overall strategy for the entire organization.
- Change control
- Change control procedures specify precisely what must be done at any time when a change is to be made to an environment.
- Different environments have different needs, and therefore, different change control procedures.
- Effective change control relies on key concepts like the maker/checker principle, separation of duties, and separation of environments.
- Incident response
- Incident response procedures must provide detailed instructions for what the incident response team should do for each type of incident that may occur.
- This includes who to notify, when to notify them, and what information may be shared.
- Four stages:
- identify
- In the identification stage the procedures must specify the steps required to identify the particulars of the incident and how to document them.
- The procedures must also specify where that information should go, and how it should be transmitted.
- contain
- Once the incident has been sufficiently identified, the procedures need to specify what must be done to contain the damage.
- During containment, any system backups of sensitive data must be performed. This ensures that any critical data has been saved before the potential evidence has been wiped or eradicated.
- eradicate
- This stage happens only after the procedures for identification and containment have been met.
- recover
- In this stage the procedures must specify how the environment can be properly restored to the working environment that existed before the incident occurred.
- Daily operations
- specify what actions should be taken as part of normal daily operations. These include changing backup media, handling of restore requests, and various other tasks such as creating new accounts and resetting passwords.
- Formal allocation control procedures
- Authorization approvers
- Review of users access rights
- ID removal procedure
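The "Unique IDs for tasks" point above - random, unpredictable, single-use - can be sketched in Python with the `secrets` module. This is a minimal illustration; the function names and in-memory store are hypothetical.

```python
import secrets

# In-memory store of task IDs that have not yet been redeemed.
# A real system would persist this and attach an expiry time.
_issued = set()

def issue_task_id() -> str:
    """Issue a random, unpredictable, single-use task identifier."""
    task_id = secrets.token_urlsafe(32)  # ~256 random bits from a CSPRNG
    _issued.add(task_id)
    return task_id

def redeem_task_id(task_id: str) -> bool:
    """Accept the ID exactly once; any replay is rejected."""
    if task_id in _issued:
        _issued.discard(task_id)
        return True
    return False
```

Because the ID is drawn from a CSPRNG and discarded on first use, an attacker can neither predict a valid ID nor replay an observed one.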
Module
Security can't be bolted on. Touchpoints are the places in the lifecycle where security can be added:
1. Code review
2. Architecture risk analysis - identifies and prioritizes security-related design flaws in the software architecture
3. Penetration testing - only proves the presence of problems, not their absence
4. Risk-based security testing - white-box security testing
5. Abuse cases - define system behavior that must be prevented
6. Security operations - secure deployed software from the hostile environment in which it operates
- REQ: Security requirements + abuse cases
- Design: Architecture risk analysis
- Implementation: Code review
- Verification: Risk-based security tests + penetration testing
- Maintenance: Security operations
- User stories: Abuse cases
- SW arch: Architecture risk analysis (ARA)
- Acceptance testing and review: Risk-based security tests
- Refactor: Code review
SSI: Software Security Intelligence
- Attack models: capture information used to think like an attacker.
- Security features and design: include the creation of usable security patterns for major security controls, and the building of middleware frameworks that implement them.
- Standards and requirements: involve drawing out explicit security requirements from the organization.
SSG Provides:
- Secure frameworks
- secure libraries
SSI Training topics:
- Secure coding
- security architecture
- Threat modeling
- Security testing
Architecture risk analysis (ARA)
Relies on threat modeling.
- Finds design flaws others miss
- Drives other activities
- Requires advanced expertise
- Time-consuming
Manual code review
- Reveals precise location and cause of issues
- Provides insight into coding practices and programmer awareness
- Time-consuming
Static Application Security Testing (SAST) / static analysis
- Can be performed early on an incomplete code base
- Can assess large amounts of code quickly
- Is customizable
- Provides consistent results
- Requires significant investment in customization and fine-tuning
- Developers question exploitability of findings
- Requires source or compiled code
- Cannot detect environmental or third-party component issues
Dynamic Application Security Testing (DAST)
- Scalable to large web sites
- High confidence in exploitability of findings
- Less automation available for non-web technologies
- Requires disabling of security components
- Unable to pinpoint the root cause of problems
Fuzz testing
Insert bad data to attempt to crash the code.
- Low-cost but effective
- Requires running components or system
- Requires significant investment to harness test data
- Requires human determination of pass/fail
- Can generate a high volume of irrelevant findings
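A minimal fuzz-testing loop in the spirit of the notes above might look like this. Both the parser under test and the harness are toy examples, purely illustrative.

```python
import random

def parse_length_prefixed(data: bytes) -> bytes:
    """Toy parser under test: first byte is the payload length."""
    if not data:
        raise ValueError("empty input")
    length = data[0]
    payload = data[1:1 + length]
    if len(payload) != length:
        raise ValueError("truncated payload")
    return payload

def fuzz(rounds: int = 1000, seed: int = 1234) -> int:
    """Feed random byte strings to the parser; count inputs it rejects.

    A crash (any exception other than the parser's own ValueError)
    would propagate out of this loop and surface as a test failure.
    """
    rng = random.Random(seed)  # seeded so a failing input is reproducible
    rejected = 0
    for _ in range(rounds):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(16)))
        try:
            parse_length_prefixed(data)
        except ValueError:
            rejected += 1  # expected, controlled rejection of bad input
    return rejected
```

Seeding the generator is what makes findings reproducible - re-running with the same seed replays the exact input that triggered a crash.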
If you don't have a threat model:
- Create a list of high-risk items
- Consult the SDL-Agile list of high-risk code
- Determine relative exposure of each entry point
- Make necessary adjustments
Risk-based security testing
(Similar to expert exploratory testing, but focused on security.)
- Depth of testing provides especially relevant findings
- Occurs late in the lifecycle
- Requires running software
- Requires highly experienced testers with business-specific expertise
Penetration Testing
- High confidence in findings
- Find configuration and environment issues other techniques don't
- Does not prove absence of bugs
- does not always identify root cause of issue
- happens late in dev cycle.
Interactive Application Security Testing (IAST)
Uses debug tools.
- Better coverage than DAST alone
- More accurate than SAST
- Not as deep as manual dynamic testing
- Not as full an analysis as SAST
Risk Management Framework (RMF)
- Understand the business context
- extract and describe business goals, priorities and circumstances
- Identify the business risks
- Business risks directly threaten one or more of a customer's business goals.
- Quantify the possibility that certain events will directly impact business goals.
- Identify technical risks
- Map the technical risks, through business risks, to business goals.
- A technical risk is a situation that runs counter to the planned design or implementation of the system under consideration.
- Synthesize and prioritize risks
- Define the risk mitigation strategy
- Fix the artifacts and validate
- Measure and report on risk
Tracing risk throughout the SDLC
- who
- requirements elicitation
- user story elicitation
- use case elicitation
- Identity and access management
- access control
- where
- High level arch
- platform def
- tech def
- design discussions
- Enterprise arch and integration discussions
- what
- user story elicitation
- use case elicitation
- Misuse/abuse case elicitation and functional requirements
- how
- security code review
- vul assessment
- external sec review
- impact
- vuln management workflow
- risk rating and exception processes
- release management
- mitigation
- Secure design activities
- Vuln mgmt workflow
- release mgmt
- incident response
Foundations of Software Security Requirements
SW security requirements
- Negative circumstances
  - Describe what to do when something goes wrong, like entering a wrong password.
- Positive behavior
  - Describe how to handle invalid input, not just saying: I don't want it.
- Recognize the value of security in the requirements phase
- Define the goals of software security requirements
- Associate security requirements with the issues they address
- Recognize that a software security requirement is just a type of requirement
Secure programming
- Security through obscurity is a no-go
- Formal process to handle product vulnerabilities
  - Work with SCH team
  - Have a fast track for particularly nasty risks
- Learn from past mistakes
- Use defense in depth instead of relying on firewalls alone
- Patching is not sufficient protection
  - Patch efficiently; make it less of a hassle for the customer.
- Security features can add business value
- Security is both a product quality issue and a product differentiator
- Contain failures
- Ensure features are constrained; don't give out error information
- Think like an abuser
  - Intended behavior vs. actual behavior: traditional bugs, and most security bugs, live in the gap between them
- Tony Robbins test: use forceful statements
- Beware of crypto faerie dust: cryptography sprinkled on a design does not make it secure
Security elements
- Confidentiality (privacy): The right information is available to only the right people. Can you validate the user?
- Integrity: Information is correct, protected from unauthorized change. Only authorized people can modify the data or system.
- Availability: Systems and data are available when needed. Systems are resistant to denial of service attacks.
STRIDE Model
- Spoofing: Lying about identity.
- Tampering: Destroying or changing data
- Repudiation: Denying action has been done.
- Information disclosure: Copying sensitive/private data
- Denial of service: Blocking intended functionality
- Elevation of privilege: Obtaining unauthorized rights.
Understanding security
- Security is a characteristic of all products and systems.
- Confidentiality + Integrity + Availability = Trustworthiness.
- Security is an ongoing process to mitigate risks and correct discovered issues.
- Security is adversarial; attacks are continuous
- Think like a customer - what are their expectations
- Think like an attacker - how could a product be misused.
Inform the customer
- Provide customers with information on how to maintain security in regard to the product
- Tell the customer what the security risks and consequences are
Roles in Product security
- Architects and designers
- Understand requirements and threats
- Design adequate countermeasures.
- Supply chain
- Ensure our vendors deliver secure products.
- Rapidly resolve discovered vulnerabilities.
Commitment to Quality
- Continue to improve and apply your security knowledge
- Strive for security at every point in the product life cycle.
- Business case, requirements, design, build, test, run
- Respond rapidly to discovered vulnerabilities.
Abuser profile
- Intelligent
- Creative
- Lots of time
- Bored
Product based security defense
- #1 Rule - You're never secure, you mitigate risk.
- Function - Security is a function, not a box
- Not a Silver Bullet - The answer is not always IPS/IDS or Firewall
- Parasitic Industry - A lot of high tech insurance FUD by security vendors selling appliances
- Splintered Market - with products becoming features in perimeter unified threat management appliances eventually to be eaten by switches and routers
- Host vs. Perimeter - Host based and perimeter based. Perimeter gives false sense of security
- Attacks are aimed at hosts - mobiles or directly at network devices
- Host based security - does not stop at OSS. Every box with a CPU, memory, and an OS needs to be secured
Cost vs. Risk
- Remember Rule #1
- Understand the risk management mindset
- All risks can be mitigated for a price
- Security teams will always build "what if" attack scenarios. It’s their job
- Identify the likelihood of the threat
- What are the costs associated with risk mitigation
- Are business leaders willing to accept the risk?
- Remember business leaders make business decisions
- The "Gray Zone" of security
When you get wild scenarios, estimate the price of mitigating them: cost vs. tradeoff.
Policy vs. Controls
- Roles, rules and responsibilities are policy
- The customer decides the policies.
- Controls and engines give flexibility to meet different policies
- Every operator policy is different
- Best practices and templates can be supplied to give guidance toward policy
- Do not assume we are writing or enforcing policy
Reactive process
- Programs to identify exploits
- Vulnerability scanning / SQA testing
- CERT, Bugtraq, and other security sites
- Create database to track deployed software in all products
- Map database to customer contacts under maintenance for rapid communications
- Rapid response to customers with software updates to patch devices with ratings associated with affected vulnerabilities
- 24-hours to mitigate risk, bulk of time spent patching vs. tracking
Host Based Management Access Control Lists (ACL) (1st line of defense)
- ACL vs. Firewall - Management plane ACLs (not to be confused with in-line ACLs or stateful firewalls)
- No Stateful Requirement - Traffic destined to the host does not require stateful inspection engine.
- Protect Open Sockets - First line of host based defense to permit/deny/redirect specific source IP addresses to specific open TCP/UDP/ICMP ports that are listening on the host
- ACL Rules - Ability to permit, deny, redirect, log on a per access list line basis and can see matches per rule and export by Syslog or SNMP traps.
- Paradigm Shift - All IP services are disabled by default. Port scanning yields no ports open with default platforms.
Host Based Denial of Service (DoS) Protection (2nd Line of Defense)
- Intelligent Rate Limiting - not the same as host based IDS/IPS.
- Protect the CPU/memory - to survive and serve real requests vs. hacking requests. Intelligence vs. bulk shedding.
- 2 Buffer Schemes - Use of temporary buffers vs. single buffer ability to protect against draining the CPU/memory resources on the host.
- When a SYN comes in, put the IP address in a buffer, and don't take any more from that IP address until the handshake is done.
- Unauthorized characters - are scrubbed (i.e. shell meta characters) and not allowed to get to the CPU
- Spoofed address tracking – showing number of packets matching in syslog/SNMP style logs/traps
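The SYN-buffering idea above can be sketched as a tiny per-source guard. This is a simplified illustration only - real DoS protection of this kind lives in the kernel or network stack, and the class and method names here are hypothetical.

```python
import time

class HalfOpenGuard:
    """Sketch of the per-source buffering described above: once a source
    IP has an unfinished handshake, further SYNs from it are dropped
    until the handshake completes or a timeout expires."""

    def __init__(self, timeout=5.0):
        self.timeout = timeout
        self.pending = {}  # src_ip -> time its pending SYN arrived

    def on_syn(self, src_ip, now=None):
        """Return True if the SYN is accepted into the temporary buffer."""
        now = time.monotonic() if now is None else now
        started = self.pending.get(src_ip)
        if started is not None and now - started < self.timeout:
            return False  # one handshake still outstanding: drop
        self.pending[src_ip] = now
        return True

    def on_handshake_complete(self, src_ip):
        """Handshake finished; the source may open new connections."""
        self.pending.pop(src_ip, None)
```

The timeout is what protects the buffer itself: a spoofed source that never completes the handshake only occupies its slot for a bounded time.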
AAA Client and Server Services (User-to Machine)
- AAA - Authentication, Authorization, and Accounting
- Centralized AAA - System can delegate the password policy with AAA client using TACACS+ or RADIUS
- Open Standards – Use of these protocols prevents you from being locked into a specific vendor's implementation
- Local AAA - Optional fallback to local AAA services in case of remote reachability problems to central AAA servers
- Local Password Encryption - Admin passwords in the local database are encrypted using a one-way algorithm
- Token Based Systems – AAA server can optionally communicate with systems on the back-end (e.g. RSA ACE Server)
Securing Management
- Securing Legacy Protocols – SSH vs. Telnet, SNMP v2c vs. SNMPv3, FTP vs. S-FTP, RCP vs. SCP, etc..
- TACACS+ Centralizes RBAC – can centralize management AAA, command level authorization, accounting and audit trails where Radius or Diameter cannot
- Local RBAC – 16 privilege levels, users can be restricted to specific contexts, command level authorization
- Emergency Recovery – needs to be standardized. Customers will lock themselves out from the systems.
- Configurable MOTD – Message of the Day banners must be configurable, no default message provided
- Evil Packets - Directed broadcast, ICMP redirects, ICMP unreachables, ICMP mask reply messages, proxy arp, and source routing can be disabled on the host
Machine-to-Machine Signaling
- Transport Layer Security - All signaling protocols between machines can optionally be wrapped with transport layer security
- Not VPNs - Not re-creating a VPN any-to-any secure tunnel topology
- Control vs. Data Planes – different uses depending on where and for what purpose this type of control is being used
- Transport Primitives – IPSec, SSL, TLS, SSH and MD-5 hash using PSK
Logging
- Fault Management - Network events are logged via Syslog or SNMP traps
- ACL Logging - ACL counters and logging (header recording)
- Packet sampling - to investigate attacks vs. promiscuous sniffing
- Debugging - Onboard extensive debugging capabilities used for troubleshooting security incidents
- Persistent Command Logging - keeps track of commands across power cycle
Perimeter Security
- Flawed - No such concept as "trusted" any longer
- First Line - Still can be a useful first line of defense in a layered security model
- Unified Appliance - Product specific appliances are becoming features in unified perimeter security platforms eventually into IP routers and switches
- No Substitution - Never use perimeter devices in place of host based security
- Part of a solution – reseller/integrator status
- Products vs. Function – Makes people believe that a box can solve their security needs – thereby vendors can make a product and an industry around security
Meet The Parasites
- Stateful packet/session inspection (Firewall) appliances
- Session Border Controller (Layer 5 firewall) appliances
- Virtual Private Networking (VPN) with IPSec/SSL concentrators
- Signature based Network Intrusion Prevention and Detection (NIPS/NIDS) Systems
- Network Behavior Anomaly Detection (NBAD) appliances
- Distributed Denial of Service (DDoS) Prevention appliances
- SPAM/SPIT (Spam over Internet Telephony) prevention appliances
- Anti-virus Detection appliances
Security Questions
- What are you protecting against?
- What is the likelihood of the attack?
- Do you always need a perimeter device?
- What host based controls make sense?
- How are those controls configured?
- Will it affect performance or usability?
- Are you enabling or mandating security policy?
- Are you secure?
Practical stuff
Authentication
Mitigating broken authentication
DES223
Broken authentication and session management vulnerabilities are those in which an attacker exploits flaws to impersonate other users either through session hijacking, session manipulation, or credential discovery.
These flaws are usually the result of poor session control and isolation, weak password recovery and account management functions, and inadequately secured transmission or storage of user credentials.
Forgotten password functionality
Do's
- Always use TLS-encrypted forms for user login
- Validate form input to help users avoid mistakes
- consider countermeasures for handling brute-force attacks and credential harvesting
- consider supporting third-party authentication providers (e.g. Google or Facebook.)
- Use generic error messages, as too much information can be used by attackers.
- Always provide users with a logout button to manually terminate a session.
- A set of strong password policies will help secure your users’ passwords and data.
- Implement strong yet usable and practical password complexity requirements, including fixed but reasonable expiration dates.
- Always notify users of password changes via email or SMS, but never send the actual passwords.
- Always ask for the previous password when setting a new password, to ensure that not just anybody can reset a user’s password.
- Expiring all current sessions after changing passwords can help with this.
- Finally, provide two-factor authentication features for sensitive applications.
- Hardware devices, software tokens, or SMS one-time-passwords greatly enhance account security.
- Session tokens are another potential avenue for attack, and should be secured.
- Store session identifiers in a generic variable that does not allow fingerprinting or profiling.
- Ensure that the session ID is sufficiently long and is created using strong random number generators, as this makes it harder to brute force or simply guess for an attacker.
- The session ID should only contain a single session identifier and never contain any other identifying information.
- Always store session identifiers in cookies and never rely on sending session IDs via URL parameters, hidden form fields, or custom HTTP headers.
- Generate new session identifiers:
- after user login,
- immediately after privilege escalation or role change
- after sensitive operations such as password changes.
- Always set both absolute and relative time limits on session identifiers to ensure proper session expiration.
- use well-tested framework or built-in platform session management features instead of implementing your own.
- Only store session identifiers in session cookies.
- Always set the Secure cookie attribute to ensure that the application always transmits cookies over secure connections.
- Always set the HttpOnly cookie attribute to ensure that scripts cannot access cookies via the DOM document.cookie object.
- Always send user credentials and session tokens over secure encrypted channels, even for private internal communications.
- Always use certificates signed by an organizational certificate authority for private intranet applications and by a recognized and trusted certificate authority for public applications.
- Familiarize yourself with regulations and standards required for your organization and industry.
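Several of the do's above - a long session ID from a strong random generator, the Secure and HttpOnly cookie attributes, and an absolute session lifetime - can be sketched with the Python standard library. `new_session_cookie` is a hypothetical helper, not part of any framework.

```python
import secrets
from http.cookies import SimpleCookie

def new_session_cookie() -> str:
    """Build a Set-Cookie header line for a fresh session."""
    session_id = secrets.token_urlsafe(32)  # ~256 bits from a CSPRNG
    cookie = SimpleCookie()
    cookie["session"] = session_id            # opaque identifier only
    cookie["session"]["secure"] = True        # transmit over TLS only
    cookie["session"]["httponly"] = True      # hidden from document.cookie
    cookie["session"]["samesite"] = "Strict"  # extra CSRF hardening
    cookie["session"]["max-age"] = 1800       # absolute lifetime: 30 min
    return cookie.output()
```

Note the ID carries no user data at all - the server looks everything up by this opaque token, so the cookie reveals nothing if stolen from a log.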
Don'ts
- Avoid pop-up windows that do not show the address bar and TLS validation icon.
- never use hidden form fields for storing authentication-related information.
- avoid using the remember me functionality with high-value applications,
- Do not automatically assign temporary passwords.
- Never accept user-provided session identifiers if the application did not generate that ID for the user.
- Never transmit session-related content over non-TLS connections, including creative assets such as style sheets and graphics.
- Never store session-related values in persistent cookies.
- never reveal sensitive information in URL. Although TLS content is encrypted, the URL itself is not, and it should never reveal sensitive information.
- Never use self-signed certificates, even for internal applications.
- Do not mix secure and non-secure content in secure areas of the application.
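The brute-force countermeasure mentioned in the do's above can be sketched as a fixed-window login throttle. This is a simplified illustration with hypothetical names; a real deployment would also throttle per source IP and persist state across restarts.

```python
import time
from collections import defaultdict

class LoginThrottle:
    """Allow at most `limit` failed attempts per account per `window`
    seconds - one simple brute-force countermeasure."""

    def __init__(self, limit=5, window=300.0):
        self.limit = limit
        self.window = window
        self.failures = defaultdict(list)  # username -> failure timestamps

    def allowed(self, username, now=None):
        """Return True if another login attempt may proceed."""
        now = time.monotonic() if now is None else now
        # Keep only failures still inside the current window.
        recent = [t for t in self.failures[username] if now - t < self.window]
        self.failures[username] = recent
        return len(recent) < self.limit

    def record_failure(self, username, now=None):
        """Call after a failed login to count it against the account."""
        now = time.monotonic() if now is None else now
        self.failures[username].append(now)
```

Only failures count against the limit, so legitimate users are never locked out by their own successful logins.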
Sensitive data exposure
DES224
- Key derivation functions increase the size and entropy of a password before hashing. These algorithms perform cryptographic functions over many hundreds of thousands of iterations, applying a salt or master key in each iteration, in order to:
  - Hinder brute-force attacks by increasing the cost in both CPU cycles and memory overhead
  - Increase the bit length and entropy of short passwords
  - Reduce exposure to cryptanalytic and timing attacks by working from keys of standard length and high entropy
  - Add additional salt and optionally a master key or password
- Note that key derivation functions do not compensate for weak passwords.
- https://en.wikipedia.org/wiki/Key_derivation_function
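As an illustration of the salt and iteration-count points above, here is a PBKDF2-based password hash using only the Python standard library. This is a sketch; the iteration count and salt size are assumptions for illustration, not a recommendation for any specific system.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=600_000):
    """Derive a storable password hash with PBKDF2-HMAC-SHA256.

    The per-user random salt and the high iteration count implement the
    points above: added entropy, and a deliberately high CPU cost per
    guess for an attacker running a brute-force attack.
    """
    if salt is None:
        salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, digest):
    """Re-derive and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)  # resists timing attacks
```

Storing the salt and iteration count alongside the digest lets the iteration count be raised later as hardware gets faster, without invalidating old hashes.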
Do's
- Encrypt sensitive data.
- Use up-to-date strong algorithms.
- Use sufficient key lengths.
- Use salt with password hashes.
- Use good key management.
- Best practices for protection of the encryption keys.
- Never hard-code encryption keys in your application.
- Carefully plan file system permissions to protect files that contain encryption keys.
- If possible, store encryption keys outside of the web content directories.
- Build the application to support periodic key changes and establish a regular schedule for changing keys.
- Do not include encryption keys in backups; instead, back up and store them separately.
- Use TLS v1.2+, not SSL.
- Use TLS for all communication, even internal
- Disable HTTP and implement HSTS
- Disable SSL, only allow TLS and disable TLS 1.0.
- Disable TLS compression and renegotiation initiated by client
- Use the Secure flag on all authentication cookies
- Keep session tokens off the URL
- Use vulnerability scanners
- Use X.509 certificates with an RSA or DSA key with more than 2048 bits.
- Store private keys in a secure location, never on the server itself.
- Keep certificates up-to-date and include all applicable domain names so that the user is never presented with a certificate error.
- Use extended validation (EV) certificates if appropriate for your organization.
- Consider using HSTS, HTTP Public Key Pinning, and OCSP Stapling.
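Several of the TLS do's above (TLS 1.2 as the floor, certificate verification, compression disabled) can be expressed with Python's `ssl` module. This is a client-side sketch; the HSTS header shown is what a server would add to its HTTPS responses.

```python
import ssl

def make_client_context():
    """TLS client context reflecting the rules above: TLS 1.2 as the
    minimum version, certificate and hostname verification enabled."""
    ctx = ssl.create_default_context()            # verifies certs + hostname
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # no SSL, no TLS 1.0/1.1
    # create_default_context() already sets OP_NO_COMPRESSION,
    # which disables TLS compression (CRIME defense).
    return ctx

# The HSTS response header a server would send to disable plain HTTP:
HSTS_HEADER = ("Strict-Transport-Security",
               "max-age=31536000; includeSubDomains")
```

Starting from `create_default_context()` rather than a bare `SSLContext` is the key choice: it inherits the library's hardened defaults instead of rebuilding them by hand.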
Don'ts
- Don't use homegrown algorithms.
- Don't use weak, out-of-date algorithms.
- Don't use weak random number generation.
- Never use a cipher suite with NULL encryption as these do not encrypt the TLS traffic.
- Never use a cipher suite with Anon as these do not verify certificates.
- Never use a cipher suite with an EXPORT-grade cipher as these only provide 40- or 56- bit security.
- Never use a cipher suite that includes RC4, MD5, DES, or 3DES.
- Never use self-signed certificates. They provide little security over non-encrypted communications and provide a false sense of security.
- Do not use X.509 certificates signed using MD5 hash, due to known collision attacks.
Cipher suites
A server and client will negotiate a cipher suite that both ends can support and have enabled.
- A cipher suite is composed of four algorithms:
- a handshake or key exchange algorithm
- used to exchange the symmetric encryption key and is based on public key cryptography.
- Out of the ephemeral handshake algorithms, the Elliptic Curve Diffie–Hellman Ephemeral (ECDHE) implementation provides the best performance, especially when combined with the Elliptic Curve Digital Signature Algorithm (ECDSA) for authentication.
- Beware of performance hits.
- Handshaking Algorithms with perfect forward secrecy: ECDHE, DHE
- Handshake algorithms without perfect forward secrecy: ECDH, DH
- an authentication algorithm
- used to verify the certificates.
- There are two main choices for authentication algorithms: Elliptic Curve Digital Signature Algorithm (ECDSA) and RSA. This choice is largely dictated by the Certificate Authority that issues the certificates.
- an encryption algorithm
- a symmetric encryption algorithm that is used to encrypt the communication itself.
- The AES in Galois Counter Mode (AESGCM) is strongly recommended, but AES-256 and even AES-128 are currently acceptable.
- a hashing algorithm.
- hash messages to assure integrity by preventing tampering.
- The recommended algorithms are SHA256, SHA384, and SHA512.
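As an illustration of the four roles, an OpenSSL-style suite name such as ECDHE-RSA-AES256-GCM-SHA384 can be split into its components. This toy parser handles only the common modern `KX-AUTH-CIPHER-MODE-HASH` naming pattern; real-world suite names vary far more.

```python
def describe_suite(name):
    """Map the parts of an OpenSSL-style cipher suite name onto the
    four algorithm roles described above."""
    parts = name.split("-")
    return {
        "key_exchange": parts[0],             # e.g. ECDHE (forward secrecy)
        "authentication": parts[1],           # e.g. RSA or ECDSA
        "encryption": "-".join(parts[2:-1]),  # e.g. AES256-GCM
        "hash": parts[-1],                    # e.g. SHA384
    }
```

Reading a suite name this way makes the notes above concrete: the recommended combination pairs ECDHE key exchange with AES-GCM encryption and a SHA-2 family hash.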
LFS260 k8s sec notes
- Basic principles
- Assessment
- Determine the value of assets and the cost of implementing security to protect those assets. (Sounds like 'threat modeling' fits in here.)
- Prevention
- Detection
- Detection involves monitoring through the use of various technologies, such as remote logging, system statistics, and performance metrics. (LFS260)
- Intrusion Detection and Prevention Systems (IDPS) are used to identify possible incidents, create a consistent audit trail, and report attempted intrusions (LFS260).
- Reaction
- Assessment
- A passive attack attempts to learn or make use of information from the system, but does not affect system resources; it compromises Confidentiality (LFS260).
- An attack is referred to as active when it attempts to alter system resources or affect their operation, so it compromises Integrity or Availability (LFS260).
- The 4Cs of Security
- Code
- Container
- Cluster
- Cloud
NIST Cybersecurity framework
- Identify
- Asset Management
- Business Environment
- Governance
- Risk Assessment
- Risk Management Strategy
- Supply Chain Risk Management.
- Protect
- Detect
- Respond
- Recover
- Identity management, credentials, and access
- Ongoing Administration
- Risk management process
- Responding to a security issue
- Securing data
- Network Security
- System Security
- Application Security
- wget https://training.linuxfoundation.org/cm/LFS260/LFS260_V2023-06-22_SOLUTIONS.tar.xz --user=XXX --password=XXX
- tar -xvf LFS260_V2023-06-22_SOLUTIONS.tar.xz
- gVisor is a Go-written sandbox often used along with rule-based execution tools such as SELinux or seccomp.
- Kata leverages hardware virtualization to provide per-container kernels.
- TUF - The Update Framework - is designed to provide a verifiable record of the state of the software, to avoid attacks that provide older or unapproved files. TUF is a graduated CNCF project.
- Gatekeeper also has a validating admission controller, which handles connections from the API server to OPA.