2024 Cyber @ Duke: Cybersecurity in the Age of AI - JoeyTaubert/Cyber-Summits-Conferences-Talks GitHub Wiki
Global AI Threat Landscape
- Kate Naunheim - Senior Consultant Director @ Palo Alto Unit 42
- Ryan Linn - Exec Dir of Cyber Threat Management @ Wells Fargo
Tools Used
- MITRE ATT&CK
- MITRE ATLAS
- Voice deepfake (missed the name)
TL;DR
Provided by ChatGPT
Key Risks from AI Adoption
- Shadow IT: Growing use of generative AI without governance.
- Expanded Attack Surface: AI and cloud increase exposure to attacks.
- Governance Gaps: Essential to establish policies on AI usage and boundaries.
- Data Exposure: Lack of visibility into data processed by LLMs.
- Increased Recon Capability: AI enhances data gathering for attacks.
- Novel Attacks: Phishing and system exploitation enhanced by AI.
Key Threats from AI Usage
- Accelerated Attack Capability: AI speeds up and scales attacks.
- New Vectors: AI can discover vulnerabilities in common software libraries.
- Sophisticated Phishing: Improved precision and realism in attacks (e.g., voice deepfakes).
- Precision Targeting: AI allows simultaneous, highly targeted attacks.
- Vulnerable Code Generation: AI may propagate insecure coding practices.
- Potential Exploits: RCE possible through manipulation of Python pickling.
Defending Against AI-Driven Attacks
- TTP Awareness: Recognize that AI-driven attacks build on classic TTPs.
- Standard Tools Apply: Many existing security tools and practices remain effective.
- ATT&CK Framework: Use to map and mitigate AI-related techniques.
- Standard Security Practices: Protect AI systems as any other critical components.
CISO Actionable Checklist
- AI Acceptable Use Policy (AUP): Define acceptable AI usage.
- Discovery Exercise: Scope out AI assets and usage.
- AI Allowlist: Identify and enforce allowed AI tools.
- Browser Controls: Manage browser extensions and third-party AI tools.
- Risk Register: Track and manage identified AI risks.
Key Opportunity
- AI for Defense: Use AI tools to counter AI-driven threats.
Notes
Risks Arising From Use of AI
The time frame for adopting AI is much shorter than adoption of other technologies. It comes with risks and opportunities.
- Shadow IT is exploding; employees are using gen AI without consistent guardrails
- Increasing attack surface in the cloud
- AI Risk - Governance policies need to be established defining what is acceptable and what is prohibited
- Data Exposure - Lack of visibility into what data is piped in and out of LLMs
- Least Functionality - Every additional tool presents another attack surface
- AI could be used to more rapidly perform recon/data analysis
- AI Net New Attacks - Phishing campaigns leveraging AI to identify system weaknesses?
Threats Arising From Use of AI
- AI can be used to accelerate and scale attacks
- AI can find new attack vectors (e.g. finds an issue in a common library)
- Voice deepfakes, phishing that doesn't have bad grammar
- When given access to desktops, AI can execute many simultaneous attacks (via accelerated data collection)
- Precision attacks vs. crimes of opportunity
- There is a lot of bad code on the internet (PHP) and an AI may give vulnerable code because of this
- OWASP Top Ten for LLMs
- JPGs have a footer where you can put malicious code
- Asking AI to code this can work if jailbreaking outside the restrictions
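The footer trick works because most image parsers stop at the JPEG End-Of-Image marker (FF D9) and silently ignore trailing bytes. A minimal, benign sketch of the mechanics (the byte strings are stand-ins for a real image, and `append_payload`/`extract_payload` are hypothetical helpers, not a working exploit):

```python
# Most JPEG parsers stop reading at the End-Of-Image marker (FF D9),
# so bytes appended after it are carried along but never rendered.
EOI = b"\xff\xd9"

def append_payload(jpeg_bytes: bytes, payload: bytes) -> bytes:
    assert jpeg_bytes.endswith(EOI), "not a complete JPEG stream"
    return jpeg_bytes + payload

def extract_payload(data: bytes) -> bytes:
    # Everything after the last EOI marker is the smuggled content.
    return data[data.rindex(EOI) + len(EOI):]

stub = b"\xff\xd8...image data...\xff\xd9"   # stand-in for real JPEG bytes
tampered = append_payload(stub, b"smuggled-data")
assert extract_payload(tampered) == b"smuggled-data"
```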
- LLMs are usually Python on the back end, stitched together with pickling
- pickle has many available exploits to achieve RCE
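The pickle point is easy to demonstrate: deserializing attacker-controlled bytes runs code immediately, because `pickle.loads` invokes whatever callable `__reduce__` returns. A deliberately benign sketch (the payload only appends to a list; a real exploit would reference something like `os.system`):

```python
import pickle

executed = []

def record(msg):
    executed.append(msg)

class Payload:
    def __reduce__(self):
        # pickle stores "call record('...') on load"; a real attacker
        # would return a dangerous callable here instead.
        return (record, ("code ran during unpickling",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # the callable runs before any object is even used
assert executed == ["code ran during unpickling"]
```

This is why untrusted pickles should never be loaded; safer serialization formats (JSON, protobuf) carry data only, not code.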
Protecting Against AI-powered attacks
- Most AI attacks are building on classic attacks, but this could change
- Remember TTPs, as they will still apply
- Tools as well, mostly the same as classic attacks
- Use an ATT&CK Navigator to fingerprint techniques
- Apply mitigations
- Protecting AI is like protecting other components
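Once techniques are fingerprinted, mapping them to mitigations can be a simple lookup. A toy sketch (the technique and mitigation IDs are real ATT&CK entries, but this mapping dict is illustrative, not an official export):

```python
# Toy coverage check: which mitigations address the techniques we fingerprinted?
observed = ["T1566", "T1078"]  # Phishing, Valid Accounts

mitigations = {
    "T1566": ["M1017 User Training", "M1021 Restrict Web-Based Content"],
    "T1078": ["M1032 Multi-factor Authentication",
              "M1026 Privileged Account Management"],
}

coverage = {t: mitigations.get(t, ["NO MITIGATION MAPPED"]) for t in observed}
for technique, mits in coverage.items():
    print(technique, "->", ", ".join(mits))
```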
CISO Checklist
- Develop an AI AUP
- Perform a discovery exercise to scope AI
- Identify an allowlist for specific AI tools AND ENFORCE IT
- Control browser extensions and tooling
- Develop and manage a register of AI risks
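The allowlist item can be enforced mechanically, e.g. at an egress proxy or DNS filter. A minimal sketch (the domain names are placeholders, not recommendations):

```python
# Toy egress check for an AI-tool allowlist: permit only approved AI
# domains (and their subdomains); everything else gets blocked.
ALLOWED_AI_DOMAINS = {"approved-llm.example.com", "copilot.example.net"}

def is_request_allowed(host: str) -> bool:
    host = host.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in ALLOWED_AI_DOMAINS)

assert is_request_allowed("approved-llm.example.com")
assert is_request_allowed("api.copilot.example.net")      # subdomain of an allowed tool
assert not is_request_allowed("random-chatbot.example.org")
```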
AI can solve more problems than it creates: use AI tooling to combat AI threats.
Ransomware Threats in the Age of AI
- Shane Stansbury -
- Billy M. Evans, Jr. - COO @ Kivu Consulting
- Matt - Lawyer/Partner @ Kistange
TL;DR
Provided by ChatGPT
Offensive Use of AI
- Deepfake Ease: Quick, high-quality deepfake creation is feasible with minimal input.
- Ransomware as a Service: AI streamlines attack stages, allowing small teams to execute complex attacks.
- Small Business Targeting: Smaller organizations are prime targets, with larger ones targeted for high-profile attacks.
- Enhanced Spearphishing: AI generates highly believable emails, making traditional detection ineffective.
- Recon Capabilities: AI performs in-depth company research, aiding adversaries in negotiation tactics.
- Tip: Regularly check what LLMs can access about your company.
- Sophisticated Phishing: Multi-layered attacks, e.g., AI-assisted calls following fake MFA prompts.
Defensive Challenges and Considerations
- Resource Gap: Small businesses lack AI defense capabilities similar to large corporations.
- Incident Response (IR): AI in IR raises questions around control, precision, and oversight.
- AI Acceptable Use Policy (AUP): Critical for preventing accidental insider threats.
- User Training: AI can support tailored cybersecurity training programs.
Post-Incident Review
- Policy Documentation: Ensure well-documented AI and security policies, demonstrating consistent implementation.
Notes
Deepfakes
Recorded 3 mins of video, AI processed for 5 minutes, provided a text prompt. Got an insanely good deepfake video.
Offense
Ransomware is a business model. There are teams handling the different stages of the attack, and some of these stages need fewer resources and fewer individuals.
Usually affects smaller businesses. The big ones in the news are the outliers, but are more targeted. AI will run large scans to find opportunities for initial access.
Spearphishing - AI is making it really difficult to detect. We used to be able to tell with the naked eye, but that is no longer viable. AI systems are now needed to detect malicious AI-generated emails, especially since some people also use AI to generate legitimate emails.
AI can help with threat actor research on your company (revenue, insurance, etc), which can influence their negotiation strategy. We now have a third party at the negotiation table.
💡 See what LLMs know about your company
AI calls for harassment purposes.
Fake MFA prompt email/notification > soon after, an AI phone call to the user about suspicious activity > leads to compromise
Defense
One huge issue is that small businesses do not have the resources to stand up AI defenses like larger companies would. If they are having trouble with MFA, how do they implement an AI safeguard?
There is a lot of thought that goes into IR. What effect would AI have if entrusted with IR responsibilities?
AUP - Preventing employees from using AI at will, or accidentally becoming an insider threat
AI to train users on a more personal level.
When investigating a company after an incident, they look at what your policies are, and how you implemented them.
AI Cybersecurity Enablement Techniques
- Brinnae Bent - Duke Grad & Professor
- Dr. Michael Roman - MAXISIQ Executive Director
- Michael Reiter - CompSci Professor
- Vinay K. Bansal - CTO @ Cisco
TL;DR
Provided by ChatGPT
AI Implementation in the Organization
- Enhanced Output: Leverage AI within existing tools for improved productivity.
- Workflow Automation: Identify and automate safe, effective workflows using AI.
- In-house LLM Development: Considerations include prompt injection, data leakage, and data integrity (avoiding poisoning).
- Privileged Access: Limit model access to sensitive data during training and usage.
- Safe LLM Development: Train developers in secure LLM design, similar to secure coding practices.
Defensive AI Capabilities
- AI for Advanced Threat Detection: Defensive AI is essential as adversarial AI improves attack sophistication.
- Automated Vulnerability Detection: AI agents can identify misconfigurations or weaknesses.
- Honeywords: Use AI to create realistic decoy credentials as a breach detection tactic.
Generative Adversarial Networks (GANs)
- GANs in Security: AI models train by attempting to deceive each other, improving detection and prevention capabilities.
Notes
Using AI In the Org
- Understanding how to leverage AI to make outputs better for employees
- Using the AI that is built into tools we use
- Finding out what workflows/frameworks can be automated safely and effectively with AI
- Developing your own AI/LLM for internal use
- Need to worry about data leakage from prompt injection, jailbreaking, etc
- Provenance of the data used to train these models, avoiding poisoning
- Does your model have privileged access to the data it is being trained on?
- The same way we taught developers how to write safe web apps, we need to teach developers how to write safe LLMs
- AI will make attacks more sophisticated, so defense needs to also use AI to become more sophisticated
- Agents controlled by AI could find vulnerabilities/misconfigurations for you
- Honeyword - fake credentials that, if ever used, would indicate a breach
- Use AI to generate these artifacts so they are more convincing than human-created artifacts
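A minimal sketch of the honeyword mechanic (here the decoys are random strings; the talk's point is that AI can generate decoys that look far more plausible than these):

```python
import secrets
import string

def make_honeywords(real_password: str, n: int = 9) -> list[str]:
    # Store the real password among n random decoys ("honeywords").
    alphabet = string.ascii_letters + string.digits
    decoys = {"".join(secrets.choice(alphabet) for _ in range(len(real_password)))
              for _ in range(n)}
    decoys.discard(real_password)
    return sorted(decoys | {real_password})

def is_breach_indicator(attempt: str, real_password: str, stored: list[str]) -> bool:
    # A decoy matching means stolen credentials are being replayed.
    return attempt in stored and attempt != real_password
```

Any login attempt using a honeyword is a high-confidence signal that the credential store was exfiltrated.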
GAN - A generative model and a discriminator that train each other; the generator tries to fool the discriminator.
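Formally, this is the standard GAN minimax objective: the discriminator $D$ maximizes its ability to tell real data from generated samples, while the generator $G$ minimizes it:

```latex
\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
            + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

At equilibrium the generator's samples become indistinguishable from the real data, which is why GAN-style training can sharpen both detection and evasion capabilities.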
Cyber Risk Mitigation in the Insurance Industry
- Heather Osborne - Director @ NetDiligence
- Marc Bleicher - CTO @ Surefire Cyber
- Max Perkins - COO @ Spektrum Labs
- Jeffrey White - SVP @ RT ProExec
TL;DR
Provided by ChatGPT
Practical AI Threats for Businesses
- Ransomware Impact: Small to medium businesses are targeted in 98% of ransomware claims.
- Insurance and Compliance: Insurers may reject claims if agreed-upon controls are not in place during underwriting.
Key Threat Categories
- Current: Phishing, social engineering (including deepfakes), extortion, and impersonations.
- Emerging: AI-driven malware, automated target identification, expanded attack surface, data exfiltration, and analysis.
- Future: Data manipulation, advanced vulnerability research, zero-day exploit generation, and end-to-end attack automation.
Capital and Cybersecurity Investment
- IT Investment ROI: Improved IT stack and security controls increase insurance appeal and ROI.
- Insurance Capacity: Limited cyber insurance capital is expected to face higher demand in the coming decade.
Notes
Practical AI Threats
98% of ransomware claims come from small and medium businesses.
At the time of underwriting cyber insurance, agreed upon controls should be implemented or else a high claim may not be paid out.
Current
- Phishing
- Social Engineering (deepfakes)
- Extortion schemes
- Impersonations
Emerging
- Malware Development
- Target Identification
- Scaled Attack Surface
- Data Exfil and Analysis
Future
- Data Manipulation
- Vulnerability Research
- Zero-day Exploit Generation
- Full Attack Chain Automation
Accessing Capital
Investment in your own IT stack is critical, but what is the ROI?
Better insurance = Better ROI
There is a finite amount of capital in the insurance industry, and only a certain amount is allocated for cyber insurance, based on the number of buyers. Buyers will increase in the next 10 years.
Keynote Talk: Transformative Effects of AI on NATO
- Dr. Manfred Boudreaux-Dehmer - Inaugural CIO @ NATO
TL;DR
Provided by ChatGPT
NATO Overview
- 75 Years, 32 Nations: Protects members' freedom/security through political and military efforts.
- Cyber Domain: Integrated as a core defense area alongside land, air, sea, and space.
Evolving Global Cyber Threats
- Russia: 800% cyberattack increase post-Ukraine invasion; disinformation tactics.
- China: Largest threat, with 40 known APT groups.
- North Korea: Cybercrime as a sanctions workaround.
- Iran & Hezbollah: Rising threat profile in cyber activities.
Strategic Cyber Defense Approaches
- Damage Attacker: Options include offensive cyber operations, public attribution, and sanctions.
- Deny Benefits to Attacker:
- Risk Awareness: Protect critical assets ("crown jewels").
- Environment Knowledge: Automated asset and patch management.
- Advanced Detection: AI-powered anomaly detection and incident analysis.
- Infrastructure Resilience: Data tagging, immutable backups, tamper-evident logs, quantum-resistant encryption.
- Zero Trust: Emphasize micro-segmentation and multi-factor authentication.
Infrastructure and Scalability
- Adaptive Security: Cloud-native scaling for DDoS, real-time network layout adjustments.
- Integrated Threat Intel: SOAR, real-time threat data.
- Machine Learning: Anomaly detection, predictive analysis.
- Security Simplification: IaC, service mesh, and cloud security posture management.
- User-Centric Security: Contextual access policies, PET (e.g., differential privacy, homomorphic encryption).
NATO’s AI Use Case
- AI in Monitoring: Deployed mainly for network and user activity monitoring.
Notes
NATO
75 years old, 32 member nations.
Purpose is to guarantee freedom and security of its members through political and military means.
Cyber is embedded in NATO's core domains:
- Land
- Air
- Sea
- Space
- Cyber
The Evolving Threat Picture
800% increase in cyber attacks from Russia immediately after the invasion of Ukraine. Massive disinformation campaigns as part of this.
China is the largest global threat, with 40 known APT groups.
North Korea is also a player. They try to bypass sanctions through cyber crime.
Increase in threats tied to Iran and Hezbollah.
Dynamics in This Ecosystem
Attacker investment and risk = low
Defender investment and risk = high
- Method 1: Inflict Damage on the Attacker
- Offensive cyber operations, options for nations
- Public attribution of attacks to an actor or nation - technically difficult and politically sensitive
- Sanctions
- Method 2: Deny Benefits for Attacker (also ties into preparing for an incident)
- Risk awareness
- Know your "crown jewels"
- Knowledge of your environment
- Automated Asset, Configuration, and Patching Management System
- Information Hyper-Triangulation
- "Put your SOC on steroids" through AI, needs to become more efficient every day.
- Spot anomalies and identify attack vector
- Uncover and learn from anomalous behavior
- Infrastructure Resilience
- Data tagging (level of confidentiality)
- Immutable backups
- Blockchain for tamper-evident logging of critical events
- If you can't trust your logs, who do you trust?
- Advanced encryption (quantum resistant)
- Zero Trust
- More of a mindset rather than a technology
- Micro-segmentation
- Build upon SDN and similar technology
- Authentication between micro-segments
- Strong authentication mechanisms (MFA, biometrics)
- Risk awareness
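The "blockchain for tamper-evident logging" bullet reduces, at its core, to a hash chain: each log entry commits to the hash of the previous one, so any rewrite invalidates every later link. A minimal single-node sketch (no distributed consensus, just the integrity property):

```python
import hashlib
import json

def append_entry(chain: list[dict], event: str) -> None:
    # Each entry commits to the previous entry's hash.
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain: list[dict]) -> bool:
    prev = "0" * 64
    for entry in chain:
        body = {"event": entry["event"], "prev": entry["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, "admin login")
append_entry(log, "config change")
assert verify(log)
log[0]["event"] = "nothing happened"   # tamper with history
assert not verify(log)
```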
Infrastructure
Scalability
- Adaptive security
- Cloud-native auto-scaling for DDoS mitigation
- Adjust and refine network layout to adapt to an incident (without human intervention)
- Threat intelligence integration
- SOAR for scalable IR
- Threat intel for real-time data
- Machine Learning
- Anomaly detection system based on unsupervised learning algorithms
- Predictive analysis
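As a toy illustration of the unsupervised-anomaly-detection idea (real systems learn far richer baselines than this), flag any observation more than k standard deviations from the learned mean:

```python
import statistics

def find_anomalies(values: list[float], k: float = 3.0) -> list[int]:
    # Flag indices whose z-score against the whole series exceeds k.
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # perfectly flat baseline: nothing stands out
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > k]

# 99 ordinary readings plus one obvious outlier at index 99
baseline = [10.0] * 50 + [11.0] * 49 + [500.0]
assert find_anomalies(baseline) == [99]
```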
Simplification
- Security Simplification
- IaC
- Service mesh architecture
- Cloud security posture management
- User-centric security
- Contextual access policies based on behavior analytics
- Privacy-enhancing technologies (PET)
- Differential privacy
- Homomorphic encryption
- AI-generated synthetic data
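Of the PETs listed, differential privacy is the easiest to sketch: answer aggregate queries with Laplace noise calibrated to sensitivity/epsilon so no single record's contribution can be confidently inferred. A toy sketch (the epsilon and sensitivity defaults are illustrative):

```python
import random

def dp_count(true_count: int, epsilon: float = 1.0, sensitivity: float = 1.0) -> float:
    # Laplace mechanism: scale b = sensitivity / epsilon.
    b = sensitivity / epsilon
    # The difference of two iid Exponential(1/b) draws is Laplace(0, b).
    noise = random.expovariate(1 / b) - random.expovariate(1 / b)
    return true_count + noise
```

With a large epsilon the answer is nearly exact; a small epsilon adds more noise and therefore more privacy.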
NATO uses AI primarily for network monitoring and some user monitoring.
AI and US Cyber Policy
- Stefani Jones - Senior Policy Advisor @ CISA
- Hans Nelson - Cyber Policy Advisor @ NATO
- David Hoffman - Senior Lecturing Fellow @ Duke
TL;DR
Provided by ChatGPT
Malware Information Sharing
- MISP: NATO promotes the use of the Malware Information Sharing Platform for enhanced collaboration.
Threat Landscape Overview
- Speed of Attacks: Defenders must act faster than attackers; CrowdStrike's average breakout time is now 62 minutes, down from 150 minutes three years ago.
- Rapid Exploitation: Fastest breakout recorded at 2 minutes and 7 seconds; most security teams cannot respond in under 2 minutes.
Prolific Threat Actors
- Famous Chollima: Utilizes AI for insider behavior analysis, exploiting remote hiring processes for infiltration.
- Scattered Spider: Employs non-standard tactics and LLM-generated scripts to bypass MFA and execute phishing.
- Punk Spider: Specific tactics not detailed.
- Static Kitten: Specific tactics not detailed.
- AI's Role: Faster exploitation of zero-day vulnerabilities; adversaries leverage AI for election disruption through misinformation and deepfakes.
CrowdStrike’s AI-Enabled Defense
- AI-Native Detection: Real-time, ML-driven malware classification for swift threat response.
- AI-Powered Vulnerability Management: ExPRT.AI prioritizes vulnerabilities by risk level.
- Automated Investigations: Charlotte AI generates reports to reduce workload and investigation fatigue.
Emerging Trends in Cyber Threats
- Enhanced Recon: AI improves reconnaissance for attackers.
- Novice Criminal Accessibility: AI lowers barriers to entry for cybercrime.
- AI-Powered Malware: Evasion tactics evolve to avoid detection.
- Improved Deepfakes: Quality of deepfakes continues to advance.
- Automated Vulnerability Discovery: AI accelerates vulnerability identification.
- AI-Enhanced Ransomware: Increased efficiency and personalization in ransom demands.
Recommended Defense Strategies
- Invest in Human Expertise: Focus on training and skill development.
- Enhance Visibility and Detection: Strengthen monitoring and alert systems.
- Increase Adversary Costs: Make cyber attacks less appealing.
- Automate Responses: Leverage automation to keep pace with threats.
- Achieve Comprehensive Control: Strive for full oversight of security operations.
Notes
Most of the talk had cool stuff but I did not take notes.
Malware Information Sharing Platform (MISP) - NATO uses this and wants more people to use it.
The AI-Fueled Threat Landscape
- Tom Etheridge - Chief Global Services Officer @ CrowdStrike
Invest in training to understand the tools, TTPs, intel, etc., so defenders can take advantage of AI.
Threat Landscape
AI is a double-edged sword. Defenders need to act as fast as, if not faster than, the attackers.
Breakout time - The time it takes from an initial entry for a threat actor to the time they move laterally.
CrowdStrike's breakout time for this year is 62 minutes. 3 years ago it was about 150 minutes. Threat actors are getting better and faster.
- They weaponize your tools and accounts, using valid accounts and tooling
- Fastest breakout time: 2 min 7 sec
- Nearly all security teams are not equipped to respond in less than 2 minutes
Today's landscape is more complex and more dangerous than ever before.
Four groups that are most prolific ATM:
- Famous Chollima
- Uses AI to analyze insider behavior and identify potential weak links (stealing identities and getting jobs to fund weapons trafficking for the DPRK)
- Malicious actors are taking advantage of remote interview processes to get jobs at companies
- Right before the job starts, they say they moved and ask the company to ship the laptop (to a drop location)
- CrowdStrike had to reach out to 150 companies about this
- Scattered Spider
- They don't follow typical playbooks; they change tactics (hence "scattered")
- LLM-Generated Scripts
- To bypass MFA (remove EntraID MFA) and send phishing attacks
- Punk Spider
- Static Kitten
The time between a zero-day becoming known and its exploitation has shrunk with the use of AI.
Adversaries are using AI to disrupt global elections. Used for misinformation campaigns, social engineering, deepfakes.
CrowdStrike's AI-Enabled Defense
- AI-Native Detection
- ML-driven malware classification from the sensor to the cloud
- Analyzes files and behavior in real time to stop threats faster
- AI-Powered Vulnerability Management
- Prioritize vulnerabilities with ExPRT.AI to focus on the highest risk
- Automated Investigation with Charlotte AI
- Write reports to reduce investigation fatigue
Emerging Trends
- AI improves recon capabilities
- AI lowers the barrier for novice criminals
- AI-powered malware to evade detection in real time
- Deepfakes will continue to get better
- AI automates vuln discovery
- AI-enhanced ransomware for efficiency and personalized ransoms
Strategies to Stop the Breach
- Invest in human expertise
- Prioritize visibility and detection
- Increase adversary cost, not yours
- Automate at machine speed
- Achieve full control
Keynote Talk
- Bryan Palma - CEO @ Trellix
TL;DR
Provided by ChatGPT
Generative AI Overview
- Beyond ChatGPT: Encompasses text, video, sound, design, flows, and conversational outputs.
LLM Attack Vectors
- Key Risks:
- Hallucinations
- Sensitive data leakage
- Data poisoning
- Phishing
- Dark LLMs (e.g., XXXGPT, DarkBARD AI)
- Deepfakes
CISO Insights
- Sustainability Issues: 79% find keeping up with regulatory changes unsustainable.
- Board Reporting: 49% report to the board weekly (15% daily), impacting focus on defense.
- CISO Future Uncertainty: 49% doubt the future of the CISO role amid expanding responsibilities.
- AI Optimism: 91% are excited about AI's potential benefits.
AI Initiatives in the SOC (Trellix Wise)
- No Alert Left Behind: Comprehensive alert management.
- Automated SOC Workflows: Streamlined investigation and response processes.
- Analyst Efficiency: Aiming for 5x improvement.
- MTTD/MTTR Reduction: Targeting a 50% decrease in mean time to detect and respond.
Future Challenges
- Outer Space Security: Need for preparedness in securing outer space.
- Machine Conflict: Potential for GenAI to outsmart humans, leading to "good guy machine" vs. "bad guy machine" scenarios.
Notes
Generative AI Primer
GenAI is more than just ChatGPT.
- Text
- Video
- Sound
- Design
- Flows
- Conversational
LLM Attack Vectors
- Hallucinations
- Sensitive Data Leakage
- Data Poisoning
- Phishing
- Dark LLMs
- XXXGPT
- DarkBARD AI
- Deepfakes
Shadow Syndicates - The convergence between the hacktivists, criminal gangs, and nation states.
Cyber Titans - Fusion between a titan of industry and a cyber specialist
- With the rise of "shadow syndicates," we need "cyber titans" to respond
Mind of the CISO (Study)
- 79% say the time and effort to keep up with regulatory change is not sustainable
- 49% report to the board at least on a weekly basis (15% daily)
- Concerning due to the fact that CISOs need to focus on defending the company
- 49% do not see a future for the CISO role due to the ever-expanding responsibilities
- 91% are excited about the prospect and opportunities that AI can provide
Trellix Wise
- No alert left behind
- Automate SOC investigation and response workflows
- Improve analyst efficiency by 5x
- Reduce MTTD and MTTR by 50%
The future is complicated:
- Outer space affects every company, and we are not prepared to secure outer space
- Machine-on-machine: Good guy machine vs. bad guy machine (when GenAI becomes smarter than humans)
AI Challenges in Industrial Cybersecurity
- Jonathan Tubb - Adjunct Professor @ Duke
- Dustin Pogue - Adjunct Professor @ Duke
- Pia Capra - Director of OT @ Booz Allen Hamilton
TL;DR
Provided by ChatGPT
OT Definition
- Operational Technology (OT): Systems affecting physical processes, focusing on cyber-to-kinetic impacts.
Key Characteristics
- Uptime is Paramount: Reliability is critical.
- Legacy Equipment: Many systems are outdated and insecure.
- Data-Driven Connectivity: Increasing reliance on data for decision-making.
- Regulatory Compliance: Must adhere to Good Practice standards.
Common Applications
- Industries: Oil & gas, electric power, manufacturing, water utilities, commercial facilities, and defense.
IT vs. OT Dynamics
- Conflict: IT focuses on data, while OT systems are designed for process control.
- Static Nature: OT systems excel in anomaly detection but require careful change management.
AI in OT
- Challenges: Training AI is complex due to human involvement.
Examples of AI Use
- Pepsi Co.: AI for quality control in Cheetos.
- Auto Manufacturing: AI vision ensures precision in stamping.
- LLMs for Troubleshooting: AI assists in troubleshooting OT systems.
Notes
What is OT?
Operational Technology (OT) refers to the systems affecting a physical process.
- Cyber-to-Kinetic
- Uptime is Paramount
- Legacy & "Black Box" Equipment Pervasive
- Insecurity By Design
- Data Driven Connectivity
- GxP/Regulatory Requirements
Present in:
- Oil & Gas Production/Distribution/Refining
- Electric Power Generation/Transmission/Distribution
- Manufacturing/Logistics/Distribution Centers
- Water/Wastewater/Natural Gas/Public Utilities
- Commercial Facilities/Data Centers
- Department of Defense/Weapons Platforms
IT vs. OT Conflict - IT focuses on data and systems; OT was originally designed not to be computers. OT is built to run a process.
OT is very static, so anomaly detection works very well. However, you can't simply take an OT device offline or act on it directly.
Training AI for OT use is hard because of the human element of physical OT maintenance.
💡 Pepsi Co. is using AI for quality control for Cheetos.
💡 Auto manufacturing can use AI vision to ensure the metal stamping was cut correctly.
LLMs could read a manual and become an expert to help troubleshoot.
AI Enablement of Third-Party Risk Management
- Sachin Bansal - President @ SecurityScorecard
- Ramana Chamarty - Chief Security Architect @ Fidelity
TL;DR
Provided by ChatGPT
TPRM Definition
- Third-Party Risk Management (TPRM): The process of identifying, assessing, and mitigating risks associated with vendors, customers, and the supply chain.
Key Challenges
- Vendor Proliferation: Increasing number of vendors leads to information overload and a rise in the volume of questions directed at them.
AI in TPRM
- Information Processing: AI can transform overwhelming data into actionable insights for decision-making.
- Efficiency Improvement: AI tools can reduce vendor analysis times and help interpret responses, especially in cases of misleading information.
- Real-Time Monitoring: Use AI for continuous oversight of vendor vulnerabilities and breaches.
Considerations
- AI Usage: Assess whether your organization utilizes AI tooling for TPRM (e.g., LLMs querying vendor data).
- Vendor Responses: Consider if vendors will employ AI to answer questions effectively.
Approach
- Attend, Assess, Act: A structured framework for managing third-party risks.
Notes
What is Third-Party Risk Management (TPRM)
What organizations do when identifying, assessing, and mitigating the risk associated with vendors, customers, etc.
💡 Also related to supply chain
The number of vendors is skyrocketing, and so is the number of questions we ask them, leading to information overload. AI can help turn this information into answers that influence decisions.
Does your organization have AI tooling? How is it being used? (LLM fed all SRAs for vendors and can just query it)
💡 LinkedIn, by default, is collecting data on you for their GenAI model and you have to go to your settings to turn it off
AI could help bring vendor analysis times down. But it needs to be able to interpret vendor responses/attestations, especially when there is BS involved.
Will vendors use AI to answer questions?
Real-time monitoring of vendor vulnerabilities/breaches.
Attend, Assess, Act.
AI and Cybersecurity Privacy
- Jane Horvath - Co-Chair for Privacy, Cybersecurity, and Data Innovation @ Gibson, Dunn & Crutcher
- Rupal Kharod - Leader @ Recorded Future
Notes
There was an RFC on banning open-source AI; it was decided that this was not a good idea.
In the absence of laws and regulations, we need governance processes and policies.
When developing AI models, look at the possible risks around harm that could be done. Is it making decisions that could impact someone's life?
AI Through the Eyes of the CISO
- Brian Reed - Senior Director of Cyber Strategy @ Proofpoint
- Shanika Norville - CISO @ Marken
- Jen Anthony - Vice President @ Think|Stack
- Nick Tripp - Interim CISO @ Duke
TL;DR
Provided by ChatGPT
Key Points
- Purpose of AI: Focus on solving specific problems rather than using AI for its own sake.
Internal Use Cases
- Efficiency: AI can enhance the response speed of higher-level engineers by providing proof of concept tools.
External Use Cases
- Data Analysis: AI can perform predictive analysis on sensitive data to identify potential financial issues among users or clients.
Threat Detection
- Anomaly Detection: Use AI for detecting threats, but ensure models are continuously retrained to avoid false negatives and positives.
- SecOps Transformation: AI changes SecOps by enhancing threat detection and vulnerability identification, requiring ongoing model training and validation.
Deepfake Detection
- Need for Solutions: Explore AI’s potential for developing methods to detect deepfakes effectively.
Notes
Lots of great reports & research available at Proofpoint for viewing.
Conversations typically start with "our product uses AI and that's why it is better."
How AI is Impacting Our Cybersecurity Functions
We don't want to use AI just because it's cool. We need to ask what problems we are trying to solve:
- Internally - Efficiency where manpower is lacking
- Proof of concept to use a tool to allow higher level engineers to respond to a threat faster
- External - Lots of data, much of it sensitive. Can do predictive analysis on this data to identify any users/clients who may be on the cusp of a financial issue.
Use AI for threat/anomaly detection. Be careful, some people go wrong because they do not continuously retrain the model. False negatives are just as much of a problem as false positives.
Does AI change the way we do SecOps? Yes, use it for threat detection and identifying vulnerabilities. But need to constantly retrain and program these models and CHECK the results.
Deepfakes need to be detectable some way, maybe AI can be used for this.
💡 Stay curious
Cyber Threats and Trends
- Jessica Nye - Special Agent
- [email protected]
- 919-466-1379
TL;DR
Provided by ChatGPT
Key Cyber Threats for 2024
- Business Email Compromise (BEC)
- SIM Swapping
- Ransomware
- Third-Party Exposure (Supply Chain)
- Virtual Currency Theft/Scams
- Advanced Persistent Threats (APTs)
- Artificial Intelligence (AI) Threats
- Poor Cyber Hygiene
Cyber Strategy
- Risk and Consequence: Imposing risks on cyber adversaries to deter attacks.
Reporting
- IC3: Report cyber incidents to the Internet Crime Complaint Center (IC3).
Insider Threats
- Access Limitation: Restrict access to sensitive information to mitigate risks.
AI Concerns
- Bias: Potential bias in recognition and predictive analysis algorithms.
- Data Privacy: Risks of model manipulation and fake media creation.
- Law Enforcement Impact: AI-generated media can waste law enforcement resources.
- Malware Enhancement: AI can improve malware effectiveness and spearphishing tactics, including sophisticated attachment naming.
Notes
FBI Priorities
- Counter-terrorism
- Counterintelligence
- Cyber
- Criminal
Collaboration between hackers and violent criminals to break into people's houses and force them to transfer crypto.
FBI Cyber Strategy - To impose risk and consequences on cyber adversaries
Top Cyber Threats for 2024:
- BEC
- SIM Swapping
- Ransomware
- 3rd Party Exposure (Supply Chain)
- Virtual Currency Theft/Scams
- Advanced Persistent Threats
- AI
- Poor Cyber Hygiene
Go over to IC3 to report to law enforcement.
Insider Threats - Limit access to sensitive information.
AI Concerns:
- Potential for Bias in Recognition or Predictive Analysis algorithms
- Data Privacy
- Model manipulation?
- Creating fake media
- Using AI generated media for malicious purposes
- Consuming law enforcement's time
- Enhancing malware and becoming more effective coders
- Spearphishing sophistication
- Attachment nomenclature
Conference Overall Key Points
Provided by ChatGPT
- Focus on Problem-Solving: Utilize AI to address specific cybersecurity challenges, improving efficiency and response times.
- Continuous Model Training: Regularly retrain AI models to minimize false negatives and positives in threat detection.
- Evolving Threat Landscape: Be aware of top cyber threats, including Business Email Compromise (BEC), ransomware, and third-party exposure.
- Collaboration Among Adversaries: Recognize the trend of collaboration between hackers and violent criminals, particularly in cryptocurrency theft.
- AI in Cybercrime: Understand how AI can enhance malware, create sophisticated phishing tactics, and produce deepfakes for malicious purposes.
- Insider Threat Mitigation: Implement strict access controls to sensitive information to limit insider threats.
- Data Privacy Concerns: Address potential biases in AI algorithms and risks associated with data privacy and model manipulation.
- Reporting Mechanisms: Encourage the use of platforms like the Internet Crime Complaint Center (IC3) for reporting cyber incidents.
- Cyber Hygiene: Promote good cyber hygiene practices to reduce vulnerabilities within organizations.
- AI as a Double-Edged Sword: Acknowledge that while AI can improve cybersecurity defenses, it also poses new risks and challenges that must be managed effectively.