User Manual - osok/hawkeye GitHub Wiki
Hidden Application Weaknesses & Key Entry-point Yielding Evaluator
- Introduction
- Installation
- Getting Started
- Command Reference
- Configuration
- Scanning Strategies
- Understanding Results
- Advanced Usage
- Best Practices
- Troubleshooting
- Examples
HawkEye is a specialized security reconnaissance tool designed to identify and assess Model Context Protocol (MCP) server deployments within network infrastructure. This manual provides comprehensive guidance for security professionals, system administrators, and compliance teams.
This major update introduces AI-powered dynamic threat analysis capabilities that revolutionize MCP security assessment:
- 🤖 Multi-Provider AI Integration - OpenAI GPT-4, Anthropic Claude, and Local LLM support
- 🔍 Dynamic Analysis - Real-time threat assessment for any MCP tool, not just hardcoded scenarios
- ⚡ Advanced Detection - 7 detection methods including Python-based MCP introspection
- 📊 Professional Reporting - Executive dashboards, compliance mapping, and detailed visualizations
- 🛡️ Enterprise Features - Cost optimization, intelligent caching, and scalable analysis
- 🌐 CIDR Support - Network-wide scanning and analysis capabilities
- Discovers MCP Servers: Identifies Node.js/NPX-based MCP implementations using 7 detection methods
- AI-Powered Analysis: Dynamic threat assessment using OpenAI, Anthropic, or Local LLMs
- Assesses Security Posture: Evaluates configurations, capabilities, and identifies vulnerabilities
- Generates Attack Vectors: AI-driven attack chain detection and exploitation scenarios
- Risk Prioritization: CVSS-based scoring with business impact assessment
- Comprehensive Reporting: Multi-format reports (HTML, JSON, CSV, XML) with visualizations
- Maintains Compliance: Operates within ethical and legal boundaries with audit trails
- Security analysts and penetration testers - AI-powered dynamic threat analysis
- System administrators managing MCP deployments - Comprehensive risk assessment
- Compliance officers conducting security audits - Automated compliance reporting
- DevOps teams implementing security controls - Continuous security monitoring
- AI/ML engineers assessing MCP tool security - Deep capability analysis
- Threat researchers studying MCP attack vectors - Advanced attack chain detection
- Security consultants - Professional-grade reporting and analysis
- Enterprise security teams - Scalable multi-target assessment
- Operating System: Linux, macOS, or Windows
- Python: Version 3.8 or higher
- Memory: 512MB RAM (2GB recommended for large scans, 4GB for AI analysis)
- Storage: 200MB for installation, additional space for logs and results
- Network: Access to target infrastructure and internet for AI provider APIs
- Permissions: Standard user privileges (no root required)
- Operating System: Linux (Ubuntu 20.04+ or CentOS 8+)
- Python: Version 3.9 or higher
- Memory: 8GB RAM (for AI analysis with large datasets)
- Storage: 2GB available space
- CPU: Multi-core processor for optimal parallel processing
- Network: Stable internet connection for AI provider APIs
- API Keys: OpenAI, Anthropic, or Local LLM endpoint access
git clone https://github.com/yourusername/hawkeye.git
cd hawkeye
# Linux/macOS
python3 -m venv venv
source venv/bin/activate
# Windows
python -m venv venv
venv\Scripts\activate
pip install -r requirements.txt
python application.py --help
# Copy the environment template
cp env.example .env
# Edit .env with your API keys
# AI Provider Configuration
OPENAI_API_KEY=sk-proj-your-openai-key-here
ANTHROPIC_API_KEY=sk-ant-api03-your-anthropic-key-here
# Optional: Custom model configurations
OPENAI_MODEL=gpt-4
ANTHROPIC_MODEL=claude-3-sonnet-20240229
# Test AI configuration
python demo_ai_threat_analysis.py
# Build the container
docker build -t hawkeye .
# Run HawkEye
docker run -it hawkeye scan --target 192.168.1.0/24
Start with comprehensive MCP detection of a single IP address:
# Basic comprehensive detection
python application.py detect comprehensive --target 192.168.1.100
# With AI-powered threat analysis (recommended)
python application.py detect comprehensive --target 192.168.1.100 -o detection.json
python application.py analyze-threats -i detection.json -f html -o report.html
For analyzing your local system:
# Detect local MCP servers
python application.py detect local -o local_results.json
# Analyze with AI
python application.py analyze-threats -i local_results.json -f html -o local_threats.html
HawkEye provides real-time feedback during detection and analysis:
🦅 HawkEye v2.0 - MCP Security Reconnaissance
[INFO] Starting comprehensive detection of target: 192.168.1.100
[INFO] Detection methods: Process enumeration, config discovery, MCP introspection
[PROGRESS] ████████████████████ 100% (4/4 detection phases)
[FOUND] 192.168.1.100:3000 - MCP Server (Node.js)
[INTROSPECTION] Successfully connected to MCP server, discovered 12 tools
[RISK] Medium risk: Default configuration detected
[INFO] Detection completed in 45.2 seconds
[INFO] Results saved to: detection_results.json
🤖 AI Threat Analysis - Processing detection results
[INFO] Analyzing 3 MCP servers with 12 total tools
[AI] OpenAI GPT-4: Analyzing tool capabilities...
[AI] Generated 8 attack vectors, 4 mitigation strategies
[COST] Analysis cost: $0.24 (within budget)
[INFO] AI analysis completed in 23.7 seconds
[INFO] Threat report saved to: threat_analysis.html
- Plan Your Scan: Define target scope and objectives
- Execute Detection: Run HawkEye detection with appropriate parameters
- AI Threat Analysis: Process results through AI-powered analysis (optional but recommended)
- Analyze Results: Review findings, risk assessments, and AI-generated insights
- Generate Reports: Create comprehensive documentation for stakeholders
- Take Action: Implement remediation recommendations
# Step 1: Detect MCP servers (supports CIDR ranges)
python application.py detect comprehensive -t 192.168.1.0/24 -o detection.json
# Step 2: Analyze threats using AI
python application.py analyze-threats -i detection.json -f html -o threat_report.html
# Step 3: Review the comprehensive threat analysis report
# The HTML report includes attack vectors, risk assessments, and mitigation strategies
HawkEye now includes sophisticated AI-powered threat analysis capabilities that provide dynamic, context-aware security assessment of MCP tools and servers.
The AI threat analysis system uses advanced language models to:
- Analyze MCP Tool Capabilities: Dynamically assess the security implications of discovered MCP tools
- Generate Attack Vectors: Identify potential attack paths and exploitation scenarios
- Assess Risk Context: Consider deployment environment, security posture, and compliance requirements
- Provide Mitigation Strategies: Recommend specific security controls and remediation steps
- Learn from Patterns: Build threat intelligence from previous analyses
- OpenAI Integration: GPT-4 and GPT-3.5 models for threat analysis
- Anthropic Integration: Claude models for alternative AI perspectives
- Local LLM Support: Privacy-focused analysis using local language models
- Intelligent Failover: Automatic provider switching based on availability and performance
- Capability Categorization: Automatic classification of MCP tools by risk category
- Attack Chain Detection: Identification of multi-tool attack scenarios
- Context-Aware Assessment: Environment-specific threat modeling
- Confidence Scoring: Dynamic confidence assessment with 9-factor analysis
- Cost Optimization: Intelligent AI usage optimization to minimize costs
- Learning System: Pattern recognition from historical analyses
- Similarity Matching: Cost-effective analysis using similar tool patterns
- Threat Pattern Discovery: Automatic identification of common attack patterns
- Performance Optimization: Caching and optimization for large-scale analysis
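The similarity matching mentioned above can be pictured as a set-overlap check: if a newly discovered tool's capability categories closely match a tool analyzed earlier, the prior result can be reused instead of paying for a fresh AI call. The sketch below illustrates the idea with Jaccard similarity; the threshold and field names are illustrative assumptions, not HawkEye's actual internals.

```python
# Illustrative sketch of similarity-based analysis reuse (not HawkEye's API).
def jaccard(a, b):
    """Jaccard similarity of two category collections (1.0 = identical)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def find_similar(categories, prior_analyses, threshold=0.8):
    """Return the best prior analysis whose categories overlap enough, else None."""
    best, best_score = None, threshold
    for prior in prior_analyses:
        score = jaccard(categories, prior["categories"])
        if score >= best_score:
            best, best_score = prior, score
    return best
```

A cache hit here avoids one full AI analysis, which is where most of the cost savings come from.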
To use AI-powered threat analysis, you'll need API keys for one or more AI providers:
- OpenAI API Key (recommended)
  - Sign up at https://platform.openai.com/
  - Generate an API key with appropriate credits
- Anthropic API Key (optional)
  - Sign up at https://console.anthropic.com/
  - Generate an API key for Claude models
- Copy the environment template:
  cp env.example .env
- Edit the .env file with your API keys:
  # AI Provider Configuration
  OPENAI_API_KEY=sk-proj-your-openai-key-here
  ANTHROPIC_API_KEY=sk-ant-api03-your-anthropic-key-here
  # Optional: Custom model configurations
  OPENAI_MODEL=gpt-4
  ANTHROPIC_MODEL=claude-3-sonnet-20240229
- Verify configuration:
  python demo_ai_threat_analysis.py
- Run the demonstration script:
  python demo_ai_threat_analysis.py
This script demonstrates:
- Individual MCP server analysis
- Batch processing of multiple servers
- Rule-based fallback when AI providers are unavailable
- Attack chain detection across tools
- Cost optimization strategies
- Integrate with Detection Pipeline
  # Comprehensive detection with AI analysis
  python application.py detect comprehensive --target 192.168.1.100 \
    --enable-risk-assessment
- Batch Analysis of Multiple Servers
  The AI system can efficiently analyze multiple MCP servers:
- Intelligent caching reduces redundant analyses
- Similarity matching optimizes costs
- Parallel processing improves performance
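The batch-analysis behavior described above (skip cached tools, analyze the rest in parallel) can be sketched as follows. This is a minimal illustration, assuming a caller-supplied `analyze_tool` function; the names are hypothetical, not HawkEye's actual API.

```python
# Minimal sketch of cached, parallel batch analysis (illustrative names only).
from concurrent.futures import ThreadPoolExecutor

def analyze_batch(tools, analyze_tool, cache=None, max_workers=4):
    """Analyze many MCP tools, reusing cached results for repeated tools."""
    cache = cache if cache is not None else {}
    results, pending = {}, []
    for tool in tools:
        if tool in cache:          # cache hit: no redundant AI call
            results[tool] = cache[tool]
        else:
            pending.append(tool)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        for tool, result in zip(pending, pool.map(analyze_tool, pending)):
            cache[tool] = result
            results[tool] = result
    return results
```

Passing the same `cache` dict across runs is what makes repeated assessments of the same deployment cheap.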
Tool Capabilities Assessment:
{
"tool_name": "read_file",
"categories": ["file_system", "data_access"],
"risk_level": "high",
"confidence": 0.92,
"analysis_metadata": {
"ai_provider": "openai",
"model": "gpt-4",
"analysis_time": "2024-12-28T10:30:00Z"
}
}
Threat Analysis:
{
"threat_level": "high",
"attack_vectors": [
{
"name": "Unauthorized File Access",
"description": "Tool can read sensitive system files",
"likelihood": "high",
"impact": "high",
"attack_steps": [
"Gain access to MCP server",
"Use read_file tool with sensitive paths",
"Extract confidential information"
]
}
],
"abuse_scenarios": [
{
"scenario": "Data Exfiltration",
"description": "Attacker uses file access to steal data",
"prerequisites": ["Server access", "Knowledge of file paths"],
"impact": "Confidentiality breach, regulatory violations"
}
]
}
The AI system uses a sophisticated risk scoring methodology:
- Threat Level: Critical, High, Medium, Low, Info
- Confidence Score: 0.0-1.0 indicating analysis confidence
- Context Factors: Environment, security posture, compliance requirements
- Attack Feasibility: Likelihood and impact of identified attack vectors
Advanced analysis includes multi-tool attack chain detection:
{
"attack_chains": [
{
"chain_id": "data-exfiltration-chain-1",
"tools": ["list_directory", "read_file", "web_request"],
"feasibility_score": 0.85,
"attack_path": [
"Use list_directory to discover sensitive files",
"Use read_file to access confidential data",
"Use web_request to exfiltrate data"
],
"risk_factors": {
"tool_availability": 1.0,
"access_requirements": 0.7,
"technical_complexity": 0.4,
"detection_difficulty": 0.9,
"impact_severity": 0.9
}
}
]
}
The AI system includes several cost optimization strategies:
- Similarity-Based Analysis: Reuse analysis for similar tools
- Intelligent Caching: Cache results with appropriate TTL
- Provider Selection: Choose optimal AI provider based on cost and performance
- Batch Processing: Process multiple tools efficiently
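The intelligent-caching strategy above amounts to a result cache with a time-to-live. A minimal sketch, assuming nothing about HawkEye's internal cache implementation:

```python
# Illustrative TTL cache: entries expire after ttl_seconds and count as misses.
import time

class TTLCache:
    def __init__(self, ttl_seconds=3600.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expiry_timestamp, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expiry, value = entry
        if time.monotonic() > expiry:   # stale: evict and report a miss
            del self._store[key]
            return None
        return value

    def put(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)
```

A longer TTL saves more API cost but risks serving stale threat assessments; tune it to how often your MCP deployments change.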
Real-time monitoring of AI analysis performance:
# Performance metrics are included in analysis results
{
"performance_metrics": {
"analysis_duration": 12.5,
"tokens_used": 1250,
"cost_estimate": 0.025,
"cache_hit_rate": 0.65,
"provider_health": 0.95
}
}
The system continuously monitors AI provider health:
- Response Time Tracking: Monitor API response times
- Success Rate Monitoring: Track successful vs. failed requests
- Error Rate Analysis: Identify patterns in API errors
- Automatic Failover: Switch providers when health degrades
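The automatic failover logic can be pictured as ranking providers by a health score and falling back when the preferred one degrades. A hedged sketch; the provider names and the `health` field are assumptions for illustration, not HawkEye's actual data model:

```python
# Illustrative health-based provider selection (not HawkEye's real API).
def pick_provider(providers, min_health=0.8):
    """Return the healthiest provider at or above min_health, else the best available."""
    ranked = sorted(providers, key=lambda p: p["health"], reverse=True)
    for p in ranked:
        if p["health"] >= min_health:
            return p["name"]
    return ranked[0]["name"]  # degrade gracefully: best of a bad set
```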
AI analysis integrates seamlessly with existing HawkEye commands:
# Enable AI analysis in comprehensive detection
python application.py detect comprehensive --target 192.168.1.100 \
--enable-introspection \
--enable-risk-assessment \
--confidence-threshold 0.7
# Generate reports with AI analysis data
python application.py report generate \
--input analysis_results.json \
--format html \
--template ai-analysis
For advanced users, the AI analysis system can be integrated programmatically:
from hawkeye.detection.ai_threat import AIThreatAnalyzer
from hawkeye.detection.ai_threat.models import EnvironmentContext
# Initialize analyzer
analyzer = AIThreatAnalyzer()
# Create environment context
context = EnvironmentContext(
deployment_type="production",
security_posture="medium",
compliance_frameworks=["GDPR", "SOC2"]
)
# Analyze MCP server
result = analyzer.analyze_threats(mcp_server, context)
- API Key Protection: Store API keys securely; never commit them to version control
- Cost Monitoring: Set up budget alerts and monitoring for AI provider usage
- Result Validation: Always validate AI analysis results with human expertise
- Data Privacy: Consider data sensitivity when using cloud AI providers
- Batch Processing: Analyze multiple servers together for efficiency
- Caching Strategy: Configure appropriate cache TTL for your environment
- Provider Selection: Choose AI providers based on your cost and performance requirements
- Parallel Processing: Use parallel analysis for large-scale assessments
- Confidence Thresholds: Set appropriate confidence thresholds for your use case
- Multi-Provider Validation: Use multiple AI providers for critical analyses
- Historical Comparison: Compare results with previous analyses for consistency
- Expert Review: Have security experts review high-risk findings
Performs comprehensive network scanning to discover MCP services.
python application.py scan [OPTIONS]
Required Options:
- --target <CIDR|IP>: Target specification (IP address or CIDR range)
Optional Parameters:
- --ports <range>: Port range to scan (default: 3000,8000,8080,9000)
- --threads <count>: Number of concurrent threads (default: 50)
- --timeout <seconds>: Connection timeout (default: 5)
- --tcp/--no-tcp: Enable/disable TCP scanning (default: enabled)
- --udp/--no-udp: Enable/disable UDP scanning (default: disabled)
- --output <path>: Output file path
- --format <json|csv|xml>: Output format (default: json)
Examples:
# Scan single IP
python application.py scan --target 192.168.1.100
# Scan CIDR range with custom ports
python application.py scan --target 192.168.1.0/24 --ports 3000-9000
# Scan with UDP enabled and custom threading
python application.py scan --target 10.0.0.0/16 --udp --threads 25
Performs detailed MCP service analysis. This is a command group with multiple subcommands.
Subcommands:
- detect target: Detect MCP servers on a specified target
- detect local: Detect MCP servers on the local system
- detect process: Analyze a specific process for MCP indicators
- detect config: Discover MCP configuration files
python application.py detect target [OPTIONS]
Required Options:
- --target <IP>: Target IP address or hostname
Optional Parameters:
- --ports <ports>: Port range or comma-separated ports (default: 3000,8000,8080,9000)
- --timeout <seconds>: Connection timeout (default: 10)
- --verify-protocol/--no-verify-protocol: Verify MCP protocol handshake (default: enabled)
- --detect-transport/--no-detect-transport: Detect transport layer (default: enabled)
- --output <path>: Output file path
- --format <json|csv|xml>: Output format (default: json)
Examples:
# Basic target detection
python application.py detect target --target 192.168.1.100
# Detection with custom ports
python application.py detect target --target example.com --ports 3000-3010
python application.py detect local [OPTIONS]
Optional Parameters:
- --interface <interface>: Network interface to scan (default: auto-detect)
- --include-processes/--no-include-processes: Include process enumeration (default: enabled)
- --include-configs/--no-include-configs: Include config discovery (default: enabled)
- --include-docker/--no-include-docker: Include Docker inspection (default: enabled)
- --include-env/--no-include-env: Include environment analysis (default: enabled)
- --output <path>: Output file path
- --format <json|csv|xml>: Output format (default: json)
Examples:
# Full local detection
python application.py detect local
# Local detection without environment analysis
python application.py detect local --no-include-env
python application.py detect process [OPTIONS]
Required Options:
- --pid <PID>: Process ID to analyze
Optional Parameters:
- --deep-analysis/--no-deep-analysis: Perform deep process analysis (default: enabled)
- --check-children/--no-check-children: Check child processes (default: enabled)
- --analyze-env/--no-analyze-env: Analyze environment variables (default: enabled)
- --output <path>: Output file path
- --format <json|csv|xml>: Output format (default: json)
Examples:
# Analyze specific process
python application.py detect process --pid 1234
# Basic process analysis without children
python application.py detect process --pid 5678 --no-check-children
python application.py detect config [OPTIONS]
Optional Parameters:
- --path <path>: Path to search (default: current directory)
- --recursive/--no-recursive: Search recursively (default: enabled)
- --include-hidden/--no-include-hidden: Include hidden files (default: disabled)
- --max-depth <depth>: Maximum directory depth (default: 5)
- --output <path>: Output file path
- --format <json|csv|xml>: Output format (default: json)
Examples:
# Discover configs in current directory
python application.py detect config
# Deep config search with hidden files
python application.py detect config --path /opt/mcp --include-hidden --max-depth 10
Performs comprehensive MCP detection using the integrated detection pipeline with Python-based introspection. This command combines traditional detection methods with advanced MCP introspection for complete analysis. Supports CIDR notation for network-wide scanning.
python application.py detect comprehensive [OPTIONS]
Required Options:
- --target <IP|CIDR|hostname>: Target IP address, CIDR range, or hostname (e.g., 192.168.1.100, 192.168.1.0/24, example.com)
Optional Parameters:
- --enable-introspection/--disable-introspection: Enable enhanced MCP introspection (default: enabled)
- --introspection-timeout <seconds>: Timeout for MCP introspection (default: 180)
- --enable-risk-assessment/--disable-risk-assessment: Enable risk assessment (default: enabled)
- --confidence-threshold <float>: Minimum confidence threshold (default: 0.3)
- --output <path>: Output file path for comprehensive results
- --format <json|csv|xml|html>: Output format (default: json)
- --generate-introspection-report/--no-introspection-report: Generate detailed introspection report
- --introspection-report-path <path>: Path for introspection report
Examples:
# Basic comprehensive detection (single target)
python application.py detect comprehensive --target 192.168.1.100
# Network-wide CIDR detection
python application.py detect comprehensive --target 192.168.1.0/24
# Full detection with risk analysis and reporting
python application.py detect comprehensive --target api.example.com \
--enable-risk-assessment \
--generate-introspection-report \
--format html
# Large network scan with custom settings
python application.py detect comprehensive --target 10.0.0.0/16 \
--confidence-threshold 0.8 \
--introspection-timeout 300 \
--output enterprise_scan.json
# CIDR scan with AI threat analysis workflow
python application.py detect comprehensive --target 192.168.1.0/24 --output results.json
python application.py analyze-threats -i results.json -f html -o threat_report.html
Detection Methods Included:
- Process-based Detection: Node.js/NPX process enumeration
- Network Scanning: Port scanning with MCP protocol verification
- Configuration Discovery: MCP server configuration file analysis
- Docker Inspection: Container-based MCP server detection
- Environment Analysis: Environment variable and path analysis
- Transport Detection: stdio, HTTP, WebSocket, SSE transport identification
- Python-based Introspection: Direct MCP server communication and analysis
Output Features:
- Server Information: Name, version, protocol details
- Tool Inventory: Available tools with descriptions and schemas (521+ risk patterns)
- Resource Catalog: Accessible resources and their types
- Risk Analysis: Comprehensive security assessment with CWE mapping
- Attack Vector Analysis: Potential security vulnerabilities and attack paths
- Performance Metrics: Detection and introspection timing information
- Confidence Scoring: Analysis confidence levels for all findings
Performs introspection on multiple MCP servers concurrently for efficiency in large deployments.
python application.py detect introspect-batch [OPTIONS]
Required Options (one of):
- --servers-file <path>: JSON file containing server configurations
- --targets <targets>: Comma-separated list of server addresses
Optional Parameters:
- --max-concurrent <count>: Maximum concurrent introspections (default: 10)
- --timeout <seconds>: Per-server timeout (default: 30)
- --output <path>: Output file path
- --format <json|html|csv>: Output format (default: json)
- --continue-on-error/--stop-on-error: Error handling strategy (default: continue)
- --progress/--no-progress: Show progress indicators (default: enabled)
Server Configuration File Format:
{
"servers": [
{
"server_id": "production-mcp-1",
"target": "192.168.1.100:3000",
"transport": "stdio",
"timeout": 45
},
{
"server_id": "api-gateway",
"target": "api.example.com",
"transport": "http",
"timeout": 30
}
]
}
Examples:
# Batch introspection from configuration file
python application.py detect introspect-batch --servers-file mcp_servers.json --format html
# Quick batch introspection of multiple targets
python application.py detect introspect-batch --targets "192.168.1.100,192.168.1.101,192.168.1.102"
# Large-scale batch with custom concurrency
python application.py detect introspect-batch --servers-file large_deployment.json --max-concurrent 20
Generates formatted reports from scan results.
python application.py report [OPTIONS]
Required Options:
- --input <path>: Input scan results file
Optional Parameters:
- --format <json|csv|xml|html>: Output format (default: html)
- --output <path>: Output file path
- --template <name>: Report template to use
- --risk-threshold <level>: Minimum risk level to include
Examples:
# Generate HTML report
python application.py report --input scan_results.json --format html
# Executive summary
python application.py report --input scan_results.json --template executive
Scans the local system for MCP services.
python application.py scan-local [OPTIONS]
Analyzes a specific process for MCP indicators.
python application.py analyze-process --pid <PID>
Manages HawkEye configuration settings.
python application.py config [show|set|reset]
HawkEye supports configuration files for persistent settings:
# hawkeye.yaml
scanning:
default_ports: [3000, 8000, 8080, 9000]
default_threads: 50
default_timeout: 5
rate_limit: 100
detection:
deep_inspection: false
protocol_verification: true
docker_inspection: true
reporting:
default_format: json
include_metadata: true
risk_threshold: 0.0
logging:
level: INFO
file: hawkeye.log
  audit_trail: true
Configure HawkEye using environment variables:
export HAWKEYE_THREADS=25
export HAWKEYE_TIMEOUT=10
export HAWKEYE_RATE_LIMIT=50
export HAWKEYE_LOG_LEVEL=DEBUG
# Load configuration file
python application.py --config hawkeye.yaml scan --target 192.168.1.0/24
# Override specific settings
python application.py scan --target 192.168.1.0/24 --threads 100 --timeout 3
For small networks, use aggressive scanning for comprehensive coverage:
python application.py scan --target 192.168.1.0/24 \
--ports 1-65535 \
--threads 100 \
  --timeout 3
For large networks, use conservative settings to avoid detection:
python application.py scan --target 10.0.0.0/16 \
--ports 3000,8000,8080,9000 \
--threads 25 \
--rate-limit 25 \
  --timeout 10
For maximum stealth, use minimal footprint settings:
python application.py scan --target 192.168.1.0/24 \
--threads 5 \
--rate-limit 5 \
--timeout 15 \
  --random-delay
For production environments, prioritize stability:
python application.py scan --target production.network.com \
--threads 10 \
--rate-limit 10 \
--timeout 20 \
--retry 3 \
  --exclude-critical-hours
HawkEye uses CVSS-based scoring with contextual adjustments:
- Critical (9.0-10.0): Immediate action required
- High (7.0-8.9): High priority remediation
- Medium (4.0-6.9): Moderate risk, plan remediation
- Low (0.1-3.9): Low risk, monitor
- Info (0.0): Informational findings
- Default credentials
- Weak authentication
- Insecure transport
- Excessive permissions
- Unencrypted communications
- Missing authentication
- Protocol version issues
- Insecure endpoints
- Public accessibility
- Development configurations in production
- Missing security headers
- Inadequate logging
The new Python-based MCP introspection system provides comprehensive analysis of discovered MCP servers. Understanding these results is crucial for effective security assessment.
Server Information Section:
{
"server_info": {
"server_id": "production-mcp-1",
"server_name": "File Management Server",
"server_version": "1.2.3",
"protocol_version": "2024-11-05",
"discovery_timestamp": "2024-12-28T10:30:00Z",
"transport_type": "stdio",
"overall_risk_level": "high"
}
}
Tools Analysis:
{
"tools": [
{
"name": "read_file",
"description": "Read contents of a file",
"risk_level": "high",
"risk_categories": ["file_system", "data_access"],
"security_implications": [
"Potential for unauthorized file access",
"Risk of sensitive data exposure"
],
"parameters": [
{
"name": "path",
"type": "string",
"required": true,
"description": "File path to read"
}
]
}
]
}
Risk Assessment Summary:
{
"risk_summary": {
"overall_risk": "high",
"cvss_score": 7.8,
"risk_factors": {
"file_system_access": true,
"network_access": true,
"code_execution": false,
"data_modification": true
},
"threat_vectors": [
"Unauthorized file system access",
"Data exfiltration via network tools",
"Configuration manipulation"
]
}
}
File System Access (HIGH RISK)
- Tools that can read, write, or modify files
- Risk of unauthorized data access or system modification
- Examples: read_file, write_file, list_directory
Network Access (HIGH RISK)
- Tools that can make external network connections
- Risk of data exfiltration or external system compromise
- Examples: web_search, http_request, api_call
Code Execution (CRITICAL RISK)
- Tools that can execute arbitrary code or commands
- Maximum security risk requiring immediate attention
- Examples: execute_command, run_script, eval_code
Data Access (MEDIUM RISK)
- Tools that can access databases or structured data
- Risk depends on sensitivity of accessible data
- Examples: database_query, csv_read, json_parse
System Modification (HIGH RISK)
- Tools that can modify system configuration
- Risk of system compromise or denial of service
- Examples: modify_config, install_package, restart_service
Authentication (MEDIUM RISK)
- Tools that handle authentication or credentials
- Risk of credential exposure or bypass
- Examples: authenticate_user, store_credentials
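A first-pass categorization like the one above can be approximated with simple keyword matching on tool names. This sketch is purely illustrative: the keyword table below is a toy, whereas HawkEye's real analysis draws on hundreds of risk patterns plus AI assessment of tool descriptions and schemas.

```python
# Toy keyword-based capability categorization (illustrative only).
CATEGORY_KEYWORDS = {
    "code_execution": ("execute", "run_", "eval"),
    "file_system": ("file", "directory"),
    "network_access": ("http", "web_", "api_call"),
    "data_access": ("database", "query", "csv", "json"),
    "system_modification": ("config", "install", "restart"),
    "authentication": ("auth", "credential"),
}

def categorize_tool(name):
    """Return the risk categories suggested by a tool's name."""
    name = name.lower()
    return sorted(cat for cat, kws in CATEGORY_KEYWORDS.items()
                  if any(kw in name for kw in kws))
```

Name-based matching is only a heuristic; a tool named innocuously can still execute code, which is why introspection of the actual schema matters.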
Attack Vector Analysis:
- Direct Access: Tools accessible without authentication
- Privilege Escalation: Tools that can increase access levels
- Lateral Movement: Tools that can access other systems
- Data Exfiltration: Tools that can extract sensitive information
- System Disruption: Tools that can cause service interruption
Capability Mapping:
- Read Operations: What data can be accessed
- Write Operations: What can be modified or created
- Execute Operations: What commands or code can be run
- Network Operations: What external connections are possible
Immediate Actions (Critical/High Risk):
- Disable unnecessary tools with code execution capabilities
- Implement authentication for all tool access
- Restrict file system access to necessary directories only
- Monitor network connections from MCP servers
- Review tool permissions and implement least privilege
Medium-Term Actions (Medium Risk):
- Implement input validation for all tool parameters
- Add audit logging for all tool usage
- Set up monitoring for unusual activity patterns
- Create backup policies for systems with write access
- Document security controls and review regularly
Long-Term Actions (Low Risk/Informational):
- Regular security assessments using HawkEye introspection
- Security awareness training for MCP administrators
- Incident response procedures for MCP-related security events
- Policy development for MCP deployment and usage
The introspection system provides performance insights:
{
"performance_metrics": {
"introspection_duration": 12.5,
"tools_discovered": 15,
"resources_discovered": 8,
"transport_efficiency": 0.95,
"cache_hit_rate": 0.65,
"connection_success_rate": 1.0
}
}
Key Performance Indicators:
- Introspection Duration: Time taken for complete analysis
- Discovery Success Rate: Percentage of successful component discoveries
- Transport Efficiency: Connection utilization effectiveness
- Cache Hit Rate: Efficiency of result caching
- Error Rate: Frequency of connection or protocol errors
CVSS-like Scoring (0.0-10.0):
- 9.0-10.0 (Critical): Immediate remediation required
- 7.0-8.9 (High): High priority, remediate within 24-48 hours
- 4.0-6.9 (Medium): Moderate priority, remediate within 1-2 weeks
- 0.1-3.9 (Low): Low priority, monitor and plan remediation
- 0.0 (Informational): No immediate security risk
Composite Scoring Factors:
- Tool Risk Level: Based on capability analysis
- Access Control: Authentication and authorization presence
- Network Exposure: Public accessibility and transport security
- Data Sensitivity: Type and classification of accessible data
- System Criticality: Importance of the affected system
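The composite factors above can be combined into a single 0.0-10.0 score and mapped onto the severity bands listed earlier. The weights in this sketch are illustrative assumptions, not HawkEye's calibrated values; only the band boundaries come from the documentation above.

```python
# Illustrative weighted composite score; weights are assumed, bands are from the docs.
WEIGHTS = {
    "tool_risk": 0.35,
    "access_control": 0.20,
    "network_exposure": 0.20,
    "data_sensitivity": 0.15,
    "system_criticality": 0.10,
}

def composite_score(factors):
    """Weighted sum of 0.0-1.0 factor values, scaled to 0.0-10.0."""
    return 10.0 * sum(WEIGHTS[k] * factors.get(k, 0.0) for k in WEIGHTS)

def severity_band(score):
    """Map a 0.0-10.0 score to the severity bands defined above."""
    if score >= 9.0:
        return "Critical"
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    if score > 0.0:
        return "Low"
    return "Informational"
```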
- High-level risk assessment
- Key findings overview
- Remediation priorities
- Compliance status
- Detailed vulnerability descriptions
- Proof-of-concept information
- Technical remediation steps
- Reference materials
- Scan methodology
- Tool configuration
- Raw scan data
- Glossary of terms
Create custom port lists for specific environments:
# Web-focused scanning
python application.py scan --target 192.168.1.0/24 --ports 80,443,8080,8443
# Development environment scanning
python application.py scan --target 192.168.1.0/24 --ports 3000-3010,8000-8010
Automate HawkEye for regular assessments:
#!/bin/bash
# daily_scan.sh
DATE=$(date +%Y%m%d)
TARGETS=("192.168.1.0/24" "10.0.1.0/24" "172.16.1.0/24")
for target in "${TARGETS[@]}"; do
python application.py scan \
--target "$target" \
--output "scan_${target//\//_}_${DATE}.json" \
--format json
done
# Generate consolidated report
python application.py report \
--input "scan_*_${DATE}.json" \
--format html \
  --output "daily_report_${DATE}.html"
Integrate HawkEye into continuous integration pipelines:
# .github/workflows/security-scan.yml
name: Security Scan
on:
schedule:
- cron: '0 2 * * *' # Daily at 2 AM
jobs:
security-scan:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Setup Python
uses: actions/setup-python@v2
with:
python-version: '3.8'
- name: Install HawkEye
run: pip install -r requirements.txt
- name: Run Security Scan
run: |
python application.py scan \
--target ${{ secrets.SCAN_TARGET }} \
--output scan_results.json
- name: Generate Report
run: |
python application.py report \
--input scan_results.json \
--format html \
--output security_report.html
- name: Upload Results
uses: actions/upload-artifact@v2
with:
name: security-scan-results
path: |
scan_results.json
          security_report.html
- Obtain Authorization: Ensure written permission for all scanning activities
- Define Scope: Clearly identify target networks and exclusions
- Plan Timing: Schedule scans during maintenance windows when possible
- Prepare Documentation: Have incident response procedures ready
- Monitor Impact: Watch for network performance degradation
- Respect Rate Limits: Use conservative settings for production systems
- Maintain Logs: Keep detailed records of all activities
- Be Responsive: Be prepared to stop scanning if issues arise
- Secure Results: Protect scan data with appropriate access controls
- Validate Findings: Verify vulnerabilities before reporting
- Prioritize Remediation: Focus on highest-risk issues first
- Track Progress: Monitor remediation efforts over time
- Authorization: Never scan without explicit permission
- Scope Compliance: Stay within authorized boundaries
- Data Protection: Handle scan results as sensitive information
- Responsible Disclosure: Follow proper vulnerability disclosure procedures
Symptoms: Cannot bind to ports or access network interfaces

Solution:
# Run with appropriate permissions
sudo python application.py scan --target 192.168.1.0/24
# Or use unprivileged ports only
python application.py scan --target 192.168.1.0/24 --source-port 32768-65535

Symptoms: Many timeouts during scanning

Solutions:
# Increase timeout values
python application.py scan --target 192.168.1.0/24 --timeout 15
# Reduce thread count
python application.py scan --target 192.168.1.0/24 --threads 10
# Add retry attempts
python application.py scan --target 192.168.1.0/24 --retry 3

Symptoms: Scanning stops due to rate limiting

Solutions:
# Reduce rate limit
python application.py scan --target 192.168.1.0/24 --rate-limit 25
# Increase delay between requests
python application.py scan --target 192.168.1.0/24 --delay 100

Symptoms: High memory usage or out-of-memory errors

Solutions:
# Reduce thread count
python application.py scan --target 192.168.1.0/24 --threads 25
# Scan smaller ranges
python application.py scan --target 192.168.1.0/25
python application.py scan --target 192.168.1.128/25

Enable debug mode for detailed troubleshooting:
python application.py --debug scan --target 192.168.1.100

Review logs for detailed error information:
# View recent logs
tail -f hawkeye.log
# Search for specific errors
grep "ERROR" hawkeye.log
# Analyze scan statistics
grep "STATS" hawkeye.logScenario: Assess a small office network for MCP services
# Initial discovery scan
python application.py scan --target 192.168.1.0/24 --output office_scan.json
# Generate executive report
python application.py report --input office_scan.json --format html --template executive
# Detailed analysis of discovered services
python application.py detect --target 192.168.1.100 --deep --output detailed_analysis.json

Scenario: Scan multiple enterprise network segments
# Create scan script
cat > enterprise_scan.sh << 'EOF'
#!/bin/bash
SEGMENTS=("10.1.0.0/16" "10.2.0.0/16" "10.3.0.0/16")
DATE=$(date +%Y%m%d_%H%M%S)
for segment in "${SEGMENTS[@]}"; do
echo "Scanning $segment..."
python application.py scan \
--target "$segment" \
--threads 50 \
--rate-limit 100 \
--timeout 10 \
--output "scan_${segment//\//_}_${DATE}.json" \
--format json
done
# Consolidate results
python application.py report \
--input "scan_*_${DATE}.json" \
--format html \
--template comprehensive \
--output "enterprise_report_${DATE}.html"
EOF
chmod +x enterprise_scan.sh
./enterprise_scan.sh

Scenario: Regular compliance scanning for audit purposes
# Create compliance configuration
cat > compliance.yaml << 'EOF'
scanning:
  default_ports: [3000, 8000, 8080, 9000]
  default_threads: 25
  default_timeout: 10
  rate_limit: 50
detection:
  deep_inspection: true
  protocol_verification: true
  compliance_checks: true
reporting:
  default_format: html
  template: compliance
  include_metadata: true
  risk_threshold: 4.0
logging:
  level: INFO
  audit_trail: true
  compliance_logging: true
EOF
# Run compliance scan
python application.py --config compliance.yaml scan \
--target 192.168.0.0/16 \
--output compliance_scan_$(date +%Y%m%d).json
# Generate compliance report
python application.py report \
--input compliance_scan_$(date +%Y%m%d).json \
--format html \
--template compliance \
--output compliance_report_$(date +%Y%m%d).html

Scenario: Set up continuous monitoring for MCP services
# Create monitoring script
cat > monitor.sh << 'EOF'
#!/bin/bash
TARGETS_FILE="targets.txt"
BASELINE_DIR="baselines"
ALERTS_DIR="alerts"
DATE=$(date +%Y%m%d_%H%M%S)
mkdir -p "$BASELINE_DIR" "$ALERTS_DIR"
while IFS= read -r target; do
echo "Monitoring $target..."
# Current scan
python application.py scan \
--target "$target" \
--output "current_${target//\//_}.json" \
--format json
# Compare with baseline
if [ -f "$BASELINE_DIR/baseline_${target//\//_}.json" ]; then
python application.py compare \
--baseline "$BASELINE_DIR/baseline_${target//\//_}.json" \
--current "current_${target//\//_}.json" \
--output "$ALERTS_DIR/changes_${target//\//_}_${DATE}.json"
else
# Create baseline
cp "current_${target//\//_}.json" "$BASELINE_DIR/baseline_${target//\//_}.json"
fi
# Clean up
rm "current_${target//\//_}.json"
done < "$TARGETS_FILE"
EOF
# Create targets file
echo "192.168.1.0/24" > targets.txt
echo "10.0.1.0/24" >> targets.txt
# Set up cron job for hourly monitoring
echo "0 * * * * /path/to/monitor.sh" | crontab -Scenario: Comprehensive security analysis of discovered MCP servers using the new Python-based introspection system
# Basic MCP server introspection
python application.py detect introspect \
--target 192.168.1.100:3000 \
--server-id production-mcp-1 \
--risk-analysis \
--format html \
--output mcp_introspection_report.html
# Introspection with specific transport
python application.py detect introspect \
--target localhost:3000 \
--transport stdio \
--server-id local-dev-server \
--timeout 60 \
--format json
# Detailed introspection with caching disabled for fresh analysis
python application.py detect introspect \
--target api.example.com \
--server-id api-gateway \
--no-caching \
--concurrent-limit 1 \
--format markdown \
--output detailed_analysis.md

Scenario: Analyze multiple MCP servers across the enterprise infrastructure
# Create server configuration file
cat > mcp_servers.json << 'EOF'
{
"servers": [
{
"server_id": "production-fileserver",
"target": "192.168.1.100:3000",
"transport": "stdio",
"timeout": 45
},
{
"server_id": "api-gateway",
"target": "api.internal.company.com",
"transport": "http",
"timeout": 30
},
{
"server_id": "development-server",
"target": "dev.company.com:8080",
"transport": "sse",
"timeout": 60
},
{
"server_id": "data-processor",
"target": "192.168.2.50:9000",
"transport": "stdio",
"timeout": 40
}
]
}
EOF
# Run batch introspection
python application.py detect introspect-batch \
--servers-file mcp_servers.json \
--max-concurrent 5 \
--format html \
--output batch_introspection_report.html \
--progress
# Alternative: Quick batch introspection from command line
python application.py detect introspect-batch \
--targets "192.168.1.100,192.168.1.101,192.168.1.102" \
--max-concurrent 3 \
--format csv \
--output quick_batch_results.csv

Scenario: Security assessment focusing on high-risk tools and capabilities
# High-security introspection with detailed risk analysis
python application.py detect introspect \
--target secure.company.com \
--server-id security-critical-server \
--risk-analysis \
--timeout 90 \
--max-retries 5 \
--format json \
--output security_assessment.json
# Parse and analyze security results
cat security_assessment.json | jq '.risk_summary.threat_vectors'
cat security_assessment.json | jq '.tools[] | select(.risk_level == "critical" or .risk_level == "high")'
# Generate security-focused report
python application.py report \
--input security_assessment.json \
--format html \
--template security \
--risk-threshold 7.0 \
--output security_report.html

Scenario: Monitor introspection performance and optimize for large deployments
# Performance-optimized introspection
python application.py detect introspect \
--target large-deployment.company.com \
--server-id large-scale-server \
--enable-caching \
--cache-ttl 600 \
--concurrent-limit 15 \
--format json \
--output performance_test.json
# Extract performance metrics
cat performance_test.json | jq '.performance_metrics'
# Batch introspection with performance monitoring
python application.py detect introspect-batch \
--servers-file large_servers.json \
--max-concurrent 20 \
--format json \
--output performance_batch.json \
--progress
# Analyze batch performance
cat performance_batch.json | jq '.performance_summary'

Scenario: Create an automated pipeline for regular MCP security assessment
# Create automated introspection script
cat > automated_introspection.sh << 'EOF'
#!/bin/bash
# Configuration
DATE=$(date +%Y%m%d_%H%M%S)
SERVERS_CONFIG="mcp_servers.json"
OUTPUT_DIR="introspection_results"
REPORT_DIR="reports"
ARCHIVE_DIR="archive"
# Create directories
mkdir -p "$OUTPUT_DIR" "$REPORT_DIR" "$ARCHIVE_DIR"
echo "🦅 HawkEye Automated MCP Introspection - $DATE"
# Step 1: Discovery scan to find new MCP servers
echo "Step 1: Network discovery..."
python application.py scan \
--target 192.168.0.0/16 \
--ports 3000,8000,8080,9000 \
--output "$OUTPUT_DIR/discovery_$DATE.json"
# Step 2: Extract MCP servers from discovery results
echo "Step 2: Processing discovery results..."
# (This would typically involve parsing discovery results to update servers config)
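# A hedged sketch of that parsing step: rebuild $SERVERS_CONFIG from the
# discovery output. The discovery field names used here (.results[].host and
# .results[].port) are assumptions about the scan JSON schema; the generated
# entries follow the mcp_servers.json format shown earlier. Uncomment to use.
# jq '{servers: [.results[]? | {
#       server_id: (.host + ":" + (.port|tostring)),
#       target: (.host + ":" + (.port|tostring)),
#       transport: "stdio",
#       timeout: 30
#     }]}' "$OUTPUT_DIR/discovery_$DATE.json" > "$SERVERS_CONFIG"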
# Step 3: Comprehensive introspection
echo "Step 3: MCP server introspection..."
python application.py detect introspect-batch \
--servers-file "$SERVERS_CONFIG" \
--max-concurrent 10 \
--format json \
--output "$OUTPUT_DIR/introspection_$DATE.json" \
--progress
# Step 4: Generate reports
echo "Step 4: Generating reports..."
# Executive summary
python application.py report \
--input "$OUTPUT_DIR/introspection_$DATE.json" \
--format html \
--template executive \
--output "$REPORT_DIR/executive_summary_$DATE.html"
# Technical report
python application.py report \
--input "$OUTPUT_DIR/introspection_$DATE.json" \
--format html \
--template technical \
--output "$REPORT_DIR/technical_report_$DATE.html"
# Security-focused report
python application.py report \
--input "$OUTPUT_DIR/introspection_$DATE.json" \
--format html \
--template security \
--risk-threshold 4.0 \
--output "$REPORT_DIR/security_assessment_$DATE.html"
# CSV export for analysis
python application.py report \
--input "$OUTPUT_DIR/introspection_$DATE.json" \
--format csv \
--output "$REPORT_DIR/data_export_$DATE.csv"
# Step 5: Security analysis
echo "Step 5: Security analysis..."
# Extract high-risk findings
jq '.servers[] | select(.overall_risk_level == "critical" or .overall_risk_level == "high")' \
"$OUTPUT_DIR/introspection_$DATE.json" > "$OUTPUT_DIR/high_risk_servers_$DATE.json"
# Count findings by risk level
echo "Risk Level Summary:"
jq -r '.servers[].overall_risk_level' "$OUTPUT_DIR/introspection_$DATE.json" | sort | uniq -c
# Step 6: Archive old results
echo "Step 6: Archiving old results..."
find "$OUTPUT_DIR" -name "*.json" -mtime +30 -exec mv {} "$ARCHIVE_DIR/" \;
find "$REPORT_DIR" -name "*.html" -mtime +30 -exec mv {} "$ARCHIVE_DIR/" \;
echo "✅ Automated introspection completed successfully!"
echo "📊 Reports available in: $REPORT_DIR"
echo "📁 Raw data available in: $OUTPUT_DIR"
# Send notification (optional)
if command -v mail &> /dev/null; then
echo "MCP introspection completed for $DATE. Reports available at $REPORT_DIR" | \
mail -s "HawkEye MCP Introspection Report - $DATE" [email protected]
fi
EOF
# Make script executable
chmod +x automated_introspection.sh
# Set up daily automated introspection
echo "0 6 * * * /path/to/automated_introspection.sh" | crontab -
# Run manually
./automated_introspection.sh

Scenario: Integrate MCP introspection results with other security tools and workflows
# Export to SIEM format
python application.py detect introspect \
--target critical.company.com \
--server-id critical-server \
--format json \
--output introspection_results.json
# Convert to SIEM-compatible format
jq '.tools[] | {
timestamp: .discovery_timestamp,
source: "hawkeye-mcp",
severity: .risk_level,
category: .risk_categories[0],
message: ("MCP tool " + .name + " has " + .risk_level + " risk"),
tool_name: .name,
description: .description,
server_id: "critical-server"
}' introspection_results.json > siem_events.json
# Send to vulnerability management system
# (Example integration - adapt to your VM system API)
curl -X POST https://vm.company.com/api/vulnerabilities \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $VM_API_TOKEN" \
-d @introspection_results.json
# Generate tickets for high-risk findings
jq -r '.tools[] | select(.risk_level == "critical" or .risk_level == "high") |
"Title: Critical MCP Tool: " + .name + "\n" +
"Description: " + .description + "\n" +
"Risk Level: " + .risk_level + "\n" +
"Categories: " + (.risk_categories | join(", ")) + "\n" +
"Server: critical-server\n\n"' introspection_results.json > security_tickets.txt
# Integration with Slack notifications
SLACK_WEBHOOK_URL="https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
CRITICAL_COUNT=$(jq '[.tools[] | select(.risk_level == "critical")] | length' introspection_results.json)
HIGH_COUNT=$(jq '[.tools[] | select(.risk_level == "high")] | length' introspection_results.json)
if [ "$CRITICAL_COUNT" -gt 0 ] || [ "$HIGH_COUNT" -gt 0 ]; then
curl -X POST "$SLACK_WEBHOOK_URL" \
-H 'Content-type: application/json' \
--data "{
\"text\": \"🚨 HawkEye MCP Security Alert\",
\"attachments\": [{
\"color\": \"danger\",
\"fields\": [
{\"title\": \"Critical Risk Tools\", \"value\": \"$CRITICAL_COUNT\", \"short\": true},
{\"title\": \"High Risk Tools\", \"value\": \"$HIGH_COUNT\", \"short\": true},
{\"title\": \"Server\", \"value\": \"critical-server\", \"short\": true}
]
}]
}"
fi

HawkEye provides comprehensive MCP security assessment capabilities for organizations of all sizes. By following the guidelines in this manual, security professionals can effectively identify and assess MCP-related security risks while maintaining operational responsibility and compliance.
For additional support, consult the troubleshooting guide or contact the HawkEye support team.
Document Version: 1.0
Last Updated: Current Version
Next Review: Quarterly
The analyze-threats command provides a production-ready CLI interface for processing JSON detection results through the AI-powered threat analysis system. This replaces the demo-only approach with a complete workflow integration.
python application.py analyze-threats [OPTIONS]

Required parameter:

| Parameter | Description | Example |
|---|---|---|
| `--input, -i PATH` | Input JSON file containing detection results | `--input detection_results.json` |

Optional parameters:

| Parameter | Description | Default | Example |
|---|---|---|---|
| `--output, -o PATH` | Output file path for threat analysis results | None | `--output threat_analysis.json` |
| `--format, -f FORMAT` | Output format (json, html, csv, xml) | json | `--format html` |
| `--analysis-type TYPE` | Analysis depth (quick, comprehensive, detailed) | comprehensive | `--analysis-type detailed` |
| `--confidence-threshold FLOAT` | Minimum confidence threshold for analysis | 0.5 | `--confidence-threshold 0.8` |
| `--enable-ai/--disable-ai` | Enable AI-powered analysis | Enabled | `--disable-ai` |
| `--parallel-processing/--sequential-processing` | Enable parallel processing | Enabled | `--sequential-processing` |
| `--max-workers INTEGER` | Maximum number of parallel workers | 3 | `--max-workers 5` |
| `--cost-limit FLOAT` | Maximum cost limit for AI analysis (USD) | No limit | `--cost-limit 10.0` |
Basic Workflow:
# Step 1: Run detection and save to JSON
python application.py detect target --target 192.168.1.100 --output detection_results.json
# Step 2: Analyze threats from detection results
python application.py analyze-threats --input detection_results.json --output threat_analysis.json

Comprehensive Analysis:
# Step 1: Comprehensive detection with introspection
python application.py detect comprehensive --target api.example.com --output comprehensive_results.json
# Step 2: Detailed AI threat analysis with custom settings
python application.py analyze-threats \
--input comprehensive_results.json \
--output detailed_threats.json \
--analysis-type detailed \
--parallel-processing \
--max-workers 5 \
--cost-limit 10.0

HTML Report Generation:
# Generate HTML threat analysis report
python application.py analyze-threats \
--input detection_results.json \
--format html \
--output security_report.html \
--analysis-type comprehensive

Batch Processing Multiple Environments:
# Local environment analysis
python application.py detect local --output local_detection.json
python application.py analyze-threats \
--input local_detection.json \
--output local_threats.json \
--confidence-threshold 0.6
# Production environment analysis with cost controls
python application.py detect target --target prod.company.com --output prod_detection.json
python application.py analyze-threats \
--input prod_detection.json \
--output prod_threats.json \
--cost-limit 5.0 \
--analysis-type comprehensive

JSON Format (Default): The JSON output includes comprehensive threat analysis data:
{
"metadata": {
"title": "HawkEye AI Security Threat Analysis",
"source_file": "detection_results.json",
"analysis_type": "comprehensive",
"generated_at": "2024-12-28T12:00:00Z",
"total_servers_analyzed": 2,
"successful_analyses": 2,
"failed_analyses": 0,
"ai_enabled": true,
"parallel_processing": true
},
"threat_analyses": {
"filesystem-mcp-server": {
"tool_capabilities": {
"tool_name": "Filesystem MCP Server",
"capability_categories": ["file_system", "data_access"],
"risk_score": 7.8,
"confidence": 0.95
},
"threat_level": "high",
"attack_vectors": [
{
"name": "Unauthorized File Access",
"severity": "high",
"description": "Server allows unrestricted file system access",
"impact": "Complete file system compromise",
"likelihood": 0.85,
"prerequisites": ["Server access", "Tool permissions"],
"attack_steps": [
"Gain access to MCP server",
"Use read_file tool with sensitive paths",
"Extract confidential information"
]
}
],
"mitigation_strategies": [
{
"name": "Implement File Access Controls",
"description": "Restrict file access to specific directories",
"implementation_steps": [
"Configure directory whitelist",
"Implement path validation",
"Add audit logging"
],
"effectiveness_score": 0.9,
"cost_estimate": "medium"
}
],
"confidence_score": 0.95,
"analysis_metadata": {
"provider": "anthropic",
"model": "claude-3-sonnet-20240229",
"cost": 0.0156,
"analysis_time": 12.5,
"timestamp": "2024-12-28T12:05:30Z"
}
}
},
"errors": {},
"statistics": {
"analyses_performed": 2,
"cache_hits": 0,
"total_cost": 0.0312
}
}

HTML Format: Generates a comprehensive HTML report with:
- Executive summary with risk overview
- Detailed threat analysis for each MCP server
- Attack vector visualization
- Mitigation strategy recommendations
- Interactive risk charts and graphs
CSV Format: Produces a tabular format suitable for spreadsheet analysis:
tool_name,threat_level,attack_vectors_count,mitigations_count,confidence_score,analysis_cost
Filesystem MCP Server,high,3,2,0.95,0.0156
Web Search MCP Server,medium,2,3,0.88,0.0156

XML Format: Structured XML output for integration with other security tools and systems.
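Of these formats, the JSON output is the easiest to post-process on the command line. A small hedged sketch, using only field names shown in the sample JSON output above, that tallies analyzed servers by threat level:

```shell
# Tally servers by threat_level; the .threat_analyses[].threat_level paths
# follow the sample JSON above -- adjust if your version emits a different schema
jq -r '.threat_analyses | to_entries[] | .value.threat_level' threat_analysis.json \
  | sort | uniq -c | sort -rn
```

The same `to_entries` pattern extends to any per-server field, such as `confidence_score` or the attack-vector count.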
Common Error Scenarios:

- Invalid JSON Input File:

  Error: Invalid JSON format in input file: Expecting ',' delimiter: line 5 column 10

  Solution: Validate the JSON file format using `jq` or a JSON validator.

- No MCP Servers Found:

  No MCP servers found above confidence threshold 0.5

  Solution: Lower the confidence threshold or verify that the detection results contain valid MCP server data.

- AI API Configuration Issues:

  Warning: No AI API keys configured! Falling back to rule-based analysis...

  Solution: Configure API keys in the `.env` file:

  AI_PROVIDER=anthropic
  AI_ANTHROPIC_API_KEY=your_key_here
  AI_OPENAI_API_KEY=your_key_here

- Cost Limit Exceeded:

  Analysis stopped: Cost limit of $5.00 exceeded

  Solution: Increase the cost limit or process fewer servers at once.
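For the invalid-JSON case, jq can serve as a quick pre-flight validator before invoking the analyzer, since it exits non-zero on any parse error:

```shell
# Gate the analysis on the input file being valid JSON
if jq empty detection_results.json 2>/dev/null; then
  echo "detection_results.json is valid JSON"
else
  echo "detection_results.json is malformed -- fix it before running analyze-threats" >&2
fi
```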
Parallel Processing:

- Use `--parallel-processing` for multiple servers (default: enabled)
- Adjust `--max-workers` based on system resources and API rate limits
- Monitor API rate limits to avoid throttling

Cost Optimization:

- Set `--cost-limit` to control AI usage costs
- Use `--confidence-threshold` to filter low-confidence detections
- Choose `--analysis-type quick` for basic analysis to reduce costs

Memory and Performance:

- For large-scale analysis, process servers in batches
- Use `--sequential-processing` if experiencing memory issues
- Monitor system resources during analysis
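The batch-processing advice can be made concrete with a small loop. This is a hedged sketch (file names illustrative) that pauses after every five analyses so memory is released and API quotas get headroom:

```shell
# Analyze detection result files in batches of 5 with a cool-down between batches
BATCH_SIZE=5
count=0
for f in detection_*.json; do
  [ -e "$f" ] || continue   # skip when the glob matched nothing
  python application.py analyze-threats \
    --input "$f" --output "threats_${f}" --cost-limit 2.0
  count=$((count + 1))
  if [ $((count % BATCH_SIZE)) -eq 0 ]; then
    sleep 30                # let memory settle and rate limits recover
  fi
done
```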
CI/CD Pipeline Integration:
#!/bin/bash
# ci-security-check.sh
# Run detection
python application.py detect target --target $CI_TARGET --output detection.json
# Analyze threats with cost controls
python application.py analyze-threats \
--input detection.json \
--output threats.json \
--cost-limit 2.0 \
--confidence-threshold 0.7
# Check for high-risk findings
HIGH_RISK=$(jq '[.threat_analyses[] | select(.threat_level == "high" or .threat_level == "critical")] | length' threats.json)
if [ "$HIGH_RISK" -gt 0 ]; then
echo "❌ Security check failed: $HIGH_RISK high-risk threats detected"
exit 1
else
echo "✅ Security check passed"
exit 0
fi

Automated Security Monitoring:
#!/bin/bash
# daily-security-monitor.sh
DATE=$(date +%Y%m%d)
TARGETS=("prod.company.com" "api.company.com" "staging.company.com")
for target in "${TARGETS[@]}"; do
# Detection
python application.py detect target --target "$target" --output "detection_${target}_${DATE}.json"
# Threat analysis
python application.py analyze-threats \
--input "detection_${target}_${DATE}.json" \
--output "threats_${target}_${DATE}.json" \
--format json \
--cost-limit 5.0
# Generate HTML report
python application.py analyze-threats \
--input "detection_${target}_${DATE}.json" \
--output "report_${target}_${DATE}.html" \
--format html
done
# Send notification if high-risk threats found
python send_security_alerts.py --date "$DATE"

Security Dashboard Integration:
# Export threat data to security dashboard
python application.py analyze-threats \
--input detection_results.json \
--format json \
--output dashboard_data.json
# Transform for dashboard API
jq '.threat_analyses | to_entries | map({
server_name: .key,
threat_level: .value.threat_level,
risk_score: .value.tool_capabilities.risk_score,
attack_vectors: (.value.attack_vectors | length),
timestamp: .value.analysis_metadata.timestamp
})' dashboard_data.json > dashboard_feed.json
# Send to dashboard API
curl -X POST https://dashboard.company.com/api/security-threats \
-H "Content-Type: application/json" \
--data @dashboard_feed.json

Security Considerations:
- Protect API Keys: Store API keys securely, never commit to repositories
- Validate Results: Always review AI analysis results with human expertise
- Monitor Costs: Set up alerts for AI usage costs across all environments
- Audit Trails: Maintain logs of all threat analysis activities
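A lightweight way to follow the key-protection advice above is to keep keys in an untracked `.env` file (the variable names match the `.env` example in the error-handling section) and export them only into the current shell:

```shell
# Keep the key file out of version control
grep -qx '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore

# Export AI_* variables from .env into this shell session only
if [ -f .env ]; then
  set -a        # auto-export every variable assigned while sourcing
  . ./.env
  set +a
fi
```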
Operational Excellence:
- Regular Analysis: Schedule daily or weekly automated threat analysis
- Baseline Comparisons: Compare current results with historical baselines
- Escalation Procedures: Define clear escalation paths for high-risk findings
- Documentation: Document all security findings and remediation actions
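The baseline-comparison practice can be approximated with jq: diff two analysis runs and flag servers whose threat level changed. A hedged sketch, reusing the field names from the documented JSON output (file names illustrative):

```shell
# Report servers whose threat_level differs between the baseline and current runs
if [ -f baseline_threats.json ] && [ -f current_threats.json ]; then
  jq -rn --slurpfile base baseline_threats.json --slurpfile cur current_threats.json '
    ($base[0].threat_analyses // {}) as $b
    | ($cur[0].threat_analyses // {}) as $c
    | $c | to_entries[]
    | select($b[.key] and $b[.key].threat_level != .value.threat_level)
    | "\(.key): \($b[.key].threat_level) -> \(.value.threat_level)"'
fi
```

Empty output means no server changed level since the baseline; any printed line is a candidate for the escalation path defined above.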
Performance Guidelines:
- Batch Processing: Analyze multiple servers together for efficiency
- Resource Management: Monitor system resources during large-scale analysis
- API Rate Limits: Respect AI provider rate limits and quotas
- Caching Strategy: Configure appropriate cache settings for your environment
Multi-Environment Security Assessment:
# Development environment
python application.py detect local --output dev_detection.json
python application.py analyze-threats \
--input dev_detection.json \
--output dev_threats.json \
--analysis-type quick \
--cost-limit 1.0
# Staging environment
python application.py detect target --target staging.company.com --output staging_detection.json
python application.py analyze-threats \
--input staging_detection.json \
--output staging_threats.json \
--analysis-type comprehensive \
--cost-limit 3.0
# Production environment (detailed analysis)
python application.py detect comprehensive --target prod.company.com --output prod_detection.json
python application.py analyze-threats \
--input prod_detection.json \
--output prod_threats.json \
--analysis-type detailed \
--cost-limit 10.0 \
--parallel-processing \
--max-workers 5

Compliance and Audit Support:
# Generate compliance-ready reports
python application.py analyze-threats \
--input audit_detection.json \
--output compliance_analysis.xml \
--format xml \
--analysis-type detailed \
--confidence-threshold 0.9
# Create executive summary for stakeholders
python application.py analyze-threats \
--input audit_detection.json \
--output executive_summary.html \
--format html \
--analysis-type comprehensive

This comprehensive CLI integration provides a production-ready workflow that replaces the previous demo-only approach with full enterprise capabilities for AI-powered MCP security analysis.