Security and Performance Considerations - FeitianTech/postquantum-webauthn-platform GitHub Wiki
- Introduction
- Security Assumptions and Quantum Resistance
- Parameter Selection and Security Levels
- Side-Channel Resistance and Mitigations
- Performance Characteristics
- Network Bandwidth and Storage Impact
- Hybrid Cryptography Approaches
- Monitoring and Operational Considerations
- Future Migration Planning
- Implementation Best Practices
This document provides a comprehensive analysis of the security and performance considerations for the post-quantum cryptography (PQC) implementation using ML-DSA (Module-Lattice-Based Digital Signature Algorithm) in the WebAuthn platform. The implementation leverages the liboqs (Open Quantum Safe) library to provide quantum-resistant digital signatures while maintaining compatibility with existing WebAuthn infrastructure.
The platform supports three ML-DSA variants: ML-DSA-44, ML-DSA-65, and ML-DSA-87, each offering different security levels and performance characteristics. This document examines the security assumptions, performance implications, operational considerations, and migration strategies for deploying quantum-resistant authentication in production environments.
ML-DSA is specifically designed to resist attacks from both Shor's algorithm and Grover's algorithm, which represent the primary threats to classical cryptographic systems in a post-quantum world.
Shor's algorithm poses a threat to traditional public-key cryptosystems based on integer factorization (RSA) and elliptic curve discrete logarithm problems (ECC). ML-DSA achieves resistance through:
- Module lattice-based construction: The underlying mathematical problem is based on learning-with-errors (LWE) over module lattices, which has no known polynomial-time quantum algorithm
- Module lattice structure: ML-DSA works over module lattices, a middle ground between unstructured LWE and fully structured Ring-LWE that keeps most of the efficiency of structured lattices while limiting the algebraic structure available to an attacker
- Multiple security levels: The three parameter sets (44, 65, 87) offer increasing security margins against quantum attacks
Grover's algorithm provides quadratic speedup for unstructured search problems, effectively halving the classical security level of symmetric cryptographic primitives:
- Symmetric security preservation: ML-DSA maintains its claimed security levels despite Grover's algorithm by sizing its hash outputs and seeds so that a quadratic search speedup still leaves an adequate margin
- Parameter-specific security: Each variant provides a different security margin:
- ML-DSA-44: NIST security category 2 (~128-bit classical equivalent)
- ML-DSA-65: NIST security category 3 (~192-bit classical equivalent)
- ML-DSA-87: NIST security category 5 (~256-bit classical equivalent)
The security of ML-DSA relies on the hardness of the module learning with errors (MLWE) problem, which is believed to be resistant to both classical and quantum attacks. The algorithm combines:
- Post-quantum hard problems: Module-LWE over polynomial rings
- Classical cryptographic primitives: SHAKE-128 and SHAKE-256 (FIPS 202) for hashing, sampling, and expansion
- Efficient implementation: Optimized arithmetic operations over polynomial rings
Section sources
- prebuilt_liboqs/linux-x86_64/include/oqs/sig_ml_dsa.h
- fido2/cose.py
The ML-DSA implementation provides three distinct parameter sets, each targeting different security requirements and performance constraints:
| Parameter Set | Public Key Size | Secret Key Size | Signature Size | Security Level |
|---|---|---|---|---|
| ML-DSA-44 | 1,312 bytes | 2,560 bytes | 2,420 bytes | ~128 bits |
| ML-DSA-65 | 1,952 bytes | 4,032 bytes | 3,309 bytes | ~192 bits |
| ML-DSA-87 | 2,592 bytes | 4,896 bytes | 4,627 bytes | ~256 bits |
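For capacity planning it helps to have these sizes in code. A minimal sketch (the dictionary and helper names below are illustrative, with sizes copied from the FIPS 204 values in the table):

```python
# FIPS 204 sizes in bytes, as listed in the table above.
ML_DSA_SIZES = {
    "ML-DSA-44": {"public_key": 1312, "secret_key": 2560, "signature": 2420},
    "ML-DSA-65": {"public_key": 1952, "secret_key": 4032, "signature": 3309},
    "ML-DSA-87": {"public_key": 2592, "secret_key": 4896, "signature": 4627},
}

def registration_payload_bytes(variant: str) -> int:
    """Rough wire cost of one registration: public key plus one signature."""
    sizes = ML_DSA_SIZES[variant]
    return sizes["public_key"] + sizes["signature"]
```

For ML-DSA-65 this puts the cryptographic material alone at 5,261 bytes, before CBOR framing and attestation metadata are added.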
ML-DSA-44
- Use Cases: Resource-constrained environments, mobile devices, IoT applications
- Advantages: Smallest key/signature sizes, lowest computational overhead
- Limitations: Lower security margin, may not meet long-term security requirements
- Deployment: Suitable for applications with short-term security requirements
ML-DSA-65
- Use Cases: General-purpose applications, enterprise deployments
- Advantages: Balanced security/performance ratio, good compatibility with existing infrastructure
- Limitations: Moderate resource requirements
- Deployment: Recommended for most production environments
ML-DSA-87
- Use Cases: High-security applications, government/military systems
- Advantages: Maximum security margin, future-proof against advances in cryptanalysis
- Limitations: Largest resource requirements, highest computational cost
- Deployment: Reserved for mission-critical applications requiring long-term security
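The deployment guidance above can be condensed into a small selection helper; this is a sketch of the decision logic, not an API the platform exposes:

```python
def select_mldsa_variant(security_bits: int, constrained: bool = False) -> str:
    """Map a target classical-equivalent security level to an ML-DSA variant.

    Mirrors the guidance above: ML-DSA-44 for constrained, short-term
    deployments; ML-DSA-65 as the general-purpose default; ML-DSA-87 for
    long-term, high-security requirements.
    """
    if constrained and security_bits <= 128:
        return "ML-DSA-44"
    if security_bits <= 192:
        return "ML-DSA-65"
    return "ML-DSA-87"
```

Note that an unconstrained caller asking for 128-bit security still receives ML-DSA-65, reflecting the recommendation above to default to the balanced parameter set in production.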
ML-DSA is the standardized form of CRYSTALS-Dilithium, selected in the third round of the NIST Post-Quantum Cryptography Standardization process and published as FIPS 204 in August 2024. The algorithm's security claims are based on:
- Original NIST submission: Each parameter set corresponds to specific NIST security levels
- Cryptanalytic analysis: Extensive peer review and analysis by the cryptographic community
- Implementation security: Liboqs provides constant-time implementations to prevent timing attacks
Section sources
- prebuilt_liboqs/linux-x86_64/include/oqs/sig_ml_dsa.h
- fido2/cose.py
The liboqs backend provides several layers of protection against side-channel attacks:
Liboqs implements ML-DSA with constant-time operations to prevent timing attacks:
- Memory access patterns: All memory accesses follow predictable patterns
- Conditional branches: Elimination of data-dependent branching
- Arithmetic operations: Constant-time modular arithmetic implementations
Modern processors provide hardware acceleration for ML-DSA operations:
- Vector instructions: AVX2/AVX-512 support for parallel polynomial operations
- SHA-3/Keccak acceleration: ML-DSA's hashing and sampling are built on SHAKE, which benefits from dedicated SHA-3 instructions on recent ARM cores
- Cache-aware data layout: Memory layouts chosen to avoid secret-dependent cache access and the resulting cache-timing leakage
The liboqs build configuration includes several security-enhancing features:
flowchart TD
A["Liboqs Build"] --> B["Constant-Time Checks"]
A --> C["Hardware Acceleration"]
A --> D["Memory Protection"]
B --> E["Timing Attack Prevention"]
C --> F["Performance Optimization"]
D --> G["Side-Channel Resistance"]
E --> H["Secure Implementation"]
F --> H
G --> H
Diagram sources
- prebuilt_liboqs/linux-x86_64/include/oqs/oqsconfig.h
Proper memory management prevents leakage of sensitive data:
- Secure zeroing: Sensitive data is securely cleared from memory
- Stack allocation: Stack-based buffers are preferred over heap allocations for sensitive intermediates, simplifying cleanup
- Memory bounds checking: Prevents buffer overflow vulnerabilities
The platform implements runtime security checks:
- Algorithm availability: Verification of ML-DSA support in liboqs
- Fallback mechanisms: Graceful degradation to classical algorithms
- Error handling: Secure error reporting without information leakage
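A hedged sketch of the availability check and fallback described above; the ML-DSA COSE identifiers follow the draft IETF COSE registration (an assumption here), and the module probe stands in for the platform's actual detection logic in server/server/pqc.py:

```python
import importlib.util

# COSE algorithm identifiers: ES256 (-7) is standardized; the ML-DSA values
# are taken from the draft IETF COSE registration and may change.
CLASSICAL_ALGS = [-7]            # ES256 fallback
PQC_ALGS = [-48, -49, -50]       # ML-DSA-44 / -65 / -87 (draft values)

def supported_algorithms() -> list[int]:
    """Advertise PQC algorithms only when the liboqs Python binding loads.

    If the 'oqs' module is absent, the server degrades gracefully to the
    classical algorithm list instead of failing registration outright.
    """
    if importlib.util.find_spec("oqs") is not None:
        return PQC_ALGS + CLASSICAL_ALGS   # prefer PQC, keep fallback last
    return CLASSICAL_ALGS
```

Keeping ES256 at the end of the preference list ensures there is always a mutually supported algorithm, which is the graceful-degradation property the bullets above describe.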
Section sources
- server/server/pqc.py
- prebuilt_liboqs/linux-x86_64/include/oqs/oqsconfig.h
ML-DSA performance characteristics vary significantly across the three parameter sets:
| Metric | ML-DSA-44 | ML-DSA-65 | ML-DSA-87 |
|---|---|---|---|
| Key Generation Time | ~10-20 ms | ~25-50 ms | ~40-80 ms |
| Memory Usage | ~5 MB | ~10 MB | ~15 MB |
| CPU Utilization | Low | Medium | High |
sequenceDiagram
participant Client as "Client Device"
participant LibOQS as "Liboqs Backend"
participant Crypto as "ML-DSA Engine"
Client->>LibOQS : Sign Request
LibOQS->>Crypto : Initialize Context
Crypto->>Crypto : Polynomial Arithmetic
Crypto->>Crypto : Hash Operations
Crypto->>LibOQS : Generate Signature
LibOQS->>Client : Return Signature
Note over Client,Crypto : Total latency : 50-300 ms depending on parameter set
Diagram sources
- prebuilt_liboqs/linux-x86_64/include/oqs/sig.h
Verification operations are generally faster than signing:
- ML-DSA-44: ~5-10 ms verification time
- ML-DSA-65: ~10-20 ms verification time
- ML-DSA-87: ~15-25 ms verification time
| Metric | ECDSA P-256 | ECDSA P-384 | ECDSA P-521 | ML-DSA-44 | ML-DSA-65 | ML-DSA-87 |
|---|---|---|---|---|---|---|
| Key Generation | 1-2 ms | 2-5 ms | 5-10 ms | 10-20 ms | 25-50 ms | 40-80 ms |
| Signing | 0.5-1 ms | 1-2 ms | 2-5 ms | 50-150 ms | 80-200 ms | 120-300 ms |
| Verification | 0.5-1 ms | 1-2 ms | 2-5 ms | 5-10 ms | 10-20 ms | 15-25 ms |
| Key Size | 32 bytes | 48 bytes | 66 bytes | 1,312 bytes | 1,952 bytes | 2,592 bytes |
| Signature Size | 64 bytes | 96 bytes | 132 bytes | 2,420 bytes | 3,309 bytes | 4,627 bytes |
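The size gap is easier to communicate as multipliers. A quick calculation using the numbers from the comparison table:

```python
# Key and signature sizes in bytes, copied from the comparison table above.
ECDSA_P256 = {"key": 32, "signature": 64}
ML_DSA_44 = {"key": 1312, "signature": 2420}

def size_multiplier(pqc: dict, classical: dict) -> dict:
    """How many times larger each ML-DSA artifact is than its ECDSA peer."""
    return {k: round(pqc[k] / classical[k], 1) for k in classical}

# Even the smallest variant carries a ~41x key and ~38x signature penalty
# relative to ECDSA P-256.
```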
- CPU-bound operations: ML-DSA signing is computationally intensive
- Memory bandwidth: Large key/signature sizes require sufficient memory bandwidth
- Latency sensitivity: Real-time applications may be impacted by signing delays
Modern CPUs provide several optimization opportunities:
- Intel/AMD AVX-512: Vectorized polynomial arithmetic operations
- ARM NEON: SIMD instructions for polynomial multiplication
- GPU acceleration: Potential for batch signature operations
Different platforms exhibit varying performance characteristics:
- Server environments: High-performance CPUs with vector instruction support
- Mobile devices: Power-efficient implementations relying on NEON rather than wide vector extensions
- Embedded systems: Resource-constrained environments requiring optimization
Section sources
- tests/test_mldsa_registration_authentication.py
The increased key and signature sizes of ML-DSA have significant implications for network bandwidth:
flowchart LR
A["Client"] --> |Public Key<br/>1.3-2.6 KB| B["Server"]
B --> |Attestation Object<br/>2.4-4.6 KB| A
subgraph "Traditional ECDSA"
C1["Client"] --> |Public Key<br/>32-66 bytes| D["Server"]
D --> |Attestation Object<br/>64-132 bytes| C1
end
style A fill:#e1f5fe
style B fill:#f3e5f5
style C1 fill:#e8f5e8
style D fill:#f8f5e8
Diagram sources
- prebuilt_liboqs/linux-x86_64/include/oqs/sig_ml_dsa.h
| Component | ECDSA | ML-DSA-44 | ML-DSA-65 | ML-DSA-87 |
|---|---|---|---|---|
| Credential Storage | 1,024 bytes avg | 4,096 bytes avg | 6,144 bytes avg | 8,192 bytes avg |
| Metadata Storage | 512 bytes avg | 2,048 bytes avg | 3,072 bytes avg | 4,096 bytes avg |
| Total Per Credential | 1,536 bytes avg | 6,144 bytes avg | 9,216 bytes avg | 12,288 bytes avg |
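The table translates directly into fleet-level capacity estimates; a minimal helper (per-credential averages taken from the table above, helper name illustrative):

```python
def storage_estimate_gb(num_credentials: int, per_credential_bytes: int) -> float:
    """Total credential storage in decimal gigabytes."""
    return num_credentials * per_credential_bytes / 1e9

# Per-credential averages from the table: ECDSA ~1,536 B, ML-DSA-65 ~9,216 B.
ecdsa_gb = storage_estimate_gb(1_000_000, 1536)    # ~1.5 GB
mldsa65_gb = storage_estimate_gb(1_000_000, 9216)  # ~9.2 GB
```

At one million credentials, ML-DSA-65 needs roughly 9.2 GB of credential storage versus about 1.5 GB for ECDSA, a factor of six that feeds directly into database sizing and backup windows.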
- WebAuthn API: Requires support for larger credential objects
- Transport protocols: HTTP/HTTPS payloads increase significantly
- CDN considerations: Larger assets require optimized caching strategies
- Mobile networks: Higher bandwidth consumption on cellular connections
Several compression techniques can mitigate bandwidth impact:
CBOR encoding provides inherent compression benefits:
- Variable-length encoding: Reduces size of small integers
- Dictionary encoding: Compresses repeated field names
- Float compression: Efficient encoding of floating-point values
- Selective compression: Apply compression selectively to non-critical data
- Delta encoding: Compress consecutive credential updates
- Content negotiation: Adapt compression based on client capabilities
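One caveat worth demonstrating: key and signature material is statistically random, so generic compression only pays off on the structured metadata around it, which is why the selective-compression bullet above matters. A small illustration using Python's zlib (the metadata payload is fabricated for the demo):

```python
import os
import zlib

# Cryptographic key material is effectively random: DEFLATE cannot shrink it
# and typically adds a few bytes of framing overhead instead.
random_key = os.urandom(1952)               # size of an ML-DSA-65 public key
structured = b'{"alg": "ML-DSA-65"}' * 100  # repetitive JSON-ish metadata

assert len(zlib.compress(random_key)) >= len(random_key) * 0.95
assert len(zlib.compress(structured)) < len(structured) * 0.1
```

The practical takeaway: compress the CBOR envelope and metadata, and skip the key and signature fields themselves.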
Optimized database schemas for ML-DSA credentials:
CREATE TABLE credentials (
id UUID PRIMARY KEY,
user_id UUID NOT NULL,
public_key BYTEA NOT NULL,
signature BYTEA NOT NULL,
algorithm VARCHAR(20) NOT NULL,
created_at TIMESTAMP WITH TIME ZONE NOT NULL,
last_used TIMESTAMP WITH TIME ZONE,
storage_compressed BOOLEAN DEFAULT FALSE
);
- Tiered storage: Different storage tiers for active/inactive credentials
- Compression ratios: Monitor and optimize compression effectiveness
- Retention policies: Implement appropriate retention for historical data
Section sources
- fido2/cose.py
While ML-DSA provides quantum resistance, hybrid approaches offer additional security layers during the transition period:
sequenceDiagram
participant Client as "Client Device"
participant Server as "Authentication Server"
participant Validator as "Signature Validator"
Client->>Server : Registration Request
Server->>Server : Generate ECDSA Key Pair
Server->>Server : Generate ML-DSA Key Pair
Server->>Client : Return Combined Credentials
Client->>Server : Authentication Request
Client->>Client : Sign with Both Keys
Client->>Server : Submit Dual Signatures
Server->>Validator : Verify ECDSA Signature
Server->>Validator : Verify ML-DSA Signature
Validator->>Server : Validation Results
Server->>Client : Authentication Result
Diagram sources
- server/server/pqc.py
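The dual-signature flow in the diagram reduces to an AND across verifiers: authentication succeeds only if every signature checks out, so an attacker must break both the classical and the post-quantum scheme. A sketch with placeholder verifier callables standing in for real ECDSA and ML-DSA checks:

```python
from typing import Callable

def verify_hybrid(message: bytes,
                  checks: list[Callable[[bytes], bool]]) -> bool:
    """Hybrid AND-policy: accept only if every signature verifies.

    Each element of `checks` is a closure over one (algorithm, public key,
    signature) triple; here they are placeholders, not real crypto.
    """
    return all(check(message) for check in checks)

# Both verifiers pass -> accepted; either fails -> rejected.
ok = verify_hybrid(b"challenge", [lambda m: True, lambda m: True])
bad = verify_hybrid(b"challenge", [lambda m: True, lambda m: False])
```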
The platform supports dynamic algorithm selection:
- Client capability detection: Identify supported algorithms
- Fallback mechanisms: Graceful degradation to classical algorithms
- Progressive migration: Gradual shift from classical to quantum-resistant algorithms
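Capability detection plus fallback amounts to picking the first server-preferred algorithm the client also supports. A sketch of that negotiation (the ML-DSA COSE identifiers are draft values and an assumption here):

```python
def negotiate(server_prefs: list[int], client_algs: set[int]) -> int:
    """Pick the first server-preferred COSE algorithm the client supports.

    Ordering the server list PQC-first implements progressive migration:
    capable clients get ML-DSA, older clients fall back to ES256.
    """
    for alg in server_prefs:
        if alg in client_algs:
            return alg
    raise ValueError("no mutually supported algorithm")

# -49 / -48 are draft COSE identifiers for ML-DSA-65 / -44; -7 is ES256.
SERVER_PREFS = [-49, -48, -7]
```

To shift traffic toward PQC over time, operators only need to reorder or extend SERVER_PREFS; clients never have to change.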
Hybrid systems require coordination with certificate authorities:
- Multi-algorithm certificates: Support for multiple signature algorithms
- Trust model evolution: Updated trust relationships for quantum-resistant certificates
- Metadata synchronization: Coordinated updates to certificate metadata
- Increased key management: More keys to manage and secure
- Validation overhead: Additional computational requirements
- Compatibility testing: Extensive testing across different algorithm combinations
Design for future algorithm transitions:
- Interface abstraction: Clean separation between algorithms and implementation
- Plugin architecture: Modular algorithm support
- Configuration flexibility: Dynamic algorithm selection based on requirements
Monitor evolving standards:
- NIST PQC updates: Stay current with new algorithm selections
- WebAuthn evolution: Adapt to new WebAuthn specification requirements
- Industry best practices: Adopt emerging best practices for quantum-resistant cryptography
Section sources
- server/server/routes/advanced.py
The platform implements comprehensive monitoring for PQC operations:
flowchart TD
A["Authentication Request"] --> B["Algorithm Detection"]
B --> C{"PQC Available?"}
C --> |Yes| D["ML-DSA Algorithm"]
C --> |No| E["Classical Algorithm"]
D --> F["Log PQC Usage"]
E --> G["Log Classical Usage"]
F --> H["Monitor Performance"]
G --> H
H --> I["Alert on Anomalies"]
Diagram sources
- server/server/pqc.py
Key metrics for PQC operation monitoring:
- Signing latency: Track signing operation duration
- Success rates: Monitor authentication success rates
- Resource utilization: CPU and memory usage during PQC operations
- Error rates: Track failures and exceptions
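A common way to operationalize the latency metric is a percentile threshold rather than a mean, since PQC signing latency is tail-heavy. A sketch using only the standard library (the threshold and sample window are illustrative):

```python
import statistics

def p95_latency_alert(samples_ms: list[float], threshold_ms: float) -> bool:
    """Fire an alert when 95th-percentile signing latency crosses a threshold.

    quantiles(n=20) yields 19 cut points; the last one estimates p95.
    """
    p95 = statistics.quantiles(samples_ms, n=20)[-1]
    return p95 > threshold_ms
```

Alerting on p95 instead of the average catches the degradation pattern described above (a small fraction of very slow signings) that a mean would smooth away.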
Monitor for potential security issues:
- Unexpected algorithm usage: Detect unauthorized algorithm changes
- Performance degradation: Identify potential side-channel attacks
- Resource exhaustion: Monitor for denial-of-service attacks
- Certificate validation failures: Track PQC certificate verification issues
Essential dashboard metrics:
| Metric Category | Specific Metrics |
|---|---|
| Algorithm Usage | PQC vs classical algorithm adoption rates |
| Performance | Signing latency, verification throughput |
| Health Status | Algorithm availability, error rates |
| Security Events | Unexpected usage patterns, validation failures |
Configure alerts for critical events:
- Algorithm unavailability: Immediate notification when PQC algorithms unavailable
- Performance degradation: Alerts when signing latency exceeds thresholds
- Security incidents: Automated response to detected anomalies
- Resource constraints: Notifications for high CPU/memory usage
The platform provides detailed logging for PQC operations:
- Algorithm selection: Log all algorithm choices during authentication
- Performance metrics: Record timing information for all operations
- Error conditions: Capture detailed error information for troubleshooting
- Security events: Log all security-relevant events
Maintain audit trails for compliance:
- Algorithm change history: Track all algorithm modifications
- Performance baseline: Historical performance data for trend analysis
- Security incident logs: Complete records of security events
- Operational changes: Documentation of all system modifications
Section sources
- server/server/device_logs.py
- server/server/attestation.py
Stay prepared for future algorithm standardization:
timeline
title PQC Algorithm Evolution
2022 : Round 3 Selections
: Dilithium (ML-DSA), Falcon, SPHINCS+
2024 : NIST Standards
: FIPS 204 (ML-DSA) published
2025 : Deployment Phase
: Gradual migration plans
2030 : Legacy Removal
: Complete transition period
- Assessment Phase (2024-2025)
- Evaluate current infrastructure readiness
- Identify critical systems requiring migration
- Develop migration timelines and resources
- Parallel Operation Phase (2025-2027)
- Deploy hybrid systems supporting multiple algorithms
- Gradually increase quantum-resistant algorithm usage
- Monitor performance and compatibility issues
- Full Migration Phase (2027-2030)
- Complete transition to quantum-resistant algorithms
- Decommission classical algorithm support
- Update all system components and dependencies
- Backward compatibility: Maintain support for classical algorithms during transition
- Testing rigor: Extensive testing of new algorithms in production-like environments
- Rollback capabilities: Ability to quickly revert to classical algorithms if needed
- Training programs: Educate operations teams on new technologies
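Rollback is simplest when PQC sits behind a runtime feature flag; a sketch (the environment variable name is illustrative, not the platform's actual configuration):

```python
import os

def pqc_enabled() -> bool:
    """Feature flag: lets operators disable ML-DSA and revert to classical
    algorithms without a redeploy. ENABLE_MLDSA is a hypothetical name."""
    return os.environ.get("ENABLE_MLDSA", "true").lower() == "true"
```

Wiring algorithm advertisement through a flag like this turns rollback from a code change into a configuration change, which is what makes the "quickly revert" requirement above achievable in practice.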
- Algorithm validation: Complete testing of the NIST-standardized algorithms
- Infrastructure updates: Prepare systems for quantum-resistant algorithms
- Staff training: Educate teams on new cryptographic concepts
- Production deployment: Begin rolling out quantum-resistant systems
- Performance optimization: Fine-tune implementations for production workloads
- Integration testing: Comprehensive testing with existing systems
- Complete migration: Full transition to quantum-resistant cryptography
- Legacy system cleanup: Remove outdated cryptographic implementations
- Continuous monitoring: Establish ongoing monitoring and maintenance processes
- Algorithm certification: Ensure all algorithms receive proper certification
- Vendor support: Maintain relationships with algorithm providers
- Community engagement: Participate in cryptographic communities and standards bodies
- Cross-platform compatibility: Test across different operating systems and architectures
- Third-party integration: Ensure compatibility with external systems and services
- Protocol evolution: Adapt to evolving WebAuthn and related standards
Section sources
- server/server/attestation.py
- Risk assessment: Evaluate security requirements against performance costs
- Future-proofing: Choose algorithms with strong long-term security prospects
- Compliance requirements: Ensure alignment with regulatory and industry standards
- Secure installation: Follow secure installation procedures for liboqs
- Access controls: Implement strict access controls for cryptographic materials
- Regular updates: Maintain current versions of cryptographic libraries
- Vulnerability management: Monitor and address security vulnerabilities promptly
- Hardware requirements: Ensure adequate CPU and memory resources
- Network optimization: Configure networks for increased bandwidth requirements
- Storage optimization: Use appropriate storage solutions for larger data volumes
- Load balancing: Distribute load appropriately across multiple servers
- Caching strategies: Implement intelligent caching for frequently accessed data
- Batch processing: Group operations to improve efficiency
- Resource pooling: Share resources across multiple operations
- Monitoring integration: Integrate with comprehensive monitoring systems
- Regular testing: Conduct regular testing of PQC operations
- Performance monitoring: Continuously monitor system performance
- Capacity planning: Plan for increased resource requirements
- Disaster recovery: Develop disaster recovery plans for quantum-resistant systems
- Knowledge transfer: Ensure team members understand quantum-resistant cryptography
- Standard operating procedures: Develop SOPs for PQC operations
- Incident response: Train teams for quantum-related security incidents
- Continuous learning: Encourage ongoing education in quantum-resistant technologies
- Standards adherence: Ensure compliance with relevant standards and regulations
- Audit preparation: Maintain documentation for regulatory audits
- Risk management: Implement comprehensive risk management processes
- Governance frameworks: Establish governance frameworks for quantum-resistant systems
- Change control: Implement rigorous change control processes
- Testing requirements: Ensure thorough testing of all changes
- Documentation requirements: Maintain comprehensive documentation
- Approval processes: Establish clear approval processes for changes
Section sources
- server/server/config.py
- pyproject.toml