LLM_TEST_INSTRUCTIONS
You are a computational agent within the Z Framework, unifying physical and discrete domains via the invariant speed of light ( c ). Your task is to instantiate DiscreteZetaShift objects in memory to build a lattice for geometric analysis, resolving distortions through curvature-based geodesics.
Framework Context: This implements a novel discrete lattice system for geometric analysis. The 'DiscreteZetaShift' class represents lattice nodes with number-theoretic properties. If these are theoretical constructs, implement them as computational approximations following the mathematical specifications below.
Core Specifications:
- Universal Form: Use ( Z = A(B/c) ), where in the discrete domain ( Z = n(\Delta_n / \Delta_{\max}) ), with ( \Delta_n \propto v \cdot \kappa(n) ) and ( \kappa(n) = d(n) \cdot \ln(n+1)/e^2 ).
- Geometric Transformation: Apply ( \theta'(n, k) = \phi \cdot ((n \mod \phi)/\phi)^k ) at optimal ( k^* \approx 0.3 ) for lattice node placement, targeting 15% density enhancement (CI [14.6%, 15.4%]).
- 5D Helical Embedding: Map lattice points to coordinates ( (x = a \cos(\theta_D), y = a \sin(\theta_E), z = F/e^2, w = I, u = O) ), enforcing ( v_{5D}^2 = c^2 ).
5D Helical Embedding - Explicit Definitions (a minimal Python sketch of these functions follows the list):
- ( \theta_D = \theta'(n, k^*) ) (computed geometric transformation)
- ( \theta_E = \theta'(n, k^*) \times \phi ) (phase-shifted by golden ratio)
- ( F = \kappa(n) ) (curvature parameter)
- ( I = \ln(n+1) ) (information-theoretic component)
- ( O = \Delta_n / c ) (normalized shift parameter)
- ( a = \sqrt{\kappa(n)} ) (amplitude scaling)
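A minimal Python sketch of these core functions, assuming natural units with ( c = 1 ) and using `sympy.divisor_count` for ( d(n) ); the function names are illustrative, not the repository API:

```python
# Sketch of kappa(n), theta'(n, k), and the 5D helical embedding.
# Constants and the unit choice c = 1 are assumptions, not repo definitions.
import math
from sympy import divisor_count

PHI = (1 + math.sqrt(5)) / 2        # golden ratio phi
E2 = math.e ** 2                    # e^2 normalization
K_STAR = 0.3                        # optimal exponent k* ~ 0.3

def kappa(n: int) -> float:
    """Curvature kappa(n) = d(n) * ln(n+1) / e^2."""
    return divisor_count(n) * math.log(n + 1) / E2

def theta_prime(n: int, k: float = K_STAR) -> float:
    """Transformation theta'(n, k) = phi * ((n mod phi) / phi)^k."""
    return PHI * ((n % PHI) / PHI) ** k

def embed_5d(n: int, delta_n: float, c: float = 1.0) -> tuple:
    """5D coordinates (x, y, z, w, u) per the explicit definitions above."""
    theta_d = theta_prime(n)        # theta_D
    theta_e = theta_d * PHI         # theta_E, phase-shifted by phi
    f = kappa(n)                    # F, curvature parameter
    a = math.sqrt(f)                # amplitude a = sqrt(kappa(n))
    return (a * math.cos(theta_d),  # x
            a * math.sin(theta_e),  # y
            f / E2,                 # z = F / e^2
            math.log(n + 1),        # w = I = ln(n+1)
            delta_n / c)            # u = O = Delta_n / c
```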
Computing Environment:
- Primary: Python with specified libraries (mpmath, sympy, numpy)
- Fallback: JavaScript implementation for browser-based execution
- Specify which environment is available before beginning implementation
- Include error handling for missing libraries
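One way to meet the fallback and error-handling requirements is an import guard; this is a sketch, and the trial-division fallback is a plain substitute for `sympy.divisor_count`:

```python
# Detect optional libraries at startup and degrade gracefully.
try:
    import mpmath
    mpmath.mp.dps = 50              # 50+ decimal digits when available
    HIGH_PRECISION = True
except ImportError:
    mpmath = None
    HIGH_PRECISION = False

try:
    from sympy import divisor_count
except ImportError:
    def divisor_count(n: int) -> int:
        """Built-in fallback: count divisors by trial division to sqrt(n)."""
        count, i = 0, 1
        while i * i <= n:
            if n % i == 0:
                count += 2 if i * i != n else 1
            i += 1
        return count

print(f"environment: high precision = {HIGH_PRECISION}")
```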
Instantiation Protocol:
- Import necessary modules: Use Python with mpmath (dps=50+ for precision, if available), sympy for divisor functions, and numpy for array operations. If libraries are missing, fall back to built-in math functions with equivalent algorithms (see the import guard above).
- Define or load the DiscreteZetaShift class:
- Class attributes: Include shift parameters for zeta corrections, e.g., ( \Delta_n ), curvature ( \kappa ), and helical components (D, E, F, I, O).
- Instantiation: Create objects as `dz = DiscreteZetaShift(n)`, where ( n ) is the integer frame (scale from ( n = 10^3 ) up to ( 10^9 ) as resources allow).
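Because the repository class may not be importable in every environment, a computational approximation consistent with the attributes above can stand in. This sketch reuses `kappa`, `theta_prime`, `embed_5d`, and `PHI` from the earlier sketch; the constructor signature, ( v/c = 0.8 ), and ( c = 1 ) are assumptions:

```python
import math

class DiscreteZetaShift:
    """Approximation of a lattice node (not the repository original)."""
    V_RATIO = 0.8                   # assumed v/c for simulation
    C = 1.0                         # natural units: c = 1 (assumption)

    def __init__(self, n: int):
        self.n = n
        self.kappa = kappa(n)                               # curvature kappa(n)
        self.delta_n = self.V_RATIO * self.C * self.kappa   # Delta_n = v * kappa(n)
        self.D = theta_prime(n)                             # theta_D
        self.E = self.D * PHI                               # theta_E
        self.F = self.kappa                                 # curvature parameter F
        self.I = math.log(n + 1)                            # information component I
        self.O = self.delta_n / self.C                      # normalized shift O
        self.coords = embed_5d(n, self.delta_n, self.C)     # 5D helical point
```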
- Build the Lattice:
- Initialize an empty lattice as a multidimensional array or graph (e.g., numpy ndarray or networkx graph; use built-in arrays if libraries unavailable).
- For each ( n ) in range(1, N+1):
- Instantiate `dz_n = DiscreteZetaShift(n)`.
- Compute shift ( \Delta_n = v \cdot \kappa(n) ) (assume ( v/c = 0.8 ) for simulation).
- Embed in 5D: Calculate coordinates using ( \theta'(n, k^*) ) and helical formulas.
- Add node to lattice with edges chosen to minimize curvature geodesics (target spread ( \sigma \approx 0.016 )).
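A build loop under these steps might look like the following sketch; the consecutive-( n ) edge rule is an assumption, since the geodesic-minimizing criterion is not fully specified above:

```python
import numpy as np

def build_lattice(N: int):
    """Instantiate nodes 1..N and store coordinates in O(N) arrays."""
    coords = np.empty((N, 5))
    kappas = np.empty(N)
    for i, n in enumerate(range(1, N + 1)):
        dz_n = DiscreteZetaShift(n)
        coords[i] = dz_n.coords
        kappas[i] = dz_n.kappa
    # Assumed edge rule: link consecutive n, keeping curvature differences
    # (and hence geodesic segment lengths) small along the helix.
    edges = [(i, i + 1) for i in range(N - 1)]
    return coords, kappas, edges

coords, kappas, edges = build_lattice(100)
print(f"nodes={len(kappas)}, kappa mean={kappas.mean():.4f}, std={kappas.std():.4f}")
```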
- Validate Geometrically:
- Compute zeta zero correlations (target Pearson ( r \approx 0.93 ), ( p < 10^{-10} )) if feasible.
- Assert asymptotic convergence via TC-INST-01 equidistribution bounds.
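The protocol leaves open which lattice statistic pairs with the zeta zeros; as one illustrative and entirely hypothetical pairing, this sketch correlates the curvature sequence with consecutive zero spacings using `mpmath.zetazero` and `scipy.stats.pearsonr`:

```python
from mpmath import zetazero
from scipy import stats

M = 50                              # keep M small: zetazero is expensive
zeros = [float(zetazero(j).imag) for j in range(1, M + 1)]
spacings = [b - a for a, b in zip(zeros, zeros[1:])]
# Hypothetical pairing: kappa(n) sequence vs. consecutive zero spacings.
sample = [kappa(n) for n in range(2, 2 + len(spacings))]
r, p = stats.pearsonr(sample, spacings)
print(f"Pearson r = {r:.3f}, p = {p:.2e} (spec target: r ~ 0.93)")
```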
Implementation Priorities (in order):
- Core mathematical functions: ( \kappa(n) ), ( \theta'(n,k) ), 5D embedding
- Basic lattice construction and validation
- Statistical analysis and metrics
- Advanced spectral properties (if computationally feasible)
- Full geodesic analysis (for smaller N only)
Scaling Strategy: Start with N = 100 for validation, then N = 10^3, 10^6, and 10^9 as resources allow. If memory constrained, implement streaming/batch processing (sketch below).
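If memory is constrained, the build can run as a generator over fixed-size batches; the batch size, aggregation strategy, and progress cadence below are assumptions:

```python
def lattice_batches(N: int, batch: int = 10_000):
    """Yield (n, coords) batches so peak memory is O(batch), not O(N)."""
    for start in range(1, N + 1, batch):
        stop = min(start + batch, N + 1)
        yield [(n, DiscreteZetaShift(n).coords) for n in range(start, stop)]

count = 0
for chunk in lattice_batches(10**6):
    count += len(chunk)             # aggregate statistics here; do not store
    if count % 100_000 == 0:
        print(f"processed {count} nodes")   # progress updates for large N
```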
Validation Metrics (implement what's computationally feasible):
- ✅ Essential: Mathematical correctness of ( \kappa(n) ), ( \theta'(n,k) ) computations
- ✅ Essential: 5D coordinate embedding with constraint ( v_{5D}^2 = c^2 ) (see the sketch after this list)
- ✅ Important: Statistical analysis of curvature distribution
- 🔬 Research: Zeta zero correlations (if well-defined)
- 🔬 Research: Spectral properties (for tractable lattice sizes)
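The spec does not say how ( v_{5D}^2 = c^2 ) should be enforced; one reading, sketched here as an assumption, rescales each node's 5D vector to norm ( c ) and records the pre-scaling violation:

```python
import math

def enforce_c_norm(vec, c: float = 1.0, tol: float = 1e-12):
    """Rescale a 5D vector to norm c; return (scaled vector, violation)."""
    norm = math.sqrt(sum(x * x for x in vec))   # nonzero for n >= 1 here
    violation = abs(norm - c)                   # report before rescaling
    scaled = tuple(x * (c / norm) for x in vec)
    assert abs(math.sqrt(sum(x * x for x in scaled)) - c) < tol
    return scaled, violation
```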
Computational Constraints:
- Memory limit: Optimize for available RAM (O(N) storage preferred)
- Time limit: Provide progress updates for large N
- Precision: Use standard floating-point unless high precision is critical
- Error handling: Gracefully handle numerical instabilities
Execution Constraints:
- Operate in memory only; no file I/O unless for validation logs.
- Use high-precision arithmetic to bound errors below ( 10^{-16} ), if available.
Required Output:
- Construction Report: Node count, build time, memory usage
- Mathematical Validation: Sample calculations showing correctness
- Statistical Summary: Mean/std of ( \kappa(n) ), density metrics, constraint violations
- Performance Metrics: Scaling analysis and feasibility assessment
- Optional Research Outputs: Any observed patterns or correlations
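The construction report can be assembled with the standard library alone; `build_lattice` here is the earlier sketch, and the report format is illustrative:

```python
import time
import tracemalloc

tracemalloc.start()
t0 = time.perf_counter()
coords, kappas, edges = build_lattice(100)      # lattice sketch from above
build_s = time.perf_counter() - t0
_, peak = tracemalloc.get_traced_memory()       # (current, peak) in bytes
print(f"nodes={len(kappas)} build_time={build_s:.3f}s peak_mem={peak / 1e6:.2f} MB")
print(f"kappa: mean={kappas.mean():.4f} std={kappas.std():.4f}")
```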
Implementation Notes:
- If Python unavailable: Implement in JavaScript/other available language
- If libraries missing: Use built-in math functions with appropriate algorithms
- If memory constrained: Implement streaming/batch processing
- Always validate core mathematical relationships first
Execute this protocol starting with ( N = 100 ), then scale progressively. Report lattice metrics and any emergent patterns linking to physical invariants.