SYSTEM_REGISTER - mark-ik/graphshell GitHub Wiki
Doc role: Canonical hub / index for the Register layer and its component specs; subordinate to the top-level system architecture.
Status: Active / canonical (maintained; historical CP1/CP2 implementation details preserved below)
Short label: system_register
Primary runtime types: RegistryRuntime, ControlPanel
Related docs:
- ../system_architecture_spec.md (top-level system architecture parent spec)
- ../register_layer_spec.md (canonical Register layer spec)
- ../registry_runtime_spec.md (`RegistryRuntime` component spec)
- ../control_panel_spec.md (`ControlPanel` component spec)
- ../signal_bus_spec.md (`SignalBus` / signal-routing component spec)
- ../../../../verso_docs/implementation_strategy/coop_session_spec.md (Co-op host-led co-presence contract; distinct from device sync)
- archived embedder decomposition plan (embedder decomposition context)
- ../2026-02-21_lifecycle_intent_model.md (intent schema and reducer boundary)
- ../2026-02-22_registry_layer_plan.md (registry architecture and provider wiring)
- ../../PLANNING_REGISTER.md (cross-subsystem sequencing / backlog)
Policy authority: This file is the canonical policy authority for Register hub rules and register-spec family coordination. Policy in this file should be distilled from canonical specs and accepted research conclusions.
- Family-coordination policy: Register-wide routing and boundary policy is coordinated here; component-local semantics remain in component specs.
- Authority-routing policy: Direct call vs signal vs intent routing must follow the canonical decision table and authority boundaries.
- Misroute-visibility policy: Boundary misroutes and no-op fallthroughs should surface during development via explicit warnings/diagnostics.
- No-catch-all policy: This hub does not replace subsystem or component authority docs; it reconciles them.
- Convergence policy: Transitional routing patterns must converge toward explicit typed signal and provider-routed contracts.
- Shared-carrier policy: When multiple surfaces need the same semantics (settings routing, Navigator refresh, lens invalidation, viewer health, workbench projection), prefer one register-owned route or signal path over feature-local observer stacks.
The Register layer is split into separate canonical specs:
- ../register_layer_spec.md
- ../registry_runtime_spec.md
- ../control_panel_spec.md
- ../signal_bus_spec.md
This hub remains the navigation/index surface and historical implementation guide for Register-local material.
The Control Panel is the async adapter layer that allows concurrent background producers (network device sync, prefetch scheduler, memory monitor, mod loader lifecycle) to feed intents into the synchronous two-phase reducer without compromising determinism or testability.
The Control Panel is a core component of The Register (implemented today as RegistryRuntime + ControlPanel + an explicit SignalBus trait backed by SignalRoutingLayer).
Key principle: The reducer stays 100% synchronous and testable. All I/O and background work happens in supervised tokio tasks that communicate exclusively via the current intent queue. This is an async adapter layer around a deterministic sync core, not an async rewrite.
- Device Sync: durable workspace replication between trusted devices (remote delta carrier intents, peer status, version-vector convergence).
- Co-op: collaborative/co-presence behavior (live follow/presence/shared browsing context); not implied by Device Sync.
- UI and docs should use explicit labels (`Sync Devices`, `Start Co-op`) instead of plain `Sync` when both concepts are present.
This hub document exists to keep the Register architecture current without forcing all runtime coordination material into subsystem docs.
In scope:
- Register composition boundary (`RegistryRuntime`, `ControlPanel`, signal/event routing layer)
- Async worker supervision and intent ingress policy
- Cross-registry event distribution strategy (`SignalBus` or equivalent)
- Ownership boundaries between registries, mods, subsystems, and runtime coordinators
Out of scope:
- Subsystem-specific contracts/validation (see subsystem guides)
- Feature-specific workers (covered in their feature/subsystem docs except for Register-facing integration points)
Implemented:
- `ControlPanel` async worker supervision and multi-producer intent queueing
- Main GUI integration for control-panel worker lifecycle and intent draining
- RegistryRuntime composition root for atomic/domain registries and mod/runtime services
- Shared `RegistryRuntime` authority across GUI-owned runtime state and `phase3_*` helper paths
- `RegistryRuntime` provider-wired phase0 protocol/viewer dispatch paths and diagnostics coverage
- Runtime-owned content-pipeline completion: URI-aware protocol MIME inference, cancellable content-type probes, viewer capability description, viewer-surface profile resolution, and content-aware lens composition
- Folded viewer/surface capability-conformance declarations with runtime diagnostics inspection hooks
- Runtime-owned canvas, physics, layout-domain, and presentation-domain profile resolution paths
- Runtime-owned diagnostics, knowledge, and index authorities with semantic lifecycle signaling and omnibox submit-path search fanout
- Runtime-owned theme activation, built-in theme descriptors, and tokenized command/radial surfaces
- Register-owned `AgentRegistry` with `ControlPanel` supervision and built-in `agent:tag_suggester` signal-driven suggestion ingress
- Shell-owned omnibar provider suggestion fetches now route through `ControlPanel`-supervised host requests and return to the frame loop through an explicit mailbox instead of detached toolbar threads
Cross-system leverage already available:
- `ActionRegistry` + runtime dispatch can be the common execution path for top chrome, command palette, workbench chrome, and settings launchers.
- `SignalBus` / `SignalRoutingLayer` can fan out graph/lens/workflow/viewer state changes to Navigator, diagnostics, and settings surfaces without direct feature coupling.
- Diagnostics channels provide one shared receipt path for viewer, workbench, settings, history, and subsystem integrity behavior.
Gaps / active architectural work:
- Canonical docs/terminology wording still needs tightening around `Signal` vs `Intent` vs direct calls (routing rules are defined here but not yet propagated everywhere)
- Some authority-boundary misroutes are still too silent in fallback/no-op paths and should surface more explicitly during development
- Layout execution still uses `egui_graphs` as the widget substrate, but algorithm ownership is now registry-owned through `LayoutRegistry` + `app/graph_layout.rs`; remaining work here is stabilization, not missing authority structure
- Omnibar suggestion-dropdown UI still has a legacy local candidate pipeline; only the submit/action path is currently unified through `IndexRegistry`; provider-fetch execution is now supervised, but candidate synthesis/ranking policy is still partly local to Shell UI code
- `index:timeline` remains future history work; the provider shape is planned, but no live timeline index source exists yet
- `ModRegistry` still lacks a real WASM host / intent bridge, so Sector G is not fully closed
- Theme activation is runtime-owned, but startup OS-theme detection and mod-provided theme activation remain open follow-ons
- The registry development plan cannot be archived yet; `RendererRegistry` (Sector B) and the remaining Sector G mod follow-ons are still active work
- The Register: Runtime composition root / infrastructure host (`RegistryRuntime`) that owns registries, mod/runtime wiring, and supervises the `ControlPanel`.
- Control Panel: Async coordination/process host for workers that produce intents and background runtime tasks.
- `SignalBus`: Typed event distribution fabric for decoupled publish/subscribe between registries, mods, subsystems, and observers. Implemented today as a trait facade over `SignalRoutingLayer`.
This section defines the canonical decision rule for choosing between routing mechanisms. Every cross-module interaction in the codebase should map cleanly to one of these four rows.
Graph/workbench hierarchy note:
- `GraphViewId` is graph-owned scoped view identity
- Graph Bar names and switches graph-owned targets one UI level above workbench hosting
- Workbench authority may host or route a `GraphViewId` without becoming its semantic owner
| Mechanism | When to use | Authority boundary |
|---|---|---|
| Direct call | Same module / same struct; synchronous, co-owned state | No boundary crossing |
| Current reducer carrier (`GraphIntent` / reducer-intent path) | Mutation of reducer-owned semantic graph data model; must be deterministic, testable, WAL-logged | Graph Reducer boundary |
| `WorkbenchIntent` (frame-loop intercept) | Mutation of tile-tree shape (`egui_tiles`); workbench authority owns layout | Workbench Mutation Authority |
| `Signal` / `SignalBus` | Decoupled cross-registry or cross-subsystem notification; emitter must not know observer | Register-owned signal layer |
The architecture has two distinct mutation authorities, not one:
1. Graph Reducer (`apply_reducer_intents` in `graph_app.rs`)
Authoritative for:
- Graph data model (nodes, edges, selections)
- Node/edge lifecycle transitions (Cold → Warm → Hot and reverse)
- Traversal history, WAL journal, undo/redo checkpoints
- WebView → node mapping (`MapWebview`, `UnmapWebview`)
Properties: always synchronous, always logged, always testable in isolation.
2. Workbench Authority (Gui frame loop, `tile_behavior.rs`, `tile_view_ops.rs`)
Authoritative for:
- Tile-tree shape mutations (splits, tabs, pane open/close/focus)
- `TileKind` pane insertion, removal, and focus changes
- Hosted presentation of graph-owned `GraphViewId` surfaces once routing has been resolved
The tile tree is an egui_tiles construct, not graph state. Tile mutations do
not need the WAL, the graph reducer, or ControlPanel involvement.
Intents that cross from workbench into the graph reducer (e.g. OpenToolPane
dispatched during a graph interaction) belong in the Workbench Authority
intercept path (`handle_tool_pane_intents` in `shell/desktop/ui/gui.rs`). This is correct
architecture, not a gap, provided the intercept is documented and
consistent.
Current status (2026-03-10):
- The old silent-no-op gap has been addressed by a reducer-side warning/classification seam for graph-carrier intents that are actually workbench-authority bridges.
- `WorkbenchSurfaceRegistry` now exists and is the concrete workbench authority object reached by the frame-loop adapter path.
- The practical bridge seam is still not raw `WorkbenchIntent` reaching `apply_intents()` by type; it is graph-carrier bridge intents such as `RouteGraphViewToWorkbench` reaching reducer ingress before being forwarded to workbench authority (see the sketch below).
- Workflow lifecycle changes now publish through the Register signal-routing layer, and Sector D now provides runtime-stateful canvas/physics/layout authorities for those activations.
- Semantic-index changes now also publish through the Register signal-routing layer, and the GUI-side observer path re-resolves registry-backed view lenses when those lifecycle notifications arrive.
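To make the seam concrete, here is a minimal sketch of that reducer-ingress classification. Only `GraphIntent`, `RouteGraphViewToWorkbench`, and `WorkbenchSurfaceRegistry` are names from this hub; the field layouts, `host_graph_view`, and the warning text are illustrative assumptions, not the shipped code.

```rust
// Hypothetical shapes for illustration only.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct GraphViewId(u64);

#[derive(Debug)]
enum GraphIntent {
    RouteGraphViewToWorkbench { view: GraphViewId },
    SelectNode { node: u64 }, // stand-in for reducer-owned variants
}

#[derive(Default)]
struct WorkbenchSurfaceRegistry {
    hosted: Vec<GraphViewId>,
}

impl WorkbenchSurfaceRegistry {
    fn host_graph_view(&mut self, view: GraphViewId) {
        self.hosted.push(view);
    }
}

/// Splits bridge intents off at reducer ingress: workbench-authority work is
/// forwarded (and surfaced loudly, per the misroute-visibility policy),
/// everything else continues to the reducer.
fn classify_bridge_intents(
    intents: Vec<GraphIntent>,
    workbench: &mut WorkbenchSurfaceRegistry,
) -> Vec<GraphIntent> {
    intents
        .into_iter()
        .filter_map(|intent| match intent {
            GraphIntent::RouteGraphViewToWorkbench { view } => {
                eprintln!("warn: bridge intent intercepted at reducer ingress");
                workbench.host_graph_view(view);
                None
            }
            other => Some(other),
        })
        .collect()
}
```

The point is the ordering: bridge intents are diverted before `apply_intents` ever sees them, so tile layout never couples to the WAL.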
- Do not call `apply_intents` for tile-tree mutations. Tile layout is workbench authority; routing it through the graph reducer couples layout to the WAL and makes tests harder.
- Do not use direct calls across registry boundaries. Two registries that need to coordinate should use a Signal or delegate to `ControlPanel`, not call each other's internal methods.
- Do not accumulate workbench state in `GraphBrowserApp` workspace fields. Legacy panel booleans have been removed; pane-open/close state must live in the tile tree.
- Do not bypass `ControlPanel` for background intent producers. All background tasks that produce current reducer-carrier values must go through `ControlPanel`'s supervised worker model, not spawn independent threads.
- Do not bypass `ControlPanel` for Shell-owned short-lived background requests. Ephemeral UI-triggered work such as omnibar/provider lookups should use supervised host requests and frame-bound mailbox ingestion, not raw detached threads hidden in UI modules.
Carrier interpretation note:
- `GraphIntent` is the active bridge carrier across much of the current runtime
- This hub should not be read as freezing the top-level architecture around a permanently universal `GraphIntent`
- Register-layer routing and authority rules remain valid even if the carrier surface later evolves into `AppCommand` / planner / transaction layers
Goals:
- Keep `RegistryRuntime` as the composition root and `ControlPanel` as the async coordinator (avoid collapsing terms prematurely)
- Remove doc/code mismatches that imply a concrete `SignalBus` implementation where none exists
- Make signal/event routing a named internal layer owned by the Register (even if still direct/transitional)
Done gates:
- Hub docs and terminology consistently describe `SignalBus` as planned / equivalent abstraction (register_layer_spec.md, signal_bus_spec.md, TERMINOLOGY.md)
- `ControlPanel` APIs and comments avoid implying ownership of registries
- `RegistryRuntime` integration issue follow-ups are linked from this hub (#81, #82)
Goals:
- Define typed signal envelopes and ownership (signal types, payloads, source metadata, optional causality stamp); a sketch follows the done gates below
- Define publisher/observer contracts for registries and subsystems
- Provide an internal Register-owned routing facade that can start as direct fanout
Why:
- Registries need to publish/observe changes without direct wiring
- Mods need to trigger cross-registry workflows without point-to-point coupling
- Subsystem health/diagnostics propagation benefits from decoupled observers
- Future async/event-heavy coordination will otherwise push coupling into `ControlPanel`
- Shared UI projection surfaces (Navigator, settings pages, diagnostics panes) need one reusable invalidation path instead of feature-local refresh plumbing
Done gates:
- `Signal` type families and source metadata contract documented (`runtime/registries/signal_routing.rs`)
- Register-owned routing API skeleton exists (facade/trait/module)
- At least one producer + two observers integrated through the routing contract (navigation producer + observer integration tests)
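The envelope contract is still being defined; as a sketch of what the SR2 goals above could produce, the shapes below are assumptions only, not the shipped `Signal` types:

```rust
use std::time::Instant;

// Hypothetical envelope sketch illustrating the SR2 goals.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum SignalSource {
    Registry(&'static str), // e.g. "index", "layout"
    Mod(&'static str),
    Subsystem(&'static str),
}

#[derive(Debug, Clone)]
enum SignalKind {
    WorkflowLifecycleChanged { workflow_id: u64 },
    SemanticIndexInvalidated { index_id: u64 },
}

#[derive(Debug, Clone)]
struct SignalEnvelope {
    kind: SignalKind,
    source: SignalSource,
    emitted_at: Instant,
    /// Optional causality stamp so observers can order related signals.
    causality: Option<u64>,
}
```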
Goals:
- Implement publish/subscribe event distribution fabric under the Register
- Support typed signals and decoupled observers
- Preserve deterministic reducer semantics by keeping signal handling outside direct state mutation paths (or routing to intents where needed)
Core requirements:
- Publish/subscribe
- Typed signals
- Decoupled observers
- Backpressure/error handling policy (drop/coalesce/retry diagnostics must be explicit)
- Diagnostics hooks (queue depth, dropped signals, handler failures, latency)
Done gates:
- Register signal bus/fabric implementation replaces transitional direct routing in selected paths (navigation + lifecycle + registry/input event paths)
- Diagnostics channels report signal routing health (`register:signal_routing:*`)
- Mod-triggered cross-registry workflow path uses signal routing instead of direct wiring (mod lifecycle route via `phase3_route_mod_lifecycle_event`)
- Subsystem health propagation path uses signal routing (or equivalent observer fabric) (`phase3_propagate_subsystem_health_memory_pressure`)
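As a sketch of how the SR3 requirements could meet the done gates, a Register-owned facade might start as synchronous direct fanout behind the `SignalBus` trait name used in this hub. The method signatures, `DeliveryReport`, and observer trait below are assumptions; `SignalEnvelope` is the hypothetical type from the SR2 sketch above.

```rust
/// What happened to a published signal; drops must be explicit, never silent.
#[derive(Debug, Default)]
struct DeliveryReport {
    delivered: usize,
    dropped: usize,
}

trait SignalObserver: Send {
    fn on_signal(&mut self, signal: &SignalEnvelope);
}

trait SignalBus {
    /// Register an observer; starts as direct fanout, may become queued later.
    fn subscribe(&mut self, observer: Box<dyn SignalObserver>);

    /// Publish to all observers, reporting delivery so diagnostics
    /// (queue depth, dropped signals) have something to hook into.
    fn publish(&mut self, signal: SignalEnvelope) -> DeliveryReport;
}

/// Transitional implementation: synchronous direct fanout, no queueing.
#[derive(Default)]
struct DirectFanoutBus {
    observers: Vec<Box<dyn SignalObserver>>,
}

impl SignalBus for DirectFanoutBus {
    fn subscribe(&mut self, observer: Box<dyn SignalObserver>) {
        self.observers.push(observer);
    }

    fn publish(&mut self, signal: SignalEnvelope) -> DeliveryReport {
        for obs in &mut self.observers {
            obs.on_signal(&signal);
        }
        DeliveryReport { delivered: self.observers.len(), dropped: 0 }
    }
}
```

Starting with direct fanout keeps reducer determinism intact (publish never mutates state directly) while leaving room to swap queueing and backpressure in behind the same trait.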
Goals:
- Complete migration away from remaining legacy desktop dispatch paths
- Make Register-owned provider wiring and event routing the canonical runtime path
- Keep `ControlPanel` focused on worker orchestration, not cross-registry policy dispatch
Done gates:
- Legacy dispatch callsites removed or wrapped behind Register APIs (phase0/phase2/phase3 runtime dispatch wrappers)
- Legacy dispatch callsites removed or wrapped behind Register APIs (phase0 navigation/provider-wired runtime dispatch slice; see #82)
- Internal settings workbench routes (`verso://settings/*` canonical, legacy `graphshell://settings/*` alias) are pane-authority and no longer reducer-panel owned
- `RegistryRuntime` + `SignalBus` responsibilities are documented and tested (SR2/SR3/SR4 routing tests + runtime dispatch tests)
- `ControlPanel` API surface reflects coordinator/process-host role only (queue internals private; coordination exposed via spawn/drain/shutdown and command/lifecycle policy APIs)
```
┌────────────────────────────────────────────────────────────────┐
│ Frame Loop (sync)                                              │
│ drain intent_rx → sort by causality → apply_reducer_intents() │
│ → reconcile_webview_lifecycle() → render()                     │
└────────────────────────────────────────────────────────────────┘
         ↑ intent_rx (non-blocking try_recv)
┌────────────────────────────────────────────────────────────────┐
│ ControlPanel (async layer)                                     │
│ intent_tx · CancellationToken · JoinSet<workers>               │
├────────────────────────────────────────────────────────────────┤
│ memory_monitor_worker (active)                                 │
│ mod_loader_worker (stub)                                       │
│ prefetch_scheduler (future)                                    │
│ p2p_sync_worker (in progress)                                  │
└────────────────────────────────────────────────────────────────┘
```
```rust
/// Intent with source tracking and causality ordering.
#[derive(Debug, Clone)]
pub struct QueuedIntent {
    pub intent: GraphIntent,
    pub queued_at: Instant,
    pub source: IntentSource,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, PartialOrd, Ord)]
pub enum IntentSource {
    /// User keyboard/mouse input
    LocalUI,
    /// Servo browser delegate (navigation, load completion, etc.)
    ServoDelegate,
    /// Memory/system monitor
    MemoryMonitor,
    /// Mod loader lifecycle events
    ModLoader,
    /// Background prefetch scheduler
    PrefetchScheduler,
    /// P2P sync worker
    P2pSync,
    /// Restore/replay from persistence
    Restore,
}

pub struct ControlPanel {
    /// Intent queue internals remain private; frame loop uses drain API.
    intent_tx: mpsc::Sender<QueuedIntent>,
    intent_rx: mpsc::Receiver<QueuedIntent>,
    /// Shared cancellation token; cancel() stops all workers.
    cancel: CancellationToken,
    /// Supervised set of background worker tasks.
    workers: JoinSet<()>,
}
```

```rust
pub fn run_frame(
    app: &mut GraphBrowserApp,
    control_panel: &mut ControlPanel,
    // ... other args
) {
    let mut all_intents = Vec::new();

    // Local UI intents (synchronous)
    all_intents.extend(collect_keyboard_intents());
    all_intents.extend(collect_mouse_intents());

    // Servo delegate events → intents (synchronous)
    all_intents.extend(graph_intents_from_pending_semantic_events());

    // Async producer intents (non-blocking drain)
    all_intents.extend(control_panel.drain_pending());

    // Sort by causality for determinism
    all_intents.sort_by_key(|intent| intent.causality_order());

    // Apply atomically (pure state, no side effects)
    apply_intents(app, all_intents);

    // Reconcile (side effects: webview creation/destruction)
    reconcile_webview_lifecycle(app);

    render(app);
}
```

Status: ✅ Complete (2026-02-23)
Goals:
- `QueuedIntent`, `IntentSource`, `ControlPanel` struct defined
- Intent channel initialized (`mpsc::channel` capacity 256)
- `CancellationToken` wired to all workers
- `JoinSet` supervises all background tasks
- `memory_monitor_worker` stub spawned (emits `DemoteNodeToCold` under pressure)
- `shutdown()` cancels all workers and awaits `JoinSet` drain
Files:
- `desktop/control_panel.rs` (new)
- `shell/desktop/mod.rs`: expose `control_panel` module
Done gates:
- `ControlPanel::new()` creates channel + token + empty `JoinSet`
- `ControlPanel::spawn_memory_monitor()` spawns first supervised worker
- `ControlPanel::shutdown()` gracefully stops all workers
Status: ✅ Complete (2026-02-24). Integrated into Gui frame loop.
Update (2026-02-24): ControlPanel fully wired into main app:
- Added `tokio_runtime: tokio::runtime::Runtime` and `control_panel: ControlPanel` fields to `Gui` struct
- Workers spawned in `Gui::new()` within runtime context: memory monitor, mod loader, sync worker
- Intent channel drained each frame via `control_panel.drain_pending()` before `apply_intents`
- Graceful shutdown in `Drop` via `tokio_runtime.block_on(control_panel.shutdown())` (see the sketch below)
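A minimal sketch of that `Drop` wiring, assuming a hypothetical `Gui` shape; only the two field names come from the list above, and `ControlPanel` is the struct defined earlier in this doc:

```rust
struct Gui {
    tokio_runtime: tokio::runtime::Runtime,
    control_panel: ControlPanel,
}

impl Drop for Gui {
    fn drop(&mut self) {
        // shutdown() is async (it cancels workers and awaits JoinSet drain),
        // so Drop has to block on it inside the owned runtime.
        self.tokio_runtime.block_on(self.control_panel.shutdown());
    }
}
```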
Goals:
- Mod loader worker supervises mod load/unload lifecycle events
- Failed mod loads emit `GraphIntent::ModLoadFailed { mod_id, reason }` through the channel
- Successful mod activations emit `GraphIntent::ModActivated { mod_id }`
- Mod worker respects cancellation token for graceful shutdown
Done gates:
- `ControlPanel::spawn_mod_loader()` implemented
- Mod lifecycle intents defined in `GraphIntent` (ModActivated, ModLoadFailed)
- Mod worker cancels cleanly on token signal
- Coordinated with Registry Phase 2: `NativeModActivations` wired to `ModRegistry::activate_native_mod()`
- Verso native mod defined and registered at compile time
- Both Verse and Verso mods discoverable via `discover_native_mods()`
Implementation Notes:
- `discover_native_mods()` uses `inventory::collect!()` to gather `NativeModRegistration` entries (see the registration sketch below)
- `resolve_mod_load_order()` performs topological sort on mod dependencies
- `ModRegistry::load_all()` calls `activate_native_mod()` for each mod in order
- `NativeModActivations` dispatch table maps mod_id → activation function
- Mod activation is synchronous; long-running operations (I/O) would be spawned as sub-workers in future phases
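As referenced above, a sketch of the `inventory`-based compile-time registration. `NativeModRegistration` and `discover_native_mods()` are named in this hub, but the struct fields and the example mod entry are assumptions (requires the `inventory` crate as a dependency):

```rust
// Hypothetical registration entry; fields are assumptions.
pub struct NativeModRegistration {
    pub mod_id: &'static str,
    pub dependencies: &'static [&'static str],
    pub activate: fn() -> Result<(), String>,
}

inventory::collect!(NativeModRegistration);

fn activate_verso() -> Result<(), String> {
    Ok(()) // real activation wiring elided
}

// Each native mod registers itself at compile time, e.g.:
inventory::submit! {
    NativeModRegistration {
        mod_id: "verso",
        dependencies: &[],
        activate: activate_verso,
    }
}

/// Gathers every registered native mod, as discover_native_mods() is
/// described to do (load-order resolution elided).
pub fn discover_native_mods() -> Vec<&'static NativeModRegistration> {
    let mut mods = Vec::new();
    for registration in inventory::iter::<NativeModRegistration> {
        mods.push(registration);
    }
    mods
}
```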
Status: ✅ Complete (2026-03-05)
Goals:
- Periodically emit `PromoteNodeToActive { cause: SelectedPrewarm }` based on graph heuristics
- Subscribe to `watch::Receiver<LifecyclePolicy>` for memory budget awareness
- Use exponential backoff on channel congestion
Done gates:
- `prefetch_scheduler_worker` implemented and spawned (supervised worker)
- `LifecyclePolicy` watch channel wired to scheduler (GUI frame updates policy each tick)
Implementation Notes (2026-03-05):
- Scheduler emits `GraphIntent::PromoteNodeToActive { cause: SelectedPrewarm }` for the current prewarm target instead of placeholder `Noop` intents.
- CP3 policy now carries selected-node prewarm target + memory pressure level; warning/critical pressure slows or disables prefetch cadence.
- Congestion policy remains exponential backoff bounded by `PREFETCH_MAX_INTERVAL` (see the sketch below).
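A minimal sketch of that bounded backoff; `PREFETCH_MAX_INTERVAL` is named above, but its value, the base interval, and the shape of the congestion check are placeholders, and the pressure-level modulation is elided:

```rust
use std::time::Duration;

// Hypothetical constants; values are assumptions.
const PREFETCH_BASE_INTERVAL: Duration = Duration::from_millis(500);
const PREFETCH_MAX_INTERVAL: Duration = Duration::from_secs(30);

/// Doubles the prefetch interval on congestion, capped at the max;
/// resets to the base cadence once a send succeeds.
fn next_interval(current: Duration, congested: bool) -> Duration {
    if congested {
        (current * 2).min(PREFETCH_MAX_INTERVAL)
    } else {
        PREFETCH_BASE_INTERVAL
    }
}
```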
Status: In progress (worker scaffold wired; reducer sync semantics pending); see system/2026-03-05_cp4_p2p_sync_plan.md
Goals:
- P2P device-sync worker consumes peer deltas and queues remote-sync carrier intents with version vector stamps
- Network failures emit explicit offline signaling (target `MarkPeerOffline`; never silent)
- Causality ordering via version vectors ensures convergence across all peers
Done gates:
- Peer discovery and rendezvous design complete (covered in Verso Tier 1 plan §2–3)
- `p2p_sync_worker` implemented and supervised under `ControlPanel`
- Remote-sync reducer carrier semantics completed (runtime `ApplyRemoteLogEntries` alias and/or CP4 target `ApplyRemoteDelta` naming)
- Explicit peer-offline reducer path completed (`MarkPeerOffline` target behavior or equivalent status intent)
- Version vector persistence wired into workspace state (note: "Lamport clock" in prior wording was incorrect; version vectors are the correct mechanism per Verso sync plan §4.3; see CP4 plan §5.3, sketched below)
All workers follow the same cancellation contract:
```rust
impl ControlPanel {
    pub fn new() -> Self {
        let (intent_tx, intent_rx) = mpsc::channel(256);
        Self {
            intent_tx,
            intent_rx,
            cancel: CancellationToken::new(),
            workers: JoinSet::new(),
        }
    }

    pub fn spawn_memory_monitor(&mut self) {
        let cancel = self.cancel.clone();
        let tx = self.intent_tx.clone();
        self.workers.spawn(async move {
            tokio::select! {
                _ = cancel.cancelled() => {}
                _ = memory_monitor_worker(tx) => {}
            }
        });
    }

    pub async fn shutdown(&mut self) {
        self.cancel.cancel();
        while self.workers.join_next().await.is_some() {}
    }
}

async fn memory_monitor_worker(tx: mpsc::Sender<QueuedIntent>) {
    loop {
        tokio::time::sleep(Duration::from_secs(5)).await;
        let usage = estimate_memory_usage();
        if usage > MEMORY_PRESSURE_THRESHOLD {
            let _ = tx.send(QueuedIntent {
                intent: GraphIntent::DemoteNodeToCold {
                    key: pick_lru_candidate(),
                    cause: LifecycleCause::MemoryPressureWarning,
                },
                queued_at: Instant::now(),
                source: IntentSource::MemoryMonitor,
            }).await;
        }
    }
}
```

```rust
async fn app_main() {
    let mut control_panel = ControlPanel::new();
    control_panel.spawn_memory_monitor();
    loop {
        tokio::select! {
            _ = frame_timer.tick() => {
                run_frame(&mut app, &mut control_panel);
            }
            _ = signal::ctrl_c() => {
                control_panel.shutdown().await;
                break;
            }
        }
    }
}
```

The mod loader runs as a supervised worker under ControlPanel. Mods are loaded at startup and on demand, with lifecycle events emitted as intents:
```rust
async fn mod_loader_worker(tx: mpsc::Sender<QueuedIntent>) {
    // Phase CP2: scan mod directory and attempt loads
    for mod_path in scan_mod_directory() {
        match load_mod(&mod_path).await {
            Ok(mod_id) => {
                let _ = tx.send(QueuedIntent {
                    intent: GraphIntent::ModActivated { mod_id },
                    queued_at: Instant::now(),
                    source: IntentSource::ModLoader,
                }).await;
            }
            Err(reason) => {
                let _ = tx.send(QueuedIntent {
                    intent: GraphIntent::ModLoadFailed { mod_id: mod_path.to_string(), reason },
                    queued_at: Instant::now(),
                    source: IntentSource::ModLoader,
                }).await;
            }
        }
    }
}
```

Channel capacity is the primary defense against flooding from misbehaving workers:
| Capacity | Scenario |
|---|---|
| 64 | Single worker, local-only (testing) |
| 256 | Default: memory monitor + mod loader + prefetch |
| 512+ | Multi-peer P2P with high-latency WAN |
Workers respect backpressure via `.await` on `tx.send()`:

```rust
// Worker blocks if channel is full (main loop too slow to drain)
tx.send(intent).await.ok();

// For non-critical intents, prefer try_send + log drop:
if tx.try_send(intent).is_err() {
    log::debug!("control_panel: intent queue full, dropping non-critical intent");
}
```

Async producer intents carry causality metadata for deterministic application:
```rust
impl QueuedIntent {
    pub fn causality_order(&self) -> (u64, IntentSource) {
        match &self.intent {
            // CP4 runtime scaffold: remote batches arrive through a carrier intent.
            // Full CP4 convergence will derive ordering from version-vector metadata.
            GraphIntent::ApplyRemoteLogEntries { .. } => (0, self.source),
            // Local intents have implicit clock 0 (applied first)
            _ => (0, self.source),
        }
    }
}
```

Local UI intents (clock 0) always apply before async producer intents, preserving responsiveness. CP4 target convergence uses version-vector causality; the runtime scaffold currently preserves worker batch order.
Background workers that need app policy (memory limits, retention) subscribe via watch:
```rust
pub struct LifecyclePolicy {
    pub active_webview_limit: usize,
    pub warm_cache_limit: usize,
    pub memory_pressure_threshold: f32,
}

// Wired in ControlPanel::new()
let (policy_tx, policy_rx) = watch::channel(LifecyclePolicy::default());

// Workers receive policy_rx clone at spawn time
async fn prefetch_scheduler(mut policy_rx: watch::Receiver<LifecyclePolicy>, tx: ...) {
    loop {
        tokio::select! {
            _ = policy_rx.changed() => { /* adjust on policy change */ }
            _ = timer.tick() => {
                let policy = policy_rx.borrow();
                // use policy.warm_cache_limit for prefetch decisions
            }
        }
    }
}
```

```rust
// CP1 (done gates)
#[test]
fn control_panel_new_creates_empty_joinset() { ... }
#[test]
fn control_panel_shutdown_stops_all_workers() { ... }
#[test]
fn memory_monitor_emits_demotion_intent_under_pressure() { ... }

// CP2
#[test]
fn mod_loader_emits_activated_intent_on_success() { ... }
#[test]
fn mod_loader_emits_failed_intent_on_load_error() { ... }

// CP3/CP4
#[test]
fn intent_ordering_deterministic_under_concurrent_producers() { ... }
#[test]
fn p2p_network_failure_emits_offline_intent_not_silent_drop() { ... }
#[test]
fn graceful_shutdown_drains_joinset_before_exit() { ... }
```

| Phase | Status | What |
|---|---|---|
| CP1 | ✅ Done | ControlPanel struct, channel, memory monitor stub, shutdown |
| CP2 | ✅ Done | Mod loader worker + mod lifecycle intents |
| CP3 | ✅ Done | Prefetch scheduler + LifecyclePolicy watch + selected-prewarm promotion intents |
| CP4 | In progress | P2P device-sync worker scaffold landed; reducer sync semantics and persistence follow-up |
- archived embedder decomposition plan: Stage 5 overview
- 2026-02-21_lifecycle_intent_model.md: Intent schema and lifecycle state machine
- 2026-02-22_registry_layer_plan.md: The Register architecture and provider wiring
- PLANNING_REGISTER.md: sequencing and backlog for Register/runtime follow-ups
- Crates: `tokio`, `tokio-util` (CancellationToken), `tokio::task::JoinSet`
- protocol_registry_spec.md
- index_registry_spec.md
- viewer_registry_spec.md
- layout_registry_spec.md
- theme_registry_spec.md
- physics_profile_registry_spec.md
- action_registry_spec.md
- identity_registry_spec.md
- mod_registry_spec.md
- agent_registry_spec.md
- diagnostics_registry_spec.md
- knowledge_registry_spec.md