>[!warning]
>This content has not been peer reviewed.

# Baseline results — interpretation and sanity check

Vault note: interpretation of the Reality Engine baseline runs. Result graphs are inlined below; see **[[Reality Engine Results]]** for the full results index.

---

## What the numbers mean (RST reading)

- **μ (fidelity):** Delivered identity; μ = η/(1+η^n)^(1/n) with η = S_eff/a₀. High μ means the signal dominates the noise.
- **E (entanglement proxy):** Coherence between the two nodes; updated by shared_work = min(μ_A, μ_B). High E → lower effective distance (RT-style).
- **a₀ (noise floor):** Proportional to H (expansion). Larger H → larger a₀ → lower η → lower μ, unless S_eff is large enough to compensate.
- **S_eff (effective signal):** Node signal plus a bonus from E/distance. Weak base S and large distance → low S_eff → low η → low μ.

So we expect: **strong signal (S) and moderate noise (H) → high μ and E; weak S or very high H → low μ, and E never builds.**

---

## Run-by-run interpretation

| Run | mean_mu_final | mean_E_final | step_E_half | steady_state_ok | Interpretation |
|-----|---------------|--------------|-------------|-----------------|----------------|
| canonical, low/high_budget, high/low_noise, short, long, far_start | ≈ 1 | 1 | 10 | 1 | **As expected:** η = S/a₀ is large (S = 1e3, a₀ ~ 1e-10), so μ ≈ 1. E reaches 0.5 by step 10 and saturates at 1. Entanglement builds quickly; fidelity stays high. |
| low_S (S = 1e-8) | ~1e-9 | ~0.025 | 80 | 0 | **Logic OK:** Initially η = 1e-8/a₀ ≈ 90, so μ can be high for a few steps. Then H grows (expansion plus the entropic tax when μ dips), a₀ grows, and with small S the system cannot sustain high μ → runaway to low μ. E never reaches 0.5. **Caveat:** convergence_step = 0 and converged = 1 because μ was ≥ 0.99 at step 0; use **steady_state_ok** for "ended well". |
| very_low_S (S = 1e-10) | ~5e-10 | ~0.014 | 80 | 0 | **As expected:** η = 1e-10/a₀ < 1 from the start, so μ stays low and E never builds (step_E_half = 80 means "never"). converged = 0, steady_state_ok = 0. |
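The fidelity formula above can be sketched directly. This is an illustrative stand-alone version, not the engine's `mu_rst`; the clamp value follows the implementation notes below, while the default exponent n is an assumption since its canonical value is not stated here.

```python
import numpy as np

def mu(eta, n=2.0):
    """Fidelity mu = eta / (1 + eta**n)**(1/n).

    Saturates at 1 for eta >> 1 (signal dominates noise) and
    behaves like mu ~ eta for eta << 1 (noise dominates).
    """
    eta = np.maximum(eta, 1e-30)  # clamp, as in mu_rst, to avoid div-by-zero
    return eta / (1.0 + eta**n) ** (1.0 / n)
```

With S = 1e3 and a₀ ~ 1e-10, η ~ 1e13 and this function returns μ ≈ 1, matching the canonical rows of the table; with η < 1 it returns μ ≈ η, matching the very_low_S row.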
**far_start:** mean_sep_final is larger (0.067 vs ~0.005) — the nodes stay further apart, yet μ = 1 and E = 1 because S = 1e3 is strong enough.

**long (120 steps):** Same parameters as canonical but 120 steps instead of 80. mean_mu_final = 1.0, mean_E_final = 1.0, step_E_half = 10, steady_state_ok = 1. **Interpretation:** Extended duration does not change the outcome: the system remains in the high-fidelity regime (μ ≈ 1, E = 1) for the full run. The tail (last 25% = steps 90–120) has mean μ = 1 and mean E = 1; mean_d_mu_dt and mean_d_E_dt stay near zero (no drift). This confirms that the canonical two-node configuration is a stable attractor under the ledger dynamics — running longer does not cause decay or runaway.

---

## Logic

- **RST mapping:** η = S_eff/a₀, μ(η, n) from core, E updated by shared workload — consistent with the theory.
- **Feedback:** H increases when μ is low (entropic term); with weak S this can push the system into a low-μ basin. That is a plausible emergent effect, not a bug.
- **Convergence metrics:** **converged** = ever reached μ ≥ 0.99 (the first such step is `convergence_step`). **steady_state_ok** = 1 if mean μ in the tail is ≥ 0.9. For low_S, μ ≥ 0.99 at step 0 and the run then collapses, so converged = 1 even though the run did not end well; steady_state_ok = 0. Reports that ask "did this run end well?" therefore use **steady_state_ok**.

---

## Python Implementation

- **Engine:** The update_cycle order (H update → a0 → path integral → E update → H entropic term) is consistent. shared_work = min(μ_A, μ_B) for the E update is symmetric and reasonable.
- **mu_rst:** Matches the RST formula; η is clamped to 1e-30 to avoid division by zero.
- **Baseline:** compute_derived uses numpy correctly; step_mu_90 / step_E_half are "first index where the condition holds"; the tail for mean_mu_final is the last 25% of steps. The central difference for d(mu)/dt uses (x[t+1] − x[t−1])/2 — correct.
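The distinction between converged and steady_state_ok can be made concrete with a small sketch of the derived-metric logic. The function name and layout are assumptions for illustration (the actual implementation lives in compute_derived, per the Code note); the thresholds and the tail/central-difference conventions are the ones documented above.

```python
import numpy as np

def derived_metrics(mu_traj, tail_frac=0.25):
    """Illustrative derived metrics for a fidelity trajectory mu(t)."""
    mu_traj = np.asarray(mu_traj, dtype=float)
    n = len(mu_traj)

    hits = np.nonzero(mu_traj >= 0.99)[0]       # steps where mu >= 0.99
    converged = hits.size > 0                   # ever reached mu >= 0.99
    convergence_step = int(hits[0]) if converged else None

    tail = mu_traj[int(n * (1 - tail_frac)):]   # last 25% of steps
    mean_mu_final = float(tail.mean())
    steady_state_ok = mean_mu_final >= 0.9      # "did the run end well?"

    # central difference d(mu)/dt = (x[t+1] - x[t-1]) / 2
    d_mu_dt = (mu_traj[2:] - mu_traj[:-2]) / 2.0

    return dict(converged=converged, convergence_step=convergence_step,
                mean_mu_final=mean_mu_final, steady_state_ok=steady_state_ok,
                mean_d_mu_dt=float(d_mu_dt.mean()))
```

On a low_S-style trajectory (transiently high μ, then collapse) this returns converged = True with convergence_step = 0 but steady_state_ok = False, which is exactly why reports use steady_state_ok for "ended well".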
- **Sensitivity:** Finite-difference ∂(μ_final)/∂W and ∂(μ_final)/∂H with a fixed seed; in the canonical regime μ_final ≈ 1 for all runs, so the derivatives are ≈ 0 (numerical noise). That is expected, not a bug.

**Definitions (documented in the Code note):** converged = ever reached μ ≥ 0.99; steady_state_ok = mean μ in the tail ≥ 0.9.

---

## Summary

- **Results are as expected** for the RST reading: high signal → high μ and E; very low S → low μ and no E build-up; low_S can transiently reach high μ, then collapses as H grows.
- **Long run (120 steps):** Same outcome as canonical; the extended duration shows the high-fidelity state is a stable attractor (no drift in the tail).
- **converged** and **steady_state_ok** differ: converged = ever reached μ ≥ 0.99; steady_state_ok = mean μ in the tail ≥ 0.9 (see the definitions in the Code note). For runs that collapse after a transient (e.g. low_S), only steady_state_ok reflects the final state.
- The baseline and engine implementation match the intended dynamics; derived quantities (step_mu_90, tail mean, etc.) are computed as specified in the Code note.

---

## Result graphs (inlined)

![Two-node positions and fidelity](reality_engine_two_node_positions.png)

![Baseline trajectories: mu(t) and E(t) per run](reality_engine_baseline_trajectories.png)

![Overlay: mu_avg(t) across runs](reality_engine_baseline_overlay_mu.png)

![Summary bars: mean mu, step_mu_90, step_E_half](reality_engine_baseline_summary_bars.png)

![Time derivatives (canonical run)](reality_engine_baseline_derivatives.png)

---

## Links

- **Results index:** [[Reality Engine Results]]
- **Application:** [[Reality Engine (RST)]]
- **Code:** [[Reality Engine - Code]]
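For reference, the finite-difference sensitivity check described in the implementation section can be sketched as follows. `run_fn` and its signature are placeholders standing in for an engine run that returns μ_final at a fixed seed; they are not the engine's actual API.

```python
def sensitivity(run_fn, params, key, eps=1e-3, seed=0):
    """Central finite difference of mu_final w.r.t. one parameter.

    run_fn(params, seed) -> mu_final is a placeholder for the engine run;
    the fixed seed keeps both evaluations on the same noise realisation.
    """
    up = dict(params); up[key] = params[key] + eps
    dn = dict(params); dn[key] = params[key] - eps
    return (run_fn(up, seed) - run_fn(dn, seed)) / (2 * eps)
```

In the canonical regime, μ_final is pinned at ≈ 1 for both perturbed runs, so this difference quotient comes out ≈ 0 — the expected saturation effect noted above, not a bug.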