>[!warning]
>This content has not been peer reviewed.
# Derivation of μ(η,n) from RRT Axioms
This note derives the Fidelity function $\mu(\eta,n) = \eta/(1+\eta^n)^{1/n}$ and the Resource Triangle $W^n = \Omega^n + N^n$ strictly from the axioms of [[Relational Resolution Theory (RRT)]] and the Landauer Principle (Landauer 1961). The functional form is a standard MOND interpolation family (Famaey & McGaugh 2012); RST derives it from axioms rather than adopting it phenomenologically. The exponent $n$ remains a substrate parameter (calibrated empirically; see [[Transition Sharpness]]).
---
## Step 1: Axioms and Identifications
From RRT:
1. **[[Format]]** — The substrate that maintains [[Information]] (resolution $\sigma$) against noise.
2. **[[Translation]]** — Every change costs energy; identity is dissipative.
3. **Relational Landauer Principle** — Minimum power to maintain resolution against noise $T$:
$\Phi_{\min} = k_B \ln 2 \sum_i \frac{d_i \cdot \sigma_i \cdot T_i}{\tau_i}$
Landauer (1961): cost per erased bit = $k_B T \ln 2$ — **linear in noise $T$**.
**RST identification:** In the physical substrate, the "noise" is the cosmic expansion scalar; the "signal work" is the gravitational workload $\Omega$; the "noise work" is the substrate's effort against the environmental floor $N$.
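The linearity of the Landauer floor in the noise $T$ can be sketched numerically. This is a minimal illustration, not a calibrated model; the channel tuple values are arbitrary placeholders:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def phi_min(channels):
    """Relational Landauer floor:
    Phi_min = k_B ln2 * sum_i d_i * sigma_i * T_i / tau_i."""
    return k_B * math.log(2) * sum(d * sigma * T / tau
                                   for d, sigma, T, tau in channels)

# One channel: (d_i, sigma_i [bits], T_i [K], tau_i [s]) -- illustrative values.
base = [(3, 1e9, 300.0, 1e-3)]
hot  = [(3, 1e9, 600.0, 1e-3)]

# Linear in noise T: doubling the noise doubles the minimum power.
assert abs(phi_min(hot) / phi_min(base) - 2.0) < 1e-12
```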
---
## Step 2: Resource Competition
The Format has a finite capacity. At each point, it allocates resources between two tasks:
- **Signal work** — maintaining the requested structure (output $\Omega$).
- **Noise work** — maintaining coherence against the environment (floor $N$).
Let $W$ denote the **total resource budget** — the combined expenditure. Every unit of $W$ is spent on either signal or noise. The budget is **exhausted**: there is no residual.
---
## Step 3: Boundary Conditions from Landauer
**High SNR ($\Omega \gg N$):** When the signal dominates, the substrate delivers fully. The Format "has enough resources" — no throttling. Thus:
$\mu \to 1, \quad W \to \Omega$
**Low SNR ($\Omega \ll N$):** Landauer states that the cost of maintaining a bit against noise $T$ is $k_B T \ln 2$ — **linear in $T$**. In the noise-dominated regime, the delivered output $\Omega$ (for a given source request $I$) must scale so that the fidelity $\mu = \Omega/W$ tends to $\Omega/N = \eta$.
*Reason:* The cost per unit of maintained resolution is linear in the noise. When the budget is dominated by noise ($W \approx N$), the fraction of budget that goes to signal is $\Omega/N$. Thus:
$\mu \to \eta, \quad W \to N \text{ as } \eta \to 0$
Any other scaling (e.g. $\mu \to \eta^2$ or $\mu \to \sqrt{\eta}$) would violate the Landauer linearity: the substrate's cost would not scale linearly with the noise.
---
## Step 4: Homogeneity and Symmetry
**Homogeneity:** Scaling the demands scales the budget:
$W(\lambda\Omega, \lambda N) = \lambda \, W(\Omega, N)$
**Symmetry:** The combination rule does not privilege signal over noise in its structure. The two demands compete on equal footing:
$W(\Omega, N) = W(N, \Omega)$
---
## Step 5: Translation-Step Allocation
From Axiom A3 ([[Translation]]) and Axiom A4 ([[Proper Time]]):
- Per refresh cycle $\tau$, the Format performs $R$ discrete Translation events.
- Each event is identical in structure: a single irreversible step costing $k_B T \ln 2$ per bit.
- **Two-channel assumption:** At each event the substrate either maintains signal structure or absorbs environmental noise, never both, with no third outcome (e.g. "idle" or "partial"). This is an explicit assumption: the axiom "each Translation costs $k_B T \ln 2$ per bit" does not by itself imply the partition. We assume the only two sinks for that cost are signal maintenance and noise absorption, so the Format's effort splits into two **mutually exclusive** channels.
- At the event level, allocation is **strictly additive** (counting):
$r_s + r_n = R$
Define the **effort fractions** $\varphi_s = r_s/R$ and $\varphi_n = r_n/R$:
$\varphi_s + \varphi_n = 1$
So: each of $R$ discrete steps goes to one task or the other; the two-channel assumption makes that split well-defined.
---
## Step 6: Scale-Free Response
The effort fractions $\varphi_s, \varphi_n$ are microscopic (they count Translation steps). The macroscopic outputs $\Omega$ (signal delivered) and $N$ (noise absorbed) are not directly proportional to these fractions. There is a **response function** $g$ mapping effort to output:
$\mu = g(\varphi_s), \quad \nu = g(\varphi_n)$
where $\mu = \Omega/W$ and $\nu = N/W$ are the macroscopic budget fractions. By the symmetry of Step 4, the same function $g$ governs both channels (the substrate treats signal and noise identically at the structural level).
**What constrains $g$?**
1. **Monotonic:** More effort $\to$ more output. $g$ is strictly increasing, with $g(0) = 0$ and $g(1) = 1$.
2. **Scale-free:** The Format has no privileged effort scale (Axiom A1: the substrate is independent and dynamic; its allocation rule is a property of the Format, not of any particular signal level). We therefore assume **self-similarity** of the response: rescaling effort by $\lambda$ rescales output by a fixed power of $\lambda$, so no characteristic effort scale enters the allocation rule.
Formally: $g(\lambda \varphi) = \lambda^\alpha \, g(\varphi)$ for some exponent $\alpha > 0$.
The unique continuous functions satisfying this are **power laws**:
$g(\varphi) = \varphi^\alpha$
with $\alpha > 0$ and the boundary condition $g(1) = 1$ automatically satisfied.
3. **Diminishing marginal returns ($\alpha \leq 1$, i.e. $n \geq 1$):** Each Translation event is identical (Axiom A3). The first events allocated to signal correct the largest errors; subsequent events address progressively smaller corrections. This is the generic structure of error correction: early corrections are maximally productive, later corrections overlap with earlier ones. Therefore $g$ is concave ($\alpha \leq 1$). The Landauer floor — you cannot extract more output per step than the Landauer limit — enforces this from below.
Define $n = 1/\alpha \geq 1$, so $\alpha = 1/n$:
$g(\varphi) = \varphi^{1/n}$
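The self-similarity property, the boundary conditions, and the concavity of the power-law response can be checked directly. A minimal sketch with an illustrative exponent $n = 1.25$:

```python
n = 1.25            # illustrative substrate exponent; alpha = 1/n
alpha = 1.0 / n

def g(phi):
    """Scale-free response g(phi) = phi**(1/n)."""
    return phi ** alpha

# Self-similarity: g(lam * phi) == lam**alpha * g(phi)
lam, phi = 0.3, 0.7
assert abs(g(lam * phi) - lam**alpha * g(phi)) < 1e-12

# Boundary conditions: g(0) = 0, g(1) = 1
assert g(0.0) == 0.0 and g(1.0) == 1.0

# Concavity (diminishing marginal returns): g lies above the diagonal for alpha < 1
assert g(0.5) > 0.5
```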
---
## Step 7: From Additive Effort to Power-Exhaustion
Combine Steps 5 and 6. Invert the response function:
$\mu = \varphi_s^{1/n} \implies \varphi_s = \mu^n$
$\nu = \varphi_n^{1/n} \implies \varphi_n = \nu^n$
Substitute into the additive effort identity $\varphi_s + \varphi_n = 1$:
$\boxed{\mu^n + \nu^n = 1}$
This is the **budget identity**. It is not a separate postulate. It follows from:
- Translation steps are additive (counting; Step 5)
- The response function is a power law (scale-freeness; Step 6)
Substituting $\mu = \Omega/W$ and $\nu = N/W$:
$\frac{\Omega^n}{W^n} + \frac{N^n}{W^n} = 1 \quad \Longrightarrow \quad W^n = \Omega^n + N^n$
Thus:
$W = \left(\Omega^n + N^n\right)^{1/n}$
This is the [[Resource Triangle]].
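The budget identity of Step 7 and the homogeneity and symmetry of Step 4 can all be verified numerically from the derived form of $W$. A minimal sketch with illustrative values:

```python
n = 1.25                      # illustrative substrate exponent
Omega, N = 2.0, 3.0           # illustrative signal and noise demands

def W(Omega, N, n):
    """Resource Triangle: total budget W = (Omega^n + N^n)^(1/n)."""
    return (Omega**n + N**n) ** (1.0 / n)

w = W(Omega, N, n)
mu, nu = Omega / w, N / w

# Budget identity (Step 7): the budget is exactly exhausted.
assert abs(mu**n + nu**n - 1.0) < 1e-12

# Homogeneity (Step 4): scaling both demands scales the budget.
lam = 7.0
assert abs(W(lam * Omega, lam * N, n) - lam * w) < 1e-9

# Symmetry (Step 4): signal and noise enter on equal footing.
assert W(N, Omega, n) == w
```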
---
## Step 8: Fidelity as Budget Fraction
**Definition:** Fidelity $\mu$ is the fraction of the budget spent on signal:
$\mu = \frac{\Omega}{W}$
Substituting $W = (\Omega^n + N^n)^{1/n}$:
$\mu = \frac{\Omega}{(\Omega^n + N^n)^{1/n}} = \frac{\Omega/N}{(1 + (\Omega/N)^n)^{1/n}}$
With $\eta = \Omega/N$:
$\boxed{\mu(\eta, n) = \frac{\eta}{(1 + \eta^n)^{1/n}}}$
---
## Step 9: Verification of Boundary Conditions
- **$\eta \to \infty$:** $(1+\eta^n)^{1/n} \to \eta$, so $\mu \to \eta/\eta = 1$. ✓ (Step 3)
- **$\eta \to 0$:** $(1+\eta^n)^{1/n} \to 1$, so $\mu \to \eta$. ✓ (Step 3, Landauer)
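Both limits, plus strict monotonicity in $\eta$, can be confirmed numerically. A minimal sketch; the exponent and tolerances are illustrative:

```python
def mu(eta, n):
    """Fidelity mu(eta, n) = eta / (1 + eta**n)**(1/n)."""
    return eta / (1.0 + eta**n) ** (1.0 / n)

n = 1.25  # illustrative exponent

# High SNR: mu -> 1 (full delivery)
assert abs(mu(1e6, n) - 1.0) < 1e-5

# Low SNR: mu -> eta (linear, as Landauer requires)
assert abs(mu(1e-6, n) / 1e-6 - 1.0) < 1e-5

# Monotonicity: fidelity strictly increases with SNR
etas = [10.0**k for k in range(-6, 7)]
vals = [mu(e, n) for e in etas]
assert all(a < b for a, b in zip(vals, vals[1:]))
```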
---
## Step 10: Resource Allocation Equation
The source $I$ (the request) is the projection of workload onto the budget:
$I = \Omega \cdot \mu = \Omega \cdot \frac{\Omega}{W} = \frac{\Omega^2}{W}$
Thus $I = \Omega \cdot \mu(\Omega/N)$ — the [[Resource Allocation Equation]]. In the low-SNR limit, $I = \Omega \cdot \eta = \Omega^2/N$, so $\Omega = \sqrt{I \cdot N}$ (geometric mean, MOND regime). In the high-SNR limit, $I = \Omega$ (Newton).
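Given a source $I$ and floor $N$, the workload $\Omega$ can be recovered by inverting $I = \Omega \cdot \mu(\Omega/N, n)$, whose left-hand side is strictly increasing in $\Omega$. A minimal sketch using geometric bisection (the bracket, iteration count, and test values are illustrative):

```python
import math

def mu(eta, n):
    """Fidelity mu(eta, n) = eta / (1 + eta**n)**(1/n)."""
    return eta / (1.0 + eta**n) ** (1.0 / n)

def omega_from_source(I, N, n, lo=1e-12, hi=1e12, iters=100):
    """Invert I = Omega * mu(Omega/N, n) for Omega by geometric bisection."""
    for _ in range(iters):
        mid = math.sqrt(lo * hi)
        if mid * mu(mid / N, n) < I:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

n = 1.25

# Low-SNR limit: Omega -> sqrt(I * N) (geometric mean, MOND regime)
I, N = 1e-8, 1.0
assert abs(omega_from_source(I, N, n) / math.sqrt(I * N) - 1.0) < 1e-3

# High-SNR limit: Omega -> I (Newtonian regime)
I, N = 1e8, 1.0
assert abs(omega_from_source(I, N, n) / I - 1.0) < 1e-3
```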
---
## Step 11: The Exponent $n$
The axioms constrain $n \geq 1$ (Step 6: diminishing marginal returns from the Landauer floor). The exact value is **not** fixed by the axioms. It is a substrate parameter — the error-correction exponent of the Format.
- $n=1$: Linear response. Every Translation step equally productive. Additive budget ($W = \Omega + N$).
- $n>1$: Concave response. Diminishing marginal returns. Early corrections most productive.
- $n \to \infty$: Winner-takes-all; step function.
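The three regimes above can be compared at the crossover $\eta = 1$, where $\mu(1, n) = 2^{-1/n}$: larger $n$ gives a sharper knee. A minimal sketch:

```python
def mu(eta, n):
    """Fidelity mu(eta, n) = eta / (1 + eta**n)**(1/n)."""
    return eta / (1.0 + eta**n) ** (1.0 / n)

# At the crossover eta = 1, mu(1, n) = 2**(-1/n).
assert abs(mu(1.0, 1.0) - 0.5) < 1e-12      # n = 1: additive budget, mu = 1/2
assert mu(1.0, 1.25) > mu(1.0, 1.0)         # n > 1: concave response, sharper knee
assert abs(mu(1.0, 1e6) - 1.0) < 1e-5       # n -> infinity: step function
```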
The value $n_0$ at $z=0$ is **derived** by the Pure Axiom Substrate ($n_0 \approx 1.24$; [[expanded theory applied/Derivation Chain Overview]]) and empirically confirmed by SPARC ($1.25 \pm 0.05$; [[SPARC Evaluation Verification]]). It is also derived (Baseline 1.7) as the backbone dimension $d_B \approx 1.22$ from A1 + A5 + connectivity identity + critical percolation ([[Backbone Dimension]]; [[RST Baseline 1.0]]). The evolution $n(\theta) = n_0 + \ln(\theta/\theta_0)$ models phase-hardening with cosmic epoch.
---
## Summary
| Step | Claim | Source |
|:---|:---|:---|
| 1 | Landauer: cost ∝ noise | Landauer (1961); RRT |
| 2 | Budget exhausted between signal and noise | Axiom A3 (Translation); Format finitude |
| 3 | $\mu \to 1$ (high SNR), $\mu \to \eta$ (low SNR) | Landauer linearity; exhaustion |
| 4 | Homogeneity, symmetry | Structural consistency |
| 5 | Translation steps are additive: $\varphi_s + \varphi_n = 1$ | Axioms A3, A4; **assumption:** two mutually exclusive channels (signal vs noise) |
| 6 | Response is power-law: $g(\varphi) = \varphi^{1/n}$ | Scale-freeness (Axiom A1); diminishing returns (Landauer floor) |
| 7 | $\mu^n + \nu^n = 1$ and $W^n = \Omega^n + N^n$ | Steps 5 + 6 (derived, not postulated) |
| 8 | $\mu = \Omega/W = \eta/(1+\eta^n)^{1/n}$ | Definition + Step 7 |
| 9 | Boundary check | Direct verification against Step 3 |
| 10 | $I = \Omega \cdot \mu$ | Projection (source from budget split) |
| 11 | $n \geq 1$ from axioms; $n_0$ derived by Pure Axiom Substrate (~1.24); SPARC confirms 1.25 | Landauer floor; [[expanded theory applied/Derivation Chain Overview]]; [[Backbone Dimension]]; [[RST Baseline 1.0]]; SPARC |
The **shape** of $\mu$ is derived. The **budget identity** $\mu^n + \nu^n = 1$ is derived. The **sharpness** $n$ is constrained to $n \geq 1$ by the axioms; $n_0$ is **derived** by the Pure Axiom Substrate (rst_axiom_pure.py; ~1.24) and empirically confirmed by SPARC (1.25). See [[expanded theory applied/Derivation Chain Overview]].
---
## References
- Landauer, R. (1961). *Irreversibility and heat generation in the computing process.* IBM J. Res. Dev. 5, 183. — minimum energy per bit ($k_B T \ln 2$); extended to relational maintenance in [[Relational Resolution Theory (RRT)]].
- Milgrom, M. (1983). *A modification of the Newtonian dynamics as a possible alternative to the hidden mass hypothesis.* Astrophys. J. 270, 365.
- Famaey, B. and McGaugh, S. S. (2012). *Modified Newtonian dynamics (MOND): Observational phenomenology and relativistic extensions.* Living Rev. Relativ. 15, 10; [arXiv:1112.3960](https://arxiv.org/abs/1112.3960). — review of MOND, interpolation functions $\mu(x)$, and $a_0 \sim cH$.
- Verlinde, E. (2017). *Emergent gravity and the dark universe.* SciPost Phys. 2, 016; [arXiv:1611.02269](https://arxiv.org/abs/1611.02269). — information-theoretic derivation of modified gravity.
- Bekenstein, J. (2004). *Relativistic gravitation theory for the modified Newtonian dynamics paradigm.* Phys. Rev. D 70, 083509; [arXiv:astro-ph/0403694](https://arxiv.org/abs/astro-ph/0403694). — TeVeS; disformal coupling for lensing.
- SPARC calibration: Lelli et al. (2016), AJ 152, 157; [[SPARC Evaluation Verification]]. Full refs: [[Relational Substrate Theory (RST)#References]].