“Intelligence is the efficient compression of reality.”
The Cognitive Entropy Minimization Law (CEML) postulates that intelligent agents (biological or artificial) optimize their internal models by maximizing Contextual Coherence while simultaneously minimizing Internal Entropy (Thermodynamic & Computational Cost).
It unifies Friston’s Free Energy Principle, Occam’s Razor, and Landauer’s Limit into a single computable metric used to detect hallucination and validate truth-claims.
The fitness $J(s)$ of any information structure $s$ within a context $\Omega$ is defined as:
\[J(s) = \frac{\mathcal{C}(s \mid \Omega)}{\mathcal{H}(s) + \epsilon}\]

| Variable | Component | Definition |
|---|---|---|
| $\mathcal{C}$ | Contextual Coherence | Semantic alignment (Cosine Similarity) |
| $\mathcal{H}$ | Entropic Cost | Description length / Complexity (Shannon Entropy) |
| $\epsilon$ | Epsilon | Regularization constant ($10^{-9}$) |
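The fitness score above can be sketched directly from the table's definitions. This is a minimal illustration, not the repository's implementation; the names `fitness`, `cosine_similarity`, and `shannon_entropy` are assumptions introduced here:

```python
import math

EPSILON = 1e-9  # regularization constant from the table above


def cosine_similarity(a, b):
    """Contextual Coherence C(s | Omega): alignment of signal and context vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def shannon_entropy(p):
    """Entropic Cost H(s): Shannon entropy of a probability distribution, in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)


def fitness(signal_vec, context_vec, signal_dist):
    """J(s) = C(s | Omega) / (H(s) + epsilon)."""
    c = cosine_similarity(signal_vec, context_vec)
    h = shannon_entropy(signal_dist)
    return c / (h + EPSILON)
```

A perfectly aligned signal with a uniform two-outcome distribution (entropy of 1 bit) scores `J ≈ 1.0`; an orthogonal signal scores `J = 0` regardless of its entropy.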
The system uses the Golden Ratio as the boundary for “Resonance”. A signal is considered fully integrated (True/Valid) only if its efficiency score exceeds $\phi$.
\[\text{Resonance State} \iff J(s) \ge \phi \approx 1.618\]

Based on the demo logic (`core/demo.py`):
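The resonance boundary reduces to a threshold test against the golden ratio. A minimal sketch follows; `is_resonant` is a hypothetical name, not the repo's API:

```python
# Golden ratio, the "Resonance" boundary for J(s).
PHI = (1 + 5 ** 0.5) / 2  # ≈ 1.618


def is_resonant(j_score):
    """A signal is fully integrated (True/Valid) only when J(s) >= phi."""
    return j_score >= PHI
```

For example, `is_resonant(2.0)` returns `True`, while `is_resonant(1.0)` returns `False`.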
Test the selection logic on text vectors or probability distributions.
```bash
git clone https://github.com/Lichen-Universe/CEML.git
cd CEML/core
python demo.py
python distributions_test.py
```