The Sat-RNN as a 128-Bit Boolean Automaton

Experiment: q1_boolean, q1_margins, q1_bool_attractor — 2026-02-11
"It is a 128-bit Boolean automaton with a vestigial analog channel."

What This Experiment Shows

The sat-rnn is a small recurrent neural network (128 hidden neurons, trained on 1024 bytes of English Wikipedia) that achieves 0.079 bits per character. It uses 32-bit floating-point (f32) numbers to represent each neuron's activation, giving it a state space of 2^4096 possible states per time step.

The central question: does the network actually use all that precision?

The answer is no. The network's computation is almost entirely determined by the sign bits of its 128 neurons — a 128-bit binary string. The floating-point mantissa (the fractional part that gives f32 its fine precision) contributes essentially nothing to inference. In fact, removing the mantissa makes the model better.

"The tanh activation and f32 mantissa exist for training (gradient flow through the saturation gate); at inference, the Boolean function encoded in the weight signs and magnitudes is the entire computation."
— boolean-automaton.tex

The Numbers at a Glance

- 98.9% of neuron-steps have |z| > 1.0 (Boolean exact)
- 60.5 mean pre-activation margin
- 99.7% of compression carried by sign bits
- 0.11% "fragile transitions" where the mantissa could matter
- 1023 unique sign vectors in 1023 positions (every state unique)
- 4.7×10^-5 max mantissa perturbation to pre-activation z

The "margin" measures how far each neuron's pre-activation is from the tanh threshold. A margin of 60.5 means the average neuron is so deeply saturated that its sign is determined with a safety factor of roughly 10^6 over the maximum possible mantissa perturbation. This is not approximate: the Boolean function is exact for 98.9% of all neuron-steps.

Margin Analysis: f32 vs Sign-Only

At each position in the input, we compare the full f32 model's prediction against a sign-only model (where every neuron value is replaced by ±1). If the sign carries all the information, these should produce similar predictions.

BPC: Full f32 vs Sign-Only (per position)

Most positions show near-zero BPC for both models. Occasional spikes (where the model is uncertain) may differ between f32 and sign-only, but sign-only is often better. The spikes correspond to hard-to-predict bytes.

Mean Margin and Fragile Neurons per Position

The mean margin hovers around 1.2-1.5 (measured as mean across neurons at each position). Even positions with low margins have only a handful of "tiny" neurons (|z| < 0.1) out of 128.

The Mantissa Is Noise

To test whether the mantissa matters, we ran the model in five configurations. The results are striking: every variant that removes mantissa information outperforms the full f32 model.

BPC by Precision Level

Configuration | BPC | vs Full f32 | Description
Zero-mantissa dynamics | 5.582 | -0.139 | Run dynamics with sign+exponent only, read out with sign+exponent
Zero-mantissa readout | 5.637 | -0.084 | Run full dynamics, but read out with sign+exponent only
Sign-only dynamics | 5.690 | -0.031 | Run dynamics with ±1 neurons, read out with ±1
Full f32 | 5.721 | (baseline) | Standard 32-bit floating point (the trained model)
Sign-only readout | 5.728 | +0.007 | Run full dynamics, but replace h with sgn(h) at readout
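The sign+exponent configurations come down to a single bit mask on the IEEE-754 binary32 layout; a sketch:

```c
#include <stdint.h>
#include <string.h>

/* Keep only the sign and exponent of an IEEE-754 single by masking
   off the 23 mantissa bits. The result rounds |x| down to a power of
   two, which is the "zero-mantissa" configuration in the table above. */
float zero_mantissa(float x)
{
    uint32_t u;
    memcpy(&u, &x, sizeof u);   /* bit-exact view; avoids aliasing UB */
    u &= 0xFF800000u;           /* sign (bit 31) + exponent (bits 30-23) */
    memcpy(&x, &u, sizeof u);
    return x;
}
```

For example, 1.5 becomes 1.0 and -3.7 becomes -2.0: the magnitude snaps to the nearest power of two below.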
"Zeroing the mantissa improves bpc. The mantissa is not a graded resource — it is interference."
— h32.tex
Key insight: The full f32 model performs worst among the sign-aware variants. The mantissa adds noise, not signal. This makes sense: the model was trained in f32 and found a local minimum tuned to f32 dynamics, but the underlying function it learned is Boolean. Removing the mantissa removes the noise.

Bit Leverage: 300:52:1

Not all bits in a 32-bit float are created equal. We measured how much each bit contributes to the model's predictions by computing the KL divergence when that bit is randomized.

KL Divergence per Bit (log scale)

- 0.046 sign-bit KL (1 bit)
- 0.008 exponent KL (per bit, 8 bits)
- 0.00044 mantissa KL (per bit, top 8 bits)

Per bit, the leverage ratio is roughly 300:52:1 (sign : exponent : mantissa, averaged over all 23 mantissa bits): the sign bit carries about 300 times more KL than an average mantissa bit, and an exponent bit about 52 times more. Mantissa bits 0-4 contribute less than 10^-6 KL each, effectively zero.
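Randomizing a bit amounts to XOR-ing the float's bit pattern; a sketch of the per-bit perturbation (the KL measurement over the model's output distribution is not shown):

```c
#include <stdint.h>
#include <string.h>

/* Flip one bit of an f32: k = 31 is the sign bit, 30..23 the exponent,
   22..0 the mantissa. Flipping a bit and re-running the model, then
   comparing output distributions, yields the per-bit KL figures above. */
float flip_bit(float x, int k)
{
    uint32_t u;
    memcpy(&u, &x, sizeof u);
    u ^= (uint32_t)1 << k;
    memcpy(&x, &u, sizeof u);
    return x;
}
```

Flipping the sign bit of 1.0 gives -1.0; flipping the lowest exponent bit halves it to 0.5; flipping the lowest mantissa bit nudges it by 2^-23.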

"The mantissa was the ladder. The mantissa enables gradient flow during training... Without the mantissa, there is no gradient. But at inference time, the mantissa is pure overhead — the Boolean dynamics works better."
— h32.tex, narrative.tex

The Boolean Influence Graph

If the model is a Boolean automaton, we can ask: when one neuron flips its sign, which other neurons change? This gives us the influence graph — the wiring diagram of the Boolean function.

Top Influence Edges

Influence Statistics

0.031
Mean influence per edge
2.84
Mean output changes per input flip
50/50
Unique final states (no attractors)
Sparse influence despite dense weights: The mean influence per edge is just 0.031. The dense 128×128 weight matrix produces a sparse transition function because the large margins absorb most perturbations. Flipping one input neuron changes only ~2.84 outputs on average.
"The dense weight matrix produces a sparse transition function because the large margins absorb most perturbations."
— boolean-automaton.tex
Edge | Influence | Description
h112 → h73 | 0.423 | Strongest non-self edge
h16 → h102 | 0.420 |
h8 → h8 | 0.408 | Self-loop (h8 maintains its own state)
h52 → h27 | 0.370 |
h112 → h59 | 0.358 | h112 is a hub (high out-degree)
h50 → h76 | 0.358 |
h8 → h52 | 0.352 | h8 drives h52
h76 → h59 | 0.349 |
h61 → h26 | 0.343 |
h8 → h90 | 0.334 | h8 drives h90
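The influence measurement itself can be sketched as follows, with a toy 4-neuron sign automaton standing in for the trained 128-neuron W_h (the weights below are illustrative only):

```c
#define N 4  /* toy size; the real influence graph is 128 x 128 */

/* One step of a sign-only automaton over states in {-1,+1}^N:
   s'_i = sgn(sum_j W[i][j] * s_j + b[i]). */
void bool_step(float W[N][N], const float b[N],
               const int s[N], int out[N])
{
    for (int i = 0; i < N; i++) {
        float z = b[i];
        for (int j = 0; j < N; j++)
            z += W[i][j] * (float)s[j];
        out[i] = (z >= 0.0f) ? 1 : -1;
    }
}

/* Influence of neuron j at state s: how many output signs change
   when s_j is flipped before the step. */
int influence(float W[N][N], const float b[N], const int s[N], int j)
{
    int sflip[N], a[N], c[N], changed = 0;
    for (int i = 0; i < N; i++) sflip[i] = s[i];
    sflip[j] = -sflip[j];
    bool_step(W, b, s, a);
    bool_step(W, b, sflip, c);
    for (int i = 0; i < N; i++)
        changed += (a[i] != c[i]);
    return changed;
}
```

With large margins, most rows absorb a single flipped input, which is how a dense weight matrix yields the sparse transition function described above.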

No Cycles, No Attractors

A Boolean automaton might settle into fixed points or cycles. This one does not. We tested by running 50 random initial states for 100 steps with a fixed input character.

50 trials, 50 unique final states. No convergence. No cycle detected up to period 100. Mean convergence step: 100.0 (none converged). The automaton is a transient-only system: every trajectory is unique, visiting a fresh state at every step.
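A sketch of the cycle test: pack each sign vector into a bitmask and look for a repeated state. The update rule here is a toy "twisted ring" chosen so the example has a known period; the real test iterates the trained network under a fixed input byte (whose state would need two uint64_t words).

```c
#include <stdint.h>

#define N 4        /* toy; the real state is 128 bits */
#define STEPS 100

/* Pack a +/-1 state into a bitmask so states compare cheaply. */
uint32_t pack(const int s[N])
{
    uint32_t m = 0;
    for (int i = 0; i < N; i++)
        if (s[i] > 0) m |= (uint32_t)1 << i;
    return m;
}

/* Return the first step at which a previously visited state recurs,
   or -1 if none recurs within STEPS. Toy update: rotate the ring and
   invert one link, giving a known period of 2N. */
int first_recurrence(int s[N])
{
    uint32_t seen[STEPS + 1];
    int nseen = 0;
    seen[nseen++] = pack(s);
    for (int t = 1; t <= STEPS; t++) {
        int next[N];
        next[0] = -s[N - 1];        /* inverted link */
        for (int i = 1; i < N; i++)
            next[i] = s[i - 1];     /* rotation */
        for (int i = 0; i < N; i++) s[i] = next[i];
        uint32_t m = pack(s);
        for (int k = 0; k < nseen; k++)
            if (seen[k] == m) return t;
        seen[nseen++] = m;
    }
    return -1;
}
```

The toy ring recurs at step 2N = 8; the finding above is that the trained automaton, by contrast, never recurs within the tested horizon.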

This is consistent with the sign-vector uniqueness: 1023 positions in the data produce 1023 unique sign vectors out of 1023. The state entropy is maximal at 10.0 bits (log2(1023)).

"Neurons deep in saturation are 'committed' and carry no new information; neurons near the threshold are 'deciding' and carry the marginal signal."
— boolean-automaton.tex

Backward Attribution Chains

We can trace backward through the Boolean dynamics to see how information flows. At position t=42 (input character '/'), the top neurons and their source chains are:

Attribution Chain at t=42 (predicting 'x' with P=1.0000)

Neuron | Δbpc | Sign | z (pre-act) | Top W_h Source | Chain (depth 3)
h56 | +0.0056 | -1 | -5.06 | h50 (+1.08) | h56 ← h50 ← h17
h8 | +0.0020 | +1 | +0.64 | h8 (-1.25) | h8 ← h8 ← h8 (self-loop)
h68 | +0.0011 | +1 | +2.58 | h90 (-0.76) | h68 ← h90 ← h8
h52 | +0.0008 | -1 | -1.06 | h8 (+1.28) | h52 ← h8 ← h8
h99 | +0.0005 | +1 | +2.38 | h68 (-0.99) | h99 ← h68 ← h90

h8 appears repeatedly — it is a hub neuron with a strong self-connection (W_h[8,8] = -1.25), functioning as an oscillator. Multiple chains route through h8.
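In isolation, a negative self-weight under sign dynamics is exactly a period-2 oscillator, which is presumably the mechanism here (a sketch; the real h8 also receives drive from other neurons, so its phase can be reset):

```c
/* One step of a lone neuron with self-weight w under sign dynamics:
   s' = sgn(w * s). With w < 0 (e.g. W_h[8,8] = -1.25), the sign
   flips every step: a period-2 oscillator. */
int osc_step(int s, float w)
{
    return (w * (float)s >= 0.0f) ? 1 : -1;
}
```

Two applications return the neuron to its starting sign, so in the absence of other input h8 toggles like a 1-bit clock.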

Why This Matters

This result has implications for how we understand neural networks in general:

Training is for exploration; inference is Boolean. The continuous f32 representation and gradient-based training are a way to search for a good Boolean function. Once found, the continuous precision is unnecessary overhead. This is analogous to using real-valued relaxation to solve integer programs.
The "black box" has only 128 moving parts. At each step, the model computes a Boolean function from 128 sign bits (plus the input byte). The total effective state per step is not 2^4096 but 2^128 — still vast, but structurally simple. Each neuron is a 1-bit decision.
Interpretability is tractable. Since the computation is Boolean, we can trace information flow through explicit sign-flip chains rather than wrestling with continuous gradients (which, as the Jacobian analysis shows, grow chaotically and become useless after a few steps).
"The mantissa is the price paid for differentiable training of a Boolean function."
— h32.tex

Source & Related

Papers: boolean-automaton.pdf (6 pages) • h32.pdf (H = 2^32) • q1-exact-results.pdf (f32 vs exact)

Programs: q1_boolean.c • q1_margins.c • q1_bool_attractor.c • q1_bit_sample.c

Related experiments: Neuron Knockout • Saturation Dynamics • Offset Analysis • Per-Prediction Justifications