2026-02-22: Working Memory & Memory Traces
UM-native working memory: the model is an agent. O is both prediction and input. Shift chain, conjunction patterns, threshold creation. Plus: the compression pipeline, memory traces as boolean algebra, and the unigram memory trace under E→N→Q.
Papers
Working Memory in the Universal Model
7 pages. The model is an agent: O is both prediction and input; no separate ESI is needed. Training completion: LPPs freeze after the full dataset, with a retroactive pass for late-born neurons. ES clearing = biological default (transient firing). The SN DYNAMICS block specifies boundary timing. Conjunction patterns = product events from LPP joint observations, naturally hierarchical. Imperative vs annotative design decision recorded.
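A minimal sketch of the agent loop, assuming a hypothetical count-table model (illustrative names, not the UM API). What it shows: the O slot first holds the prediction, then the observed byte overwrites it and shifts into byte_prev, so no separate input channel exists. On this toy level it is just a bigram counter; the paper's SN version adds the shift chain and threshold creation.

```python
from collections import Counter, defaultdict

def agent_loop(stream: bytes) -> int:
    """Toy agent: O is both prediction and input. Returns prediction hits."""
    table = defaultdict(Counter)  # byte_prev -> next-byte counts (illustrative)
    o = 0                         # the O slot: prediction, then input
    hits = 0
    for actual in stream:
        prev = o                  # last step's O shifts into byte_prev
        # O as prediction: argmax over observed continuations (0 if unseen).
        o = table[prev].most_common(1)[0][0] if table[prev] else 0
        hits += (o == actual)
        table[prev][actual] += 1  # online update on the joint event
        o = actual                # O as input: the observed byte overwrites O
    return hits
```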
The Unigram Memory Trace
Product encoding vs arithmetic coding. E→N→Q analysis of unigram compression: 256 bytes summarize 10^9 observations. T_env = 30. LSI gap (0.522 bpc) = cost of generalization, not overfitting. Product encoding is readable in the middle of the pipeline; arithmetic coding is not.
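A sketch of the unigram measurement under stated assumptions: count the 256 byte frequencies, quantize each count to one byte (so the whole model fits in 256 bytes), and score the stream in bits per character. The quantization here is illustrative; the paper's Q step and the 0.522 bpc LSI figure are not reproduced by this toy.

```python
import math
from collections import Counter

def unigram_bpc(data: bytes) -> float:
    """Cross-entropy of data under its own unigram counts, in bits/char."""
    counts = Counter(data)
    n = len(data)
    return -sum(c * math.log2(c / n) for c in counts.values()) / n

def summarize_256(data: bytes) -> bytes:
    """Toy 256-byte summary: one quantized count byte per symbol value.
    Illustrative quantization; not the paper's Q step."""
    counts = Counter(data)
    top = max(counts.values())
    return bytes(round(255 * counts.get(v, 0) / top) for v in range(256))
```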
The Memory Trace
Definition paper. A memory trace is a sequence of joint events encoded via E→N as a polynomial in the primes. Interior events as projections. Boolean algebra of the forward pass. ES clearing triggers write events. Decomposition into model (quotient) + residual (ordering). The factor tower from unigram to KN-6.
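A toy reading of "a polynomial in the primes", assuming it means a Gödel-style product where the t-th joint event becomes the exponent of the t-th prime (the paper's exact convention may differ; names here are illustrative):

```python
def primes(n: int) -> list[int]:
    """First n primes by trial division (fine at toy sizes)."""
    ps: list[int] = []
    k = 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

def encode_trace(events: list[int]) -> int:
    """Encode an event sequence as prod p_t ** (e_t + 1).
    The +1 keeps zero-coded events visible in the factorization."""
    n = 1
    for p, e in zip(primes(len(events)), events):
        n *= p ** (e + 1)
    return n

def decode_trace(n: int, length: int) -> list[int]:
    """Recover the event sequence by dividing out each prime."""
    out = []
    for p in primes(length):
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        out.append(e - 1)
    return out
```

For example, encode_trace([2, 0, 1]) == 2**3 * 3**1 * 5**2 == 600, and decode_trace(600, 3) recovers [2, 0, 1]. On this reading, "interior events as projections" would correspond to reading off a subset of the exponents.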
Models
wm-blank.sn
Blank agent model: 2 ESs, empty LPP. Learns from scratch — every bigram is a new neuron. Best for watching online learning in the viewer.
wm-bigram-4k.sn
Working-memory bigram, 4K bytes. Agent model: byte_output + byte_prev, no byte_input. 535 learned bigrams. Load with: umr run wm-bigram-4k.sn enwik9 4096
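A hedged way to sanity-check the neuron count: count distinct bigrams in the same 4096-byte prefix. Whether this matches the SN's 535 depends on birth rules (threshold creation) not reproduced here; the path assumes a local enwik9 copy.

```python
def distinct_bigrams(path: str, n: int = 4096) -> int:
    """Count distinct (prev, next) byte pairs in the first n bytes."""
    with open(path, "rb") as f:
        data = f.read(n)
    return len({(a, b) for a, b in zip(data, data[1:])})

# Usage (assumes enwik9 in the working directory):
# print(distinct_bigrams("enwik9"))
```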
Viewer
Open in UM Viewer
Select "wm-bigram-4k.sn (4K, agent)" from the Working Memory group in the model dropdown.
Trace Viewers
8 specialized visualizations of the UM forward pass. Each loads SN + raw data and computes traces client-side.
a. Error Anatomy
Classifies errors as sharp-wrong, missing-coverage, or unpredictable. Pie chart, histogram, scrollable error list. A classification sketch follows after this list.
b. Pass Comparison
Loads two SN models side by side, compares surprise per position. Difference heatmap.
c. Coverage Depth
LPP coverage heatmap — how many neurons fire per byte, when neurons are born.
d. Neuron Lifecycle
Timeline of LPP creation and growth. Best with wm-blank.sn to watch online learning.
e. Sentence Algebra
Boolean sentence tree — the forward pass as ∨/∧ over support values.
f. Skyhook View
Context features (in_tag, word_len, xml_depth) computed from raw bytes, overlaid on surprise.
g. Shift Chain Animation
6-phase animated circuit diagram of f. Comma/period keys step through phases.
h. Distribution View
Full 256-byte output distribution bar chart. Support values, entropy, s1-s2 gap, top-20 predictions. A sketch of these statistics follows after this list.
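For viewer a, a sketch of one plausible classification rule, with illustrative thresholds (the viewer's exact criteria live in its source, not here): no neuron fired means missing-coverage; a confident miss is sharp-wrong; a covered position with a flat distribution is unpredictable.

```python
def classify_error(support: list[float], actual: int, neurons_fired: int,
                   sharp_gap: float = 0.5) -> str:
    """Classify one position's prediction outcome. Thresholds illustrative."""
    ranked = sorted(range(256), key=lambda b: support[b], reverse=True)
    if ranked[0] == actual:
        return "correct"
    if neurons_fired == 0:
        return "missing-coverage"  # no LPP covered this context
    if support[ranked[0]] - support[ranked[1]] >= sharp_gap:
        return "sharp-wrong"       # confident, but wrong
    return "unpredictable"         # covered, but the distribution was flat
```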
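For viewer h, the summary statistics are standard; a sketch, assuming support is the normalized 256-way output distribution:

```python
import math

def entropy_bits(support: list[float]) -> float:
    """Shannon entropy of the 256-way distribution, in bits."""
    return -sum(p * math.log2(p) for p in support if p > 0)

def s1_s2_gap(support: list[float]) -> float:
    """Gap between the two largest support values: prediction sharpness."""
    s1, s2 = sorted(support, reverse=True)[:2]
    return s1 - s2

def top_k(support: list[float], k: int = 20) -> list[tuple[int, float]]:
    """Top-k byte values with their support, for the bar chart."""
    return sorted(enumerate(support), key=lambda bp: bp[1], reverse=True)[:k]
```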
See Also
LATD Explainer (2026-03-12)
Interactive three-regime (L→A→T) decomposition. The full realization of the LATD principle defined in the Memory Trace paper above.