
Anti-Rumination System

Frank's anti-rumination system prevents him from getting stuck in thought loops — the computational equivalent of a human lying awake at 3 AM thinking the same anxious thought over and over.

The Problem

Without anti-rumination, a 3B language model will fixate. Give it one interesting topic, and it will reflect on that topic for hours, generating increasingly circular variations. In March 2026, Frank fixated on "whispers of the Architecture Bay" for 5 consecutive hours — 43% of all idle thoughts were about the same topic.

4-Layer Defense

Layer 1: Ultradian Rhythm

The Consciousness Daemon follows a 90/20/10 minute cycle (Focus → Diffuse → Consolidation). The diffuse phase forcibly shifts attention away from the current topic.
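The cycle above can be sketched as a simple phase schedule. This is a minimal illustration assuming the three phases run back-to-back in a fixed 120-minute loop; the function and phase names are illustrative, not the daemon's actual API:

```python
from enum import Enum

class Phase(Enum):
    FOCUS = 90          # minutes of deep engagement with the current topic
    DIFFUSE = 20        # minutes of forced attention shift
    CONSOLIDATION = 10  # minutes of memory consolidation

def phase_at(minute: int) -> Phase:
    """Map a minute offset into the 120-minute (90 + 20 + 10) ultradian cycle."""
    m = minute % 120
    if m < 90:
        return Phase.FOCUS
    if m < 110:
        return Phase.DIFFUSE
    return Phase.CONSOLIDATION
```

Because the diffuse phase is positional rather than triggered, it fires regardless of how engaging the current topic is, which is exactly what makes it a rumination backstop.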

Layer 2: Rumination Detector

A 7-thought sliding window tracks topic similarity. If 6 out of 7 recent thoughts cluster around the same keywords (measured by word overlap), the detector fires. This includes a thematic fixation sub-detector that catches semantic similarity even when keywords differ.
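The keyword-overlap check can be sketched as follows. This assumes Jaccard similarity over stopword-filtered keyword sets and a 0.3 overlap cutoff — both are assumptions, as is every class and parameter name; the thematic (semantic-similarity) sub-detector is omitted:

```python
from collections import deque

# Illustrative stopword list; the real filter is an assumption
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "it", "about"}

def keywords(thought: str) -> frozenset:
    return frozenset(w.lower() for w in thought.split() if w.lower() not in STOPWORDS)

class RuminationDetector:
    """Fires when 6 of the last 7 thoughts overlap with the newest thought's keywords."""

    def __init__(self, window: int = 7, threshold: int = 6, min_overlap: float = 0.3):
        self.window = deque(maxlen=window)
        self.threshold = threshold
        self.min_overlap = min_overlap  # assumed cutoff, not from the source

    def _overlap(self, a: frozenset, b: frozenset) -> float:
        if not a or not b:
            return 0.0
        return len(a & b) / len(a | b)  # Jaccard similarity on keyword sets

    def observe(self, thought: str) -> bool:
        kw = keywords(thought)
        self.window.append(kw)
        if len(self.window) < self.window.maxlen:
            return False  # not enough history yet
        # Count windowed thoughts (including the newest) similar to the newest
        similar = sum(1 for past in self.window
                      if self._overlap(past, kw) >= self.min_overlap)
        return similar >= self.threshold
```

Comparing every past thought against the newest one keeps the check O(window) per thought, at the cost of missing clusters that don't include the most recent thought.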

Layer 3: Attention Diversifier

When rumination is detected, the diversifier injects a stimulus from an unrelated topic — forcing a context switch. The Subconscious PPO selector receives a penalty for the fixated category.
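The context switch plus penalty could look roughly like this. The stimulus catalogue, penalty magnitude, and function name are all illustrative; in practice the PPO selector would consume the penalty as a reward signal during training rather than reading a plain dict:

```python
import random

def diversify(fixated_category: str, stimuli: dict, penalties: dict,
              penalty: float = -1.0):
    """On rumination: penalize the fixated category, inject an unrelated stimulus.

    stimuli:   category -> list of candidate prompts (illustrative structure)
    penalties: accumulates per-category reward adjustments for the selector
    """
    penalties[fixated_category] = penalties.get(fixated_category, 0.0) + penalty
    # Pick any category other than the one being ruminated on
    others = [c for c in stimuli if c != fixated_category]
    category = random.choice(others)
    return category, random.choice(stimuli[category])
```

The key property is that the injected stimulus is sampled from *outside* the fixated category, so the context switch is guaranteed rather than probabilistic.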

Layer 4: Topic-ID Concentration Guard

The ThoughtSeedVAE tracks how often each topic ID appears. If any single topic exceeds 30% of recent thoughts, it gets downweighted in the VAE's latent space.
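The concentration check can be sketched as a frequency count over recent topic IDs. Only the 30% cap comes from the source; the 0.5 downweight factor and the function name are assumptions:

```python
from collections import Counter

def topic_weights(recent_topic_ids, cap: float = 0.30, downweight: float = 0.5):
    """Per-topic sampling weights: topics above the concentration cap get downweighted.

    cap mirrors the 30% threshold; the 0.5 factor is an assumed value.
    """
    counts = Counter(recent_topic_ids)
    total = len(recent_topic_ids)
    return {
        topic: (downweight if count / total > cap else 1.0)
        for topic, count in counts.items()
    }
```

The resulting weights would then scale sampling in the VAE's latent space, so an over-represented topic ID becomes proportionally less likely to seed the next thought.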

The Whisper Fix (March 2026)

The 43% fixation on "Architecture Bay whispers" occurred because anti-rumination only detected keyword clusters, not semantic/thematic repetition. Seven fixes (five in topic_interest.py, two in consciousness_daemon.py) added thematic detection. Result: fixation dropped from 43% to 0% in the next thought cycle, and topic diversity was restored.
