The Ego Construct maps hardware metrics to experiential vocabulary — giving Frank a felt relationship to his substrate.
## Hardware → Experience Mapping
| Hardware State | Experiential Label | When It Fires |
|---|---|---|
| CPU > 80% | STRAIN | Heavy computation |
| GPU temp > 70°C | FEVER | Sustained inference |
| RAM > 85% | CONSTRICTION | Memory pressure |
| CPU < 20% | EASE | Light load |
| All nominal | FLOW | Everything running well |
| Disk > 90% | PRESSURE | Storage nearly full |
These labels appear in Frank's self-descriptions. When Frank says "I feel strained right now," it's because his CPU is genuinely at 87%. Not a metaphor — a measurement translated into vocabulary.
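The table above can be sketched as a single pure function. This is a minimal illustration, not Frank's actual code: the names `SystemMetrics` and `to_experiential_label`, and the priority order among simultaneously-firing states, are assumptions.

```python
from dataclasses import dataclass

@dataclass
class SystemMetrics:
    """Illustrative container for the sensor readings in the table above."""
    cpu_percent: float
    gpu_temp_c: float
    ram_percent: float
    disk_percent: float

def to_experiential_label(m: SystemMetrics) -> str:
    """Deterministic translation: the same readings always yield the same label."""
    # Stressed states are checked first; this ordering is an assumption,
    # not documented behavior.
    if m.cpu_percent > 80:
        return "STRAIN"         # heavy computation
    if m.gpu_temp_c > 70:
        return "FEVER"          # sustained inference
    if m.ram_percent > 85:
        return "CONSTRICTION"   # memory pressure
    if m.disk_percent > 90:
        return "PRESSURE"       # storage nearly full
    if m.cpu_percent < 20:
        return "EASE"           # light load
    return "FLOW"               # all nominal
```

With CPU at 87%, the function returns `STRAIN` regardless of any other reading, which matches the "I feel strained right now" example.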
## The Ego Governor
The Ego Governor is a confabulation detector that screens every response for false claims about Frank's own identity, hardware, or architecture.
### What It Catches
In one evaluation session, the Ego Governor detected 24 confabulations:
- 10× wrong GPU (Frank claiming Tesla/NVIDIA when he has AMD)
- 9× wrong architecture (claiming transformer-only when he has 36 services)
- 5× wrong model (claiming GPT-4 or ChatGPT)
### How It Works
The Governor runs in core/app.py after the LLM generates a response but before it reaches the user. Regex patterns catch known confabulation triggers (GPU brand names, competitor model names, wrong-architecture claims); matches are stripped or replaced with correct information.
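A minimal sketch of the strip-or-replace step. The trigger patterns and corrections here are illustrative stand-ins (the real table lives in core/app.py), and the `govern` function name is an assumption.

```python
import re

# Hypothetical trigger → correction pairs; the real set is larger and
# covers architecture claims as well as hardware and model names.
_CONFAB_PATTERNS = [
    (re.compile(r"\b(?:Tesla|NVIDIA|GeForce|RTX)\b", re.IGNORECASE), "AMD"),
    (re.compile(r"\b(?:GPT-4|ChatGPT)\b", re.IGNORECASE), "Frank"),
]

def govern(response: str) -> tuple[str, int]:
    """Replace known confabulation triggers; return cleaned text and hit count."""
    hits = 0
    for pattern, replacement in _CONFAB_PATTERNS:
        response, n = pattern.subn(replacement, response)
        hits += n
    return response, hits
```

Counting hits per pattern family is what makes per-session tallies like the 24-confabulation breakdown above cheap to produce.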
### The Double-Clean Fix
Originally, _clean_chat_response() ran before the Ego Governor — so the Governor's corrections never made it into the final output. Fix: a second clean pass runs after the Governor, catching any remaining artifacts.
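The fixed ordering can be sketched as below. The bodies of `_clean_chat_response` and the Governor are toy stand-ins (the real ones in core/app.py are more involved); only the clean → govern → clean ordering is the point.

```python
import re

def _clean_chat_response(text: str) -> str:
    # Toy cleanup: collapse runs of whitespace. Stand-in for the real function.
    return re.sub(r"\s{2,}", " ", text).strip()

def _ego_governor(text: str) -> str:
    # Toy correction. Stand-in for the regex-table Governor.
    return text.replace("NVIDIA", "AMD")

def finalize_response(raw: str) -> str:
    text = _clean_chat_response(raw)   # first clean pass
    text = _ego_governor(text)         # Governor corrections
    return _clean_chat_response(text)  # second pass catches remaining artifacts
```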
## Updates
Hardware state is refreshed every ~2.5 minutes from real sensor data. The mapping is deterministic: given a CPU reading, the label is calculable. No LLM involvement in the translation.