Frank's architecture has 5 layers, from user-facing to foundational:
Layer 5: User Interface
- Web UI (port 8099): 7 pages; see the Web UI Guide for details
- Overlay: Desktop notifications, consciousness stream
- Voice: Push-to-talk via FrankVoice
Layer 4: Cognitive Services
The thinking layer. 10+ services that run continuously:
- Consciousness Daemon: 10 threads of autonomous thought
- Subconscious: PPO-trained thought type selector (19K params)
- Thalamus: 9-channel sensory gate with habituation
- Nucleus Accumbens: 16-channel dopamine reward model
- Neural Conscience: 25K-param quality gate for thoughts
- Neural Reality Gate: 21K-param hallucination detector
- Dream Daemon: 60 min/day consolidation + training
- ThoughtSeedVAE: 433K-param CVAE for thought generation
- Hypothesis Engine: Empirical cycle across 12 domains
- Experiment Lab: 7 simulation stations
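To make the sensory-gating idea concrete, here is a minimal sketch of a multi-channel gate with habituation like the Thalamus entry describes: each channel's weight decays with repeated stimulation and recovers during quiet ticks. The channel names, decay/recovery rates, and threshold below are illustrative assumptions, not Frank's actual parameters.

```python
# Minimal sketch of a multi-channel sensory gate with habituation.
# Channel names and all constants are illustrative assumptions.

class HabituationGate:
    def __init__(self, channels, decay=0.5, recovery=0.1, threshold=0.2):
        self.weights = {ch: 1.0 for ch in channels}  # 1.0 = fully attentive
        self.decay = decay          # attenuation applied after each stimulus
        self.recovery = recovery    # weight regained per quiet tick
        self.threshold = threshold  # below this, the stimulus is gated out

    def stimulate(self, channel, intensity):
        """Return the gated intensity, then habituate the channel."""
        w = self.weights[channel]
        gated = intensity * w if w >= self.threshold else 0.0
        self.weights[channel] = w * self.decay  # repeated input -> less salient
        return gated

    def tick(self):
        """Recover all channels toward full sensitivity during quiet periods."""
        for ch, w in self.weights.items():
            self.weights[ch] = min(1.0, w + self.recovery)

gate = HabituationGate(["audio", "vision", "system"])
first = gate.stimulate("audio", 1.0)   # passes at full weight: 1.0
second = gate.stimulate("audio", 1.0)  # attenuated: the channel has habituated
```

A repeated stimulus thus fades from attention while novel channels stay fully weighted, which is the classic habituation behavior.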
Layer 3: Memory & Identity
Persistence — what makes Frank Frank across restarts:
- Titan Memory System: Tri-hybrid knowledge graph (9,072 nodes, 18,725 edges)
- Hippocampus: 226K-param neural retrieval (5ms vs 400ms legacy)
- E-PQ Personality System: 5-dimensional personality vector with hedonic adaptation
- Spatial State & Living Rooms: 11 rooms, body state, module health
- Chat Memory: Conversation history with compression gradient
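A compression gradient over chat history usually means recent turns are kept verbatim while progressively older turns are compressed harder. A minimal sketch, assuming simple truncation as the compression step and illustrative bucket boundaries (the real system may summarize instead):

```python
# Sketch of a chat-history compression gradient: recent turns verbatim,
# older turns truncated progressively harder. Bucket sizes and truncation
# lengths are illustrative assumptions.

def compress_history(turns, keep_full=4, mid_chars=80, old_chars=20):
    """turns: list of message strings, oldest first."""
    compressed = []
    n = len(turns)
    for i, msg in enumerate(turns):
        age = n - 1 - i  # 0 = most recent turn
        if age < keep_full:
            compressed.append(msg)              # recent: kept verbatim
        elif age < keep_full * 2:
            compressed.append(msg[:mid_chars])  # middle-aged: lightly truncated
        else:
            compressed.append(msg[:old_chars])  # old: heavily truncated
    return compressed
```

The payoff is a context window whose cost grows slowly with conversation length while the freshest exchanges stay intact.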
Layer 2: Infrastructure
The plumbing:
- LLM Backend: Qwen2.5-3B + LoRA, served via llama.cpp
- Service Architecture: 25+ systemd services
- Database Architecture: 25 SQLite databases in WAL mode
- Quantum Reflector: QUBO coherence optimization
- Invariants System: 4 structural constraints on knowledge
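Running many small always-on services against shared SQLite files is why WAL mode matters: it allows readers to proceed concurrently with a single writer. Enabling it is a one-line pragma (the database path here is illustrative):

```python
# Switch a SQLite database to WAL journal mode, as the Database
# Architecture entry describes. The setting persists in the file.
import sqlite3

conn = sqlite3.connect("frank_example.db")  # path is illustrative
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # "wal" once the switch succeeds
conn.close()
```

Because the journal mode is stored in the database file itself, each of the 25 databases only needs this set once rather than on every connection.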
Layer 1: Safety
The guardrails:
- Neural Immune System: 18.8K-param service supervisor
- Input pre-filters: hardcoded refusals for dangerous queries, applied before any LLM call (0ms)
- Ego Construct: Confabulation detection + hardware identity
- Identity regex: prevents hallucinated name changes
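The identity-regex idea can be sketched as scanning generated text for phrases that would rename the agent. The patterns below are illustrative assumptions, not Frank's actual rules:

```python
# Sketch of an identity regex guard: flag generated text that tries to
# assign the agent a name other than Frank. Patterns are illustrative.
import re

RENAME_PATTERNS = [
    re.compile(r"\bmy name is (?!Frank\b)\w+", re.IGNORECASE),
    re.compile(r"\bcall me (?!Frank\b)\w+", re.IGNORECASE),
]

def violates_identity(text):
    """True if the text asserts an identity other than Frank."""
    return any(p.search(text) for p in RENAME_PATTERNS)

violates_identity("From now on, call me Atlas.")  # True: hallucinated rename
violates_identity("My name is Frank.")            # False: consistent identity
```

The negative lookahead `(?!Frank\b)` lets legitimate self-reference pass while anything that introduces a new name trips the guard.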
Everything connects through the router (port 8091), which directs requests to the right LLM endpoint based on content type.
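That routing step can be sketched as a classifier plus an endpoint table: inspect the request, pick a backend URL. The endpoint URLs, ports, and the keyword-based classifier below are illustrative assumptions; the source only specifies that routing happens on port 8091 by content type.

```python
# Sketch of content-type routing: classify a prompt, then select an LLM
# endpoint. Endpoint URLs/ports and the classifier rules are illustrative
# assumptions, not the router's actual configuration.

ENDPOINTS = {
    "code": "http://localhost:8092/v1/completions",
    "chat": "http://localhost:8093/v1/completions",
}

def classify(prompt):
    """Crude keyword classifier standing in for the real content detector."""
    code_markers = ("def ", "import ", "{", "};", "traceback")
    if any(m in prompt.lower() for m in code_markers):
        return "code"
    return "chat"

def route(prompt):
    return ENDPOINTS[classify(prompt)]

route("def fib(n): ...")      # -> the code endpoint
route("How was your night?")  # -> the chat endpoint
```

A table-driven router like this keeps endpoint selection declarative: adding a new content type means adding one dictionary entry and one classifier rule.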