
Service Architecture

Frank runs as 25+ interconnected systemd services on a single Linux machine. No containers in production (Docker is used only during installation). No orchestration framework. Plain systemd with dependency ordering.

Service Map

Category        Services                                      Ports
Core            core (API), router, toolbox                   8088, 8091, 8096
LLM             micro-llm (Qwen 3B + LoRA)                    8105
Cognitive       consciousness, subconscious, dream-daemon
Sensory         thalamus, desktop-daemon
Memory          titan (via core), hippocampus (embedded)
Reward          nucleus-accumbens, hypothesis-engine
Safety          invariants, neural-immune, ego-governor
Quality         neural-conscience, neural-reality-gate
Identity        e-pq, spatial-state
Visualization   swarm (3D WebGL), web-ui                      8098, 8099
Creative        experiment-lab, thought-seed-vae
Communication   webd (web search)                             8100

Startup Order

Services start in dependency order via systemd After= directives. The LLM must be up before consciousness can think. The router must be up before anything can talk to the LLM. Core must be up before tools work.
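The dependency ordering described above can be sketched as a systemd unit file. This is an illustrative example only: the unit names (aicore-router.service, aicore-micro-llm.service) follow the aicore- prefix used elsewhere in this article, but the exact names, paths, and dependency edges are assumptions.

```ini
# Hypothetical sketch of aicore-consciousness.service.
# After= controls startup ordering; Requires= makes the dependency hard,
# so consciousness only starts once the router and LLM are up.
[Unit]
Description=Frank consciousness daemon (illustrative)
After=aicore-router.service aicore-micro-llm.service
Requires=aicore-router.service aicore-micro-llm.service

[Service]
ExecStart=/opt/frank/bin/consciousness
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With units wired this way, systemd derives the full startup order from the dependency graph; no separate orchestration layer is needed.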

Monitoring

The Neural Immune System watches all services for anomalies. systemctl status 'aicore-*' shows the state of every service at a glance, and journalctl -u aicore-consciousness -f tails the consciousness daemon's logs in real time.

Resource Usage

Total system footprint: ~4-6 GB RAM, <5% CPU when idle, 2.8 GB VRAM for the LLM. All neural networks (conscience, reality gate, subconscious, immune, cortex, hippocampus) run on CPU — total ~800K parameters, negligible overhead.
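The "negligible overhead" claim is easy to sanity-check: at ~800K parameters total, the CPU-side networks occupy only a few megabytes of RAM. A quick back-of-envelope calculation, assuming float32 (4-byte) weights:

```python
# Rough memory footprint of ~800K neural network parameters,
# assuming float32 weights (4 bytes each).
params = 800_000
bytes_per_param = 4
mib = params * bytes_per_param / 2**20
print(f"{mib:.1f} MiB")  # prints "3.1 MiB"
```

About 3 MiB against a 4-6 GB system footprint, so running these networks on CPU rather than the GPU is essentially free and keeps the 2.8 GB of VRAM dedicated to the LLM.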
