## Two Installation Methods
### Docker (Recommended)
```bash
git clone https://github.com/gschaidergabriel/Project-Frankenstein.git
cd Project-Frankenstein
docker compose build
docker compose up -d
```
That's it. Docker handles everything: the Python environment, dependencies, llama.cpp compilation, model downloads, and service setup. CPU, NVIDIA GPU (CUDA), and AMD/Intel GPU (Vulkan) backends are detected automatically.
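To give a feel for how backend auto-detection can work, here is a minimal Python sketch. It is an illustration only, not Project-Frankenstein's actual detection code: the device paths checked and the preference order (CUDA, then Vulkan, then CPU) are assumptions.

```python
import os
import shutil

def detect_backend() -> str:
    """Illustrative backend auto-detection sketch (not the project's
    real logic): prefer CUDA, then Vulkan, then fall back to CPU."""
    # The NVIDIA driver exposes /dev/nvidiactl; nvidia-smi confirms tooling
    if os.path.exists("/dev/nvidiactl") and shutil.which("nvidia-smi"):
        return "cuda"
    # A DRM render node (renderD128, ...) suggests a Vulkan-capable GPU
    try:
        if any(n.startswith("renderD") for n in os.listdir("/dev/dri")):
            return "vulkan"
    except FileNotFoundError:
        pass  # no /dev/dri at all: headless VM or container without GPU
    return "cpu"

print(detect_backend())
```

The same probe-and-fall-back pattern applies whether the check runs in an entrypoint script or in Python.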
After startup: open http://localhost:8099 in your browser.
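If you want to wait for the Web UI from a script rather than a browser, a small stdlib health check works. This is a generic sketch; it only assumes the UI answers plain HTTP on port 8099, as stated above.

```python
import urllib.request
import urllib.error

def is_up(url: str, timeout: float = 2.0) -> bool:
    """Return True if the URL answers with a non-error HTTP status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, OSError):
        return False  # connection refused, DNS failure, timeout, ...

print(is_up("http://localhost:8099"))
```

A startup script could poll this in a loop with a short sleep until it returns `True`.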
### Native Linux
```bash
git clone https://github.com/gschaidergabriel/Project-Frankenstein.git
cd Project-Frankenstein
python3 install_wizard.py
```
The install wizard guides you through:
- Python venv — creates isolated environment, installs pip dependencies
- llama.cpp — compiles from source with Vulkan backend (auto-detects GPU)
- Model download — fetches Qwen 2.5 3B (Q4_K_M, ~2.2 GB) + LoRA adapter (~115 MB)
- Systemd services — installs 25+ service units with dependency ordering
- Database initialization — creates 25 SQLite databases in WAL mode
- Configuration — sets ports, paths, and hardware-specific settings
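The WAL (write-ahead logging) step deserves a note: WAL lets one writer and many concurrent readers share a database, which suits long-running services. A minimal sketch of what WAL initialization looks like with Python's stdlib `sqlite3` (the database names below are hypothetical, not the wizard's actual files):

```python
import sqlite3
from pathlib import Path

def init_db(path: Path) -> str:
    """Create a database file and switch it to WAL journaling."""
    con = sqlite3.connect(path)
    # PRAGMA journal_mode returns the resulting mode ("wal" on success)
    mode = con.execute("PRAGMA journal_mode=WAL;").fetchone()[0]
    con.execute("PRAGMA synchronous=NORMAL;")  # common pairing with WAL
    con.close()
    return mode

# Hypothetical database names, for illustration only
for name in ("memory", "scheduler"):
    print(name, init_db(Path(f"/tmp/{name}.db")))
```

WAL mode is persistent: once set, it survives across connections, so the wizard only needs to run this once per database.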
## Requirements
| Component | Minimum | Recommended |
|---|---|---|
| OS | Ubuntu 24.04+ | Ubuntu 24.04 LTS |
| Python | 3.12+ | 3.12 |
| RAM | 16 GB | 24 GB |
| GPU | Vulkan-capable | Dedicated GPU (6+ GB VRAM) |
| Disk | 10 GB | 20 GB SSD |
| CPU | x86_64 | AMD Ryzen 7 / Intel i7 |
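The minimums in the table can be checked before running the wizard. A hedged stdlib sketch (Linux-only, since it reads RAM from `/proc/meminfo`; `check_minimums` is an illustrative helper, not part of the project):

```python
import platform
import shutil
import sys

def check_minimums(install_dir: str = ".") -> dict:
    """Compare this machine against the minimum-requirements table."""
    with open("/proc/meminfo") as f:
        mem_kb = int(f.readline().split()[1])  # first line is MemTotal
    free_gb = shutil.disk_usage(install_dir).free / 1e9
    return {
        "python": sys.version_info >= (3, 12),
        "ram_16gb": mem_kb / 1e6 >= 16,
        "disk_10gb": free_gb >= 10,
        "x86_64": platform.machine() == "x86_64",
    }

print(check_minimums())
```

A wrapper can abort the install with a clear message when any entry is `False`.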
## After Installation
```bash
# Start all services
sudo systemctl start aicore-*.service

# Check status
systemctl status aicore-*

# Open Web UI
xdg-open http://localhost:8099

# View logs
journalctl -u aicore-consciousness -f
```
## Updating
```bash
cd Project-Frankenstein
git pull

# Restart services
sudo systemctl restart aicore-*.service
```
Frank's databases and personality survive updates. Only code and model weights change.
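Even so, taking a backup before `git pull` is cheap insurance. Since the databases may be open by running services, SQLite's online backup API is safer than a plain `cp`, which can catch a WAL database mid-write. A sketch (the `data/*.db` layout is assumed for illustration):

```python
import sqlite3
from pathlib import Path

def backup_db(src: Path, dest: Path) -> None:
    """Copy a live SQLite database using the online backup API,
    which produces a consistent snapshot even under WAL."""
    con = sqlite3.connect(src)
    out = sqlite3.connect(dest)
    try:
        con.backup(out)
    finally:
        out.close()
        con.close()

# Assumed layout: back up every database before updating
for db in Path("data").glob("*.db"):
    backup_db(db, db.with_suffix(".db.bak"))
```

Restoring is the same call with the arguments swapped, done while the services are stopped.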