OPEN SOURCE · MIT LICENSE · EARLY SCOUT RELEASE
Gormes runs AI agents as a single static binary.
No Python runtime. No virtualenv repair. No backend service just to open the UI.
Start offline, prove the machine works, then add provider and gateway credentials.
Scout release. Useful today, still early.
The offline TUI, doctor diagnostics, provider one-shots, Goncho memory, the dashboard, and configured Telegram/Discord/Slack paths work today. Full Hermes parity is still hardening.
INSTALL
Start with the inspectable source path. The first proof needs no credentials, no model call, no Python, no Docker, and no Hermes. No Node runtime or npm is required either.
1. BUILD FROM SOURCE
git clone https://github.com/TrebuchetDynamics/gormes-agent.git
cd gormes-agent
make build
2. OFFLINE TUI
./bin/gormes --offline
3. LOCAL DOCTOR
./bin/gormes doctor --offline
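To make the offline step concrete, here is a minimal sketch of the kind of local, network-free readiness probes a doctor pass can run before any provider call or token spend. This is not Gormes's actual doctor implementation; the state-dir convention and check names are assumptions for illustration.

```go
// Illustrative offline doctor sketch: filesystem-only readiness checks,
// no network, no credentials. Not Gormes's real diagnostics code.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// offlineChecks probes local readiness only; it never touches the network.
func offlineChecks(stateDir string) ([]string, error) {
	var report []string

	// Ensure the state directory exists (assumed location for this sketch).
	if err := os.MkdirAll(stateDir, 0o755); err != nil {
		return nil, fmt.Errorf("state dir: %w", err)
	}
	report = append(report, "state dir exists: ok")

	// Prove the directory is writable with a probe file, then clean up.
	probe := filepath.Join(stateDir, ".doctor-probe")
	if err := os.WriteFile(probe, []byte("ok"), 0o644); err != nil {
		return nil, fmt.Errorf("state dir not writable: %w", err)
	}
	os.Remove(probe)
	report = append(report, "state dir writable: ok")

	report = append(report, "offline checks passed")
	return report, nil
}

func main() {
	lines, err := offlineChecks("./state")
	if err != nil {
		fmt.Println("doctor:", err)
		os.Exit(1)
	}
	for _, l := range lines {
		fmt.Println(l)
	}
}
```

The point of the pattern: everything that can fail locally fails here, before a single token is spent.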
Provider setup, gateway setup, and convenience installers come after the offline proof. Read the install docs ->
WHY GORMES
The model is not usually the fragile part. Operations are:
CGO_ENABLED=0 release builds produce a ~22.2 MB artifact for the runtime surface.
./bin/gormes --offline starts the native TUI without credentials, network calls, Python, Node, Docker, or Hermes.
./bin/gormes doctor --offline checks local readiness before provider calls or token spend.
One-shots and the TUI use configured provider-compatible endpoints from the same binary.
Sessions, durable context, diagnostics, and queue state stay in local SQLite.
Full Hermes parity, broad channel parity, voice/TTS, MCP/plugin parity, and release hardening remain in progress.
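The one-shot point above implies requests in a provider-compatible wire format. A minimal sketch of building such a request body follows; the schema and field names track the common chat-completions shape and are assumptions for illustration, not Gormes internals.

```go
// Sketch: serializing a one-shot prompt for a configured OpenAI-compatible
// chat endpoint. Field layout is the widely used chat-completions shape;
// it is an assumption here, not Gormes's actual wire code.
package main

import (
	"encoding/json"
	"fmt"
)

type chatMessage struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type chatRequest struct {
	Model    string        `json:"model"`
	Messages []chatMessage `json:"messages"`
}

// buildOneShot packs a single prompt into a provider-compatible request body.
func buildOneShot(model, prompt string) ([]byte, error) {
	return json.Marshal(chatRequest{
		Model:    model,
		Messages: []chatMessage{{Role: "user", Content: prompt}},
	})
}

func main() {
	body, err := buildOneShot("example-model", "ping")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```

Because the request is plain JSON over HTTP, the same binary can target any endpoint that speaks this shape once credentials are configured.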
BUILD STATE
Heading toward a production-stable Go-native runtime with signed releases and broader Hermes parity.
The offline path proves the runtime before provider calls, gateway traffic, or token spend.