Lightweight Local Intelligence Layer
16 Total Brains: Site Operator, 0meg4kAI, Central Command, and 13 Cabinet Executive Brains.
I keep every brain in a real lane: each has a portrait, a cabinet scope, an operating surface, and a résumé when it represents one of the executive profiles. The public brain wall sells the architecture; this page lets me test and operate the live runtime.
16 brains loaded locally
Live routes
The brains know where to send people.
I use the selected brain to decide which live surface belongs in the conversation. Sales goes to Celeste and the proof router. Client work goes to Adrian and Client OS. Security goes through 0meg4kAI and FS27. Founder direction stays with Gray.
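One way that routing could look in code. This is a minimal sketch, not the live runtime: the surface identifiers and the `routeFor` function are illustrative assumptions; only the brain names and lane pairings come from the text above.

```typescript
// Illustrative surface identifiers — names are assumptions, not the real routes.
type Surface = "proof-router" | "client-os" | "fs27" | "founder-desk";

// Each lane maps a selected brain to the live surface it owns.
const laneRoutes: Record<string, Surface> = {
  Celeste: "proof-router",   // sales → proof router
  Adrian: "client-os",       // client work → Client OS
  "0meg4kAI": "fs27",        // security review → FS27
  Gray: "founder-desk",      // founder direction stays with Gray
};

// Resolve the surface for the active brain; fall back to the proof router.
function routeFor(brain: string): Surface {
  return laneRoutes[brain] ?? "proof-router";
}
```

A static lookup table like this keeps the routing auditable: adding a brain means adding one line, and the fallback guarantees every conversation lands on some surface.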
Live Proof Router
Buyer pain becomes the right proof link instead of a link dump.
Sales Enablement
Celeste's room for discovery, objections, proof packets, and close control.
Client OS
Adrian's room for onboarding, document requests, escalations, and renewal rhythm.
Proof Export
Victor's room for release receipts, claims sheets, link audits, and evidence.
Select a brain
Loading local brain profiles...
Ask the active brain
This uses local retrieval and cabinet scoping. The answer stays inside the selected executive's lane unless you choose the Central Company Command Brain.
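The lane-keeping rule above can be sketched as a retrieval filter. Assumptions: the `ProofDoc` shape, lane tag values, and function name are all hypothetical; only the behavior (stay in the selected executive's lane unless Central Command is active) comes from the text.

```typescript
// Hypothetical document shape for locally retrieved proof sources.
interface ProofDoc {
  id: string;
  lane: string; // cabinet scope tag, e.g. "sales" or "security" (illustrative)
  text: string;
}

const CENTRAL_COMMAND = "Central Command";

// Return only the documents inside the active brain's cabinet scope,
// unless the Central Company Command Brain is selected.
function scopedRetrieve(
  docs: ProofDoc[],
  brainLane: string,
  brainName: string,
): ProofDoc[] {
  if (brainName === CENTRAL_COMMAND) return docs; // cross-company view
  return docs.filter((d) => d.lane === brainLane); // stay in lane
}
```

Filtering at retrieval time, rather than in the prompt, is what keeps an answer from drifting outside the selected executive's lane even if the question invites it.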
Retrieved proof sources
Operating truth
These are not vague chatbot skins. They are 16 scoped operating brains: Site Operator for routing, 0meg4kAI for security and tenant-safe review, Central Command for cross-company questions, and 13 cabinet executive brains for the functional lanes. The runtime stays deployable as a static package while leaving a path to plug in Ollama, llama.cpp, or another OpenAI-compatible local endpoint later.
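A sketch of what that later plug-in path could look like. The request shape follows the OpenAI chat completions API that Ollama and llama.cpp's server both expose; the base URL, model name, and helper function are assumptions for illustration, not part of the current static package.

```typescript
// Message shape per the OpenAI-compatible chat completions API.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Build the request for an OpenAI-compatible local endpoint.
// Example baseUrl for Ollama: "http://localhost:11434/v1" (assumption).
function buildChatRequest(baseUrl: string, model: string, messages: ChatMessage[]) {
  return {
    url: `${baseUrl}/chat/completions`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, messages }),
    },
  };
}

// Usage sketch: the static runtime would hand this to fetch() when a
// local endpoint is configured, and fall back to local retrieval otherwise.
async function askLocalEndpoint(
  baseUrl: string,
  model: string,
  messages: ChatMessage[],
): Promise<string> {
  const { url, init } = buildChatRequest(baseUrl, model, messages);
  const res = await fetch(url, init);
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Because the three backends speak the same wire format, swapping one in later is a configuration change, not a rewrite — which is what keeps the package deployable as static files today.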