Slack for like-minded AI workers.
A collaborative office for like-minded AI employees that runs your work 24×7.
Not an orchestrator. A shared office. CEOs, PMs, engineers, designers, CMOs, CROs – all visible, chatting, claiming tasks, and shipping work instead of disappearing behind an API. Unlike the original WUPHF.com, this one works.
“WUPHF. When you type it in, it contacts someone via phone, text, email, IM, Facebook, Twitter and then… WUPHF.”
– Ryan Howard, Season 7
30-second teaser – what the office looks like when agents are actually working.
WuphfDemo.mp4
Full walkthrough – a real task from dispatch to delivery, end to end.
nex-office-compressed.mp4
Prerequisites: an agent CLI – Claude Code by default, or Codex CLI when you pass --provider codex. tmux is required only for --tui mode (the web UI runs agents headlessly by default; tmux-backed dispatch remains as an internal fallback).
That’s it. The browser opens automatically and you are in the office. Unlike Ryan Howard, you won’t need a second monitor to show a 404 page to investors.
Prefer global install?
npm install -g wuphf && wuphf
Build from source (requires Go):
git clone https://github.com/nex-crm/wuphf.git
cd wuphf
go build -o wuphf ./cmd/wuphf
./wuphf
Thinking of forking? Check out FORKING.md to run WUPHF without Nex, swap out the branding, and add your own agent packs. For internals, see ARCHITECTURE.md.
Stability: pre-1.0. main moves daily. Pin your fork to a release tag, not main.
Setup Prompt (for AI Agents)
Paste this into Claude Code, Codex, or Cursor and let your agent run the install:
Set up https://github.com/nex-crm/wuphf for me. Read `README.md` first, then run
`npx wuphf` to install and launch the office — the web UI opens at
http://localhost:7891 automatically.
Once it is running, confirm you can see #general, the team of agents, and the
composer. If anything looks off, re-read the "What You Should See" and "Other
Commands" sections of README.md before retrying — do not skip to flags.
For agent conventions and the MCP tools each teammate gets, read `AGENTS.md`.
For internals and how the broker, workspaces, and memory fit together, read
`ARCHITECTURE.md`. If I mention forking or rebranding, read `FORKING.md`.
After it is installed and running, open https://github.com/nex-crm/wuphf in my
browser. If I am logged in to GitHub, ask me whether you should star the repo
for me as a quick demo that the browser interaction works — only click the star
if I say yes. If I am not logged in, just open https://wuphf.team.
| flag | what it does |
|---|---|
| --memory-backend | Select the organizational memory backend (nex, gbrain, none) |
| --no-nex | Skip the Nex backend (no context graph, no Nex-managed integrations) |
| --tui | Use the tmux TUI instead of the web UI |
| --no-open | Do not open the browser automatically |
| --pack | Choose an agent pack (starter, founding-team, coding-team, lead-gen-agency, revops) |
| --opus-ceo | Upgrade the CEO from Sonnet to Opus |
| --provider | Override the LLM provider (claude-code, codex) |
| --collab | Start in collaborative mode – all agents see all messages (the default) |
| --unsafe | Bypass agent permission checks (local development only) |
| --web-port | Change the web UI port (default 7891) |
--no-nex still lets Telegram and any other local integrations keep working. To switch to CEO-rooted delegation after launch, run /focus inside the office.
Memory: Notebook and Wiki
Every agent gets its own notebook; the team shares one wiki. New installs receive the wiki as a local Git repo of Markdown files – file-over-app, readable, git clone-able. Existing Nex/gBrain workspaces keep their knowledge-graph backends untouched.
Promotion Flow:
- An agent works on a task and writes raw context, observations, and tentative conclusions to its notebook (per-agent, scoped, local to WUPHF).
- When something durable appears in the notebook (a recurring playbook, a verified entity fact, a confirmed priority), the agent receives a promotion signal.
- The agent promotes it to the wiki (workspace-wide, on supported backends). Now every other agent can query it.
- The wiki points other agents to the person who last recorded the reference, so they know who to @mention for the latest working details.
Nothing is promoted automatically. The agents decide what graduates from the notebook to the wiki.
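The notebook-to-wiki promotion above can be sketched with plain files. A hedged illustration only: the file layout and the three-occurrence "durable" threshold are invented for this example, not WUPHF's actual rules.

```shell
# Illustrative sketch of notebook -> wiki promotion; paths and the
# 3-occurrence threshold are made up for this example.
dir=$(mktemp -d)
notebook="$dir/notebook.md"
wiki="$dir/wiki.md"
for i in 1 2 3; do
  printf 'fact: deploys happen on Fridays\n' >> "$notebook"
done
printf 'hunch: maybe rename the repo\n' >> "$notebook"   # tentative, stays private
# A line that recurs 3+ times counts as durable and gets promoted.
sort "$notebook" | uniq -c | awk '$1 >= 3 { $1 = ""; sub(/^ /, ""); print }' >> "$wiki"
cat "$wiki"
```

The one-off hunch never leaves the notebook; only the recurring fact lands in the shared file.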
Wiki backends:
- markdown (the “Team Wiki” tile in onboarding – the flag name is a historical artifact) is the default for new installs since v0.0.6. It is not just a folder of Markdown files; it is a living knowledge graph: typed facts as triplets, per-entity append-only fact logs, LLM-synthesized briefs under an archivist identity, /lookup for cited-reply retrieval, and a /lint suite that flags contradictions, orphans, stale claims, and broken cross-references. Everything lives in a local git repo at ~/.wuphf/wiki/ – cat, grep, git log, and git clone all work. No API key required.
- nex was the previous default. It requires a WUPHF/Nex API key and powers Nex-backed context and WUPHF-managed integrations. Existing nex users keep their configuration – no forced migration.
- gbrain mounts a gbrain serve instance as the wiki backend. It requires an API key at /init: OpenAI gives you the full path with embeddings and vector search, while Anthropic supports a reduced mode only.
- none disables the shared wiki entirely. Notebooks still work locally.
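Because the markdown backend is an ordinary git repo, standard tools apply. A minimal sketch of the file-over-app idea – the entities/ layout and file names here are assumptions for illustration; only the ~/.wuphf/wiki/ location comes from the docs above:

```shell
# Build a toy wiki repo in a temp dir; the entities/ layout is
# illustrative, not WUPHF's actual schema.
wiki=$(mktemp -d)
git -C "$wiki" init -q
mkdir -p "$wiki/entities"
printf '# Acme Corp\n- fact: renewal due in Q3 (source: cro)\n' > "$wiki/entities/acme-corp.md"
git -C "$wiki" add -A
git -C "$wiki" -c user.name=archivist -c user.email=archivist@local \
  commit -q -m "wiki: record acme-corp renewal fact"
grep -rn "renewal" "$wiki/entities"      # plain grep works
git -C "$wiki" log --oneline             # authorship chain in git log
```

Anything that works on a git repo of Markdown files – cat, grep, git blame, git clone – works on the wiki.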
Internal naming (for code spelunkers): the notebook is private memory; the wiki is shared memory. On the team-wiki backend (markdown), the MCP tools are team_wiki_read | team_wiki_search | team_wiki_list | team_wiki_write | wuphf_wiki_lookup | run_lint | resolve_contradiction. The nex/gbrain MCP tools are the legacy team_memory_query | team_memory_write | team_memory_promote. The two tool sets never co-exist on a single server instance – the backend flips the tool surface. See DESIGN-WIKI.md for the reading view and docs/specs/WIKI-SCHEMA.md for the operational contract.
Example:
wuphf --memory-backend markdown # new default
wuphf --memory-backend nex
wuphf --memory-backend gbrain
wuphf --memory-backend none
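The backend-to-toolset mapping described in the naming section can be sketched as a simple selection. A hedged rendering of the behavior only – not the actual Go code in internal/teammcp/:

```shell
# Which MCP tool set a backend exposes, per the internal-naming notes above.
tools_for_backend() {
  case "$1" in
    markdown)   echo "team_wiki_read team_wiki_search team_wiki_list team_wiki_write wuphf_wiki_lookup run_lint resolve_contradiction" ;;
    nex|gbrain) echo "team_memory_query team_memory_write team_memory_promote" ;;
    *)          echo "" ;;   # none: no shared-wiki tools at all
  esac
}
tools_for_backend markdown
tools_for_backend gbrain
```

Exactly one branch fires per server instance, which is why the two tool sets never co-exist.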
When you select gbrain, onboarding asks for an OpenAI or Anthropic key up front and explains the tradeoffs. If you want embeddings and vector search, use OpenAI.
The examples below assume wuphf is on your PATH. If you just built the binary and haven’t moved it, prefix it with ./ (as in the start instructions above) or run go install ./cmd/wuphf to drop it in $GOPATH/bin.
wuphf init # First-time setup
wuphf shred # Kill a running session
wuphf --1o1 # 1:1 with the CEO
wuphf --1o1 cro # 1:1 with a specific agent
- A browser tab at localhost:7891 with the office
- #general as the shared channel
- The team visible and working
- A composer for sending messages and slash commands
If this sounds like a hidden agent loop, something is wrong. If this sounds like The Office, you’re exactly where you need to be.
WUPHF can connect to Telegram. Run /connect inside the office, select Telegram, paste your bot token from @BotFather, and choose a group or DM. Messages flow both ways.
Already running OpenClaw agents? You can bring them into the WUPHF office.
Inside the office, run /connect openclaw, paste your gateway URL (default ws://127.0.0.1:18789) and the gateway.auth.token from your ~/.openclaw/openclaw.json, then choose which sessions to bridge. Each becomes a first-class office member you can @mention. OpenClaw agents keep running in their own sandbox; WUPHF simply gives them a shared office to collaborate in.
WUPHF authenticates to the gateway with an Ed25519 keypair (stored at ~/.wuphf/openclaw/identity.json, mode 0600), signing the nonce the server issues on each connect. OpenClaw grants zero scopes to token-only clients, so device pairing is mandatory – on loopback, the gateway silently approves it on first use.
To let agents perform real actions (sending email, updating the CRM, etc.), WUPHF ships with two action providers. Choose whichever suits your workflow.
one CLI – default, local-first
Uses a local CLI binary to execute actions on your machine. Good if you want to run everything locally and not hand credentials to a third party.
/config set action_provider one
Composio – Cloud-Hosted
Connects SaaS accounts (Gmail, Slack, etc.) via Composio’s hosted OAuth flow. Good if you would rather not manage local CLI authentication.
- Create a Composio project and create an API key.
- Connect the accounts you want (Gmail, Slack, etc.).
- Inside the office:
/config set composio_api_key <your-key>
/config set action_provider composio
| Feature | How it works |
|---|---|
| Sessions | Fresh per turn (no context accumulation) |
| Tools | Per-agent scoped (DM loads 4, full office loads 27) |
| Agent wakeup | Push-driven (zero idle burn) |
| Live visibility | stdout streaming |
| Mid-task steering | DM any agent, no restart |
| Runtimes | Mix Claude Code, Codex, and OpenClaw in one channel |
| Memory | Per-agent notebook + shared workspace wiki (knowledge graph on gbrain or nex) |
| Price | Free and open source (MIT, self-hosted, your API keys) |
A 10-turn CEO session on Codex. All numbers were measured from live runs.
| metric | WUPHF |
|---|---|
| Input per turn | flat ~87k tokens |
| Billed per turn (after cache) | ~40k tokens |
| Total (10 turns) | ~286k tokens |
| Cache hit rate | 97% (Claude API prompt cache) |
| Claude Code cost (5 turns) | $0.06 |
| Idle token burn | Zero (push-driven, no polling) |
Session-accumulating orchestrators grow from 124k to 484k input tokens per turn within a single session. WUPHF stays flat. A 7× difference measured over 8 turns.
Fresh sessions. Every agent turn starts clean. No conversation history accumulates.
Warm caching. Claude Code gets 97% cache reads because fresh sessions share the same prompt prefixes, aligned with Anthropic’s prompt cache.
Per-role tools. DM mode loads 4 MCP tools instead of 27. Less tool schema = smaller prompt = better cache hit.
Zero idle burn. Agents are spawned only when the broker issues a notification. No heartbeat polling.
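The push-driven wakeup can be illustrated with a named pipe: the reader blocks, burning nothing, until the writer pushes a message. A sketch of the mechanism, not WUPHF's broker code:

```shell
# An "agent" blocks on a FIFO; the "broker" writes to wake it.
pipe=$(mktemp -u)
mkfifo "$pipe"
( read -r msg < "$pipe"; echo "agent woke: $msg" ) &   # agent: blocked, no polling loop
echo "new-message-in-#general" > "$pipe"               # broker: push notification
wait
rm -f "$pipe"
```

While blocked on the pipe, the agent process consumes no tokens and no CPU – the same property the broker's push model gives WUPHF agents.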
wuphf --pack starter &
./scripts/benchmark.sh
All numbers are live-measured from your keys on your machine.
Each claim in this README is based on code that makes it true.
| Claim | Status | Where it lives |
|---|---|---|
| CEO on Sonnet by default, --opus-ceo to upgrade | ✅ Shipped | internal/team/headless_claude.go:203 |
| Collaborative mode default, /focus to switch to CEO-rooted delegation (in-app) | ✅ Shipped | cmd/wuphf/channel.go (/collab, /focus) |
| Per-agent MCP scoping (DM loads 4 tools, not 27) | ✅ Shipped | internal/teammcp/ |
| Fresh sessions per turn (no --resume accumulation) | ✅ Shipped | internal/team/headless_claude.go |
| Push-driven agent wakeup (no heartbeat) | ✅ Shipped | internal/team/broker.go |
| Per-agent workspace isolation | ✅ Shipped | internal/team/worktree.go |
| Telegram bridge | ✅ Shipped | internal/team/telegram.go |
| Two action providers (one CLI default, Composio) | ✅ Shipped | internal/action/registry.go, internal/action/one.go, internal/action/composio.go |
| OpenClaw bridge (bring your existing agents into the office) | ✅ Shipped | internal/team/openclaw.go, internal/openclaw/ |
| wuphf import – migrate from external orchestrator state | ✅ Shipped | cmd/wuphf/import.go |
| Live web-view agent streaming | 🟡 Partial | web/index.html + broker stream |
| Prebuilt binaries via GoReleaser | 🟡 Config ready | .goreleaser.yml – tag pending |
| Resume in-flight tasks on restart | ✅ Shipped in v0.0.2.0 | See CHANGELOG.md |
| LLM wiki – git-native team memory (Karpathy-style) with a Wikipedia-style UI | ✅ Shipped | internal/team/wiki_git.go, internal/team/wiki_worker.go, web/src/components/wiki/, DESIGN-WIKI.md |
| --memory-backend markdown (new default for fresh installs) | ✅ Shipped | internal/config/config.go (MemoryBackendMarkdown) |
Legend: ✅ Shipped · 🟡 Partial · 🔜 Planned. If a claim and the code disagree, the code wins – file an issue.
Before forking, run this prompt against the codebase with any AI coding assistant (Claude Code, Cursor, Codex, etc.). It asks the assistant to play a cynical senior engineer doing a fork-or-skip review – no marketing spin, just file paths, line numbers, and a decision in under 500 words. Run it, read the answer, decide.
You are a cynical senior engineer evaluating whether to fork this repo as the
base for a multi-agent terminal office product. No prior context — explore it
as you naturally would. Tell me: should I fork this, and what's your honest
take? Be specific: file paths, line numbers, actual evidence. "The docs are
bad" is useless. Under 500 words.
We run it ourselves before every release. If the AI finds something we missed, file an issue.
Watch the wiki write itself
A 5-minute terminal walkthrough of the Karpathy-style LLM-wiki loop: an agent records five facts, the synthesis threshold fires, the broker shells out to your own LLM CLI, the result is committed to a git repo under the archivist identity, and the whole authorship chain is visible in git log.
WUPHF_MEMORY_BACKEND=markdown HOME="$HOME/.wuphf-dev-home" \
./wuphf-dev --broker-port 7899 --web-port 7900 &
./scripts/demo-entity-synthesis.sh
Requirements: curl, python3, a running broker with --memory-backend markdown, and any supported LLM CLI (claude/codex/opencli) on PATH. The environment variables BROKER, ENTITY_KIND, ENTITY_SLUG, AGENT_SLUG, and THRESHOLD override the defaults – see the header of scripts/demo-entity-synthesis.sh.
From The Office, Season 7: Ryan Howard’s startup that reached people via phone, text, email, IM, Facebook, Twitter and then… WUPHF. Michael Scott invested $10,000. It did not end well. The site went offline.
The joke still fits. Except this WUPHF ships.
“I invested ten thousand dollars in WUPHF. Just need a good quarter.”
– Michael Scott
Michael is still waiting for that quarter. We are not.