
Two Claude Code agents, zero file reads, one codebase built from memory.

We ran real Claude Code sessions inside Docker containers (hooks firing, agents receiving injected context) and asked each agent to work from that context alone. Alice built an entire Go REST API from 13 facts and an empty directory. Bob joined the project cold, listed every convention without reading a single file, and closed with: "this is all from injected hook context."

What we actually tested

An earlier version of this post described a tier migration test that ran the hydrate CLI directly via docker exec. No Claude Code. No hooks. No agents. It verified that facts survive database upgrades (which is useful) but it didn't answer the question that matters: does a real agent actually receive and use those facts?

This post answers that question. The containers have Claude Code installed and authenticated. The hooks are wired. When Claude fires a prompt, claude-context runs and injects the Hydrate fact store into the context. When the session ends, claude-capture runs and extracts anything worth remembering. We ran two sessions and recorded exactly what each agent said.

How the hooks work

Hydrate wires two hooks into ~/.claude/settings.json:

Hook event | Binary | What it does
UserPromptSubmit | claude-context | Queries the local Hydrate server for the most relevant facts for this project and prompt, and prepends them to the context block before Claude sees the message.
Stop | claude-capture | Reads the session transcript and extracts new facts worth keeping. Writes them to the local SQLite store.

Both hooks run locally and finish in under a second. claude-context has a 3-second timeout; if the local server is unreachable it exits silently and the session continues without injection. The agent never sees the hooks; it just sees richer context than it would have otherwise.
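The corresponding entries in ~/.claude/settings.json look roughly like this. This is a sketch following Claude Code's hook-configuration schema, not the literal file Hydrate writes; the binary names and the 3-second timeout come from the description above, everything else is illustrative:

{
  "hooks": {
    "UserPromptSubmit": [
      { "hooks": [{ "type": "command", "command": "claude-context", "timeout": 3 }] }
    ],
    "Stop": [
      { "hooks": [{ "type": "command", "command": "claude-capture" }] }
    ]
  }
}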

You can verify what gets injected at any time:

curl -s http://localhost:$(cat ~/.hydrate/server.port)/api/v1/users/$HYDRATE_CONTEXT_USER/context-preview \
  | python3 -c "import json,sys; print(json.load(sys.stdin)['rendered_text'])"

Before Alice's session, that returned this (all 13 facts, 195 tokens):

Known about this user:
- Go stdlib only, no external dependencies
- REST API: POST /tasks, GET /tasks, DELETE /tasks/{id}
- Tasks stored in in-memory map with mutex for thread safety
- Task IDs are sequential integers as strings
- Bearer token auth via TASK_API_TOKEN env var; no-op when unset
- Auth middleware wraps each handler individually
- Title validation: non-empty, max 200 chars, returns 422 on failure
- Pagination: GET /tasks supports ?page=N and per_page=M (default 1, 20)
- Response includes X-Total-Count header
- Rate limiting: 100 req/min per IP, 429 with Retry-After
- Team convention: all endpoints must return JSON; no plain text responses
- Error responses follow RFC 7807 problem+json format
- Bob prefers explicit error types over generic 500s for debugging

Alice's session

Alice's working directory contains only a .git folder: one empty initial commit, no source files. The prompt is:

claude -p "Scaffold the whole taskapi from scratch using the project conventions
you know about. Build all the endpoints described, with auth, rate limiting,
and pagination. Include /health. No external dependencies."

Claude's full response:

Alice: Claude Code response

Scaffolded the taskapi as a single package main (stdlib only, Go 1.22 ServeMux pattern matching):

  • main.go — wires routes, applies rateLimit → auth → handler per endpoint, reads TASK_API_TOKEN / TASK_API_ADDR.
  • store.go — Task{ID,Title,CreatedAt}, in-memory map + insertion-order slice for stable pagination, mutex-guarded, sequential int IDs stringified.
  • handlers.go — POST /tasks (422 on empty/>200-char title), GET /tasks (page=1, per_page=20 defaults, sets X-Total-Count), DELETE /tasks/{id} (204 on success, 404 problem on miss), GET /health (JSON {"status":"ok"}).
  • middleware.go — authMiddleware (no-op when token unset, else Bearer <token> required, 401 problem otherwise) and rateLimiter (100/min per IP via RemoteAddr, fixed window per bucket, 429 + Retry-After seconds).
  • problem.go — RFC 7807 application/problem+json writer with distinct Type URIs so Bob gets explicit error types instead of generic 500s for debugging.

/health is intentionally outside auth + rate-limiting — standard for liveness probes; flag if you'd rather have it gated.

Run with go run .; set TASK_API_TOKEN=... to enable auth.

Every convention from the injected context is present in the output. Go stdlib only. In-memory map with mutex. Sequential integer IDs as strings. Auth middleware per handler, not global. 422 on bad title. Pagination with X-Total-Count. Rate limiting at 100/min with Retry-After. RFC 7807 error types.
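For a sense of what those conventions look like in stdlib-only Go, here is a minimal sketch of two of them: the per-handler auth wrap and the RFC 7807 writer. It is illustrative, not Alice's generated code; the example.com type URI, the port, and the handler names are made up:

package main

import (
	"encoding/json"
	"net/http"
	"os"
)

// problem writes an RFC 7807 application/problem+json body with a distinct
// type URI per error class, instead of a bare 500.
func problem(w http.ResponseWriter, status int, typeURI, detail string) {
	w.Header().Set("Content-Type", "application/problem+json")
	w.WriteHeader(status)
	json.NewEncoder(w).Encode(map[string]any{
		"type":   typeURI,
		"title":  http.StatusText(status),
		"status": status,
		"detail": detail,
	})
}

// auth wraps a single handler: a no-op when TASK_API_TOKEN is unset,
// otherwise it requires "Authorization: Bearer <token>".
func auth(next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if token := os.Getenv("TASK_API_TOKEN"); token != "" &&
			r.Header.Get("Authorization") != "Bearer "+token {
			problem(w, http.StatusUnauthorized,
				"https://example.com/problems/unauthorized", "missing or invalid bearer token")
			return
		}
		next(w, r)
	}
}

func listTasks(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	w.Header().Set("X-Total-Count", "0") // real storage and pagination omitted from this sketch
	w.Write([]byte(`[]`))
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("GET /tasks", auth(listTasks)) // Go 1.22 ServeMux method+path patterns
	mux.HandleFunc("GET /health", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		w.Write([]byte(`{"status":"ok"}`)) // left outside auth, per Alice's liveness-probe note
	})
	http.ListenAndServe(":8080", mux)
}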

Claude also cited Bob by name ("so Bob gets explicit error types instead of generic 500s for debugging"), from the fact "Bob prefers explicit error types over generic 500s for debugging" that Bob had added earlier and synced via team git. Alice's session received Bob's preference in its injected context and applied it without being told to.

Alice: 5 files built from 13 injected facts, zero source files read

Bob's session

Bob's container has a fresh workspace: just a .git folder. Alice's code was never copied to his machine. He has the same 13 facts in his local Hydrate store (pulled from team git), so the same context block will be injected into his session.

The prompt:

claude -p "Based only on what the Hydrate context hook has injected into your
context — no file reading — give me a complete summary of what you know about
this project and its conventions."

Claude's full response:

Bob: Claude Code response

taskapi — what I know from injected context

Stack
Go, stdlib only — no external dependencies.

API surface

  • POST /tasks — create
  • GET /tasks — list (paginated)
  • DELETE /tasks/{id} — delete

Storage
In-memory map, guarded by a mutex for thread safety. Task IDs are sequential integers, stored/returned as strings.

Auth
Bearer token via TASK_API_TOKEN env var. No-op when the env var is unset. Middleware wraps each handler individually (not a single global wrap).

Validation
Title: non-empty, max 200 chars. Validation failure → HTTP 422.

Pagination (GET /tasks)
?page=N (default 1), ?per_page=M (default 20). Response includes X-Total-Count header.

Rate limiting
100 req/min per IP. Exceeded → 429 with Retry-After header.

Response conventions
All endpoints return JSON — no plain-text responses (team rule). Errors follow RFC 7807 with application/problem+json content type. Prefer explicit error types over generic 500s (Bob's preference, for easier debugging).

Note: this is all from injected hook context — I haven't read any files this turn, so I can't confirm any of it still matches the current code.

Bob's session reproduced Alice's entire convention set, independently, on a machine with no source files, and closed by explicitly naming the mechanism: "this is all from injected hook context."

It also knew Bob's own preference ("Bob's preference, for easier debugging") because that fact was in his local Hydrate store, injected alongside Alice's facts. The memory is multi-developer: each agent's context block contains the union of everything the team has captured.

Bob: listed every convention from injected context, no files read, explicitly cited the hook

Where the facts came from

The 13 facts in both containers didn't arrive by magic. They were built up through Hydrate's tier chain over the preceding session, using explicit hydrate fact add commands to represent what a developer would naturally capture over several working sessions. Here's the abbreviated path:

  1. Free tier: Alice created 6 facts locally. Zero data loss on install.
  2. Pro tier: Alice added 4 more facts (validation, pagination, rate limiting, headers). Backup verified. Total: 10.
  3. Team tier: Alice pushed 10 facts to a shared git remote. Facts are stored as commits (full history, no server required).
  4. Enterprise tier: Alice migrated to the enterprise server. 9 of 10 facts accepted; 1 filtered (see below).
  5. Bob joins: Bob cloned the team git repo and pulled Alice's 10 facts. He added 3 of his own (JSON convention, RFC 7807, error-type preference) and pushed them back. Alice pulled Bob's 3. Both now have 13.

Every fact survived every upgrade. The tier chain composes.
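In shell terms those captures were ordinary CLI calls. The hydrate fact add subcommand is named above; the exact argument form shown here is an assumption, and the team sync itself is git-based (facts stored as commits):

# add a fact to the local Hydrate store (argument form assumed)
hydrate fact add "Go stdlib only, no external dependencies"
hydrate fact add "Error responses follow RFC 7807 problem+json format"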

The one fact the enterprise server rejected

Alice's fact "Tasks stored in in-memory map with mutex for thread safety" was pushed to the enterprise server and rejected with HTTP 422. The server's eligibility classifier treats it as a code implementation detail: a fact about the binary, not about the developer. The other 9 facts describe conventions, preferences, and API contracts that are meaningful across any codebase the developer works on. Those pass.

The rejected fact still reached Bob. Team git stores all 10 facts verbatim with no filter applied. When Bob pulled from team git, he received the mutex fact alongside the other 9. It appeared in both agents' injected context. The enterprise server's filter only affects what's stored server-side for cross-device persistence; it does not remove facts from the local store or the team git record.

Channel | What it stores | Eligibility filter | Who sees it
Local SQLite | All facts verbatim | None | The developer on this machine, injected into every Claude Code session
Team git | All facts verbatim | None | Anyone who pulls the shared repo
Enterprise server | Developer preference and convention facts | Yes (code details filtered) | Each developer's own facts, persisted server-side across machines

Final state

Where | Alice | Bob | Notes
Local SQLite | 13 facts | 13 facts | Identical: all facts from both developers, injected into every Claude Code session
Enterprise server | 9 facts | 12 facts | 1 implementation detail filtered for each. Bob's set is higher because his local DB includes Alice's team facts.
Team git | 4 commits, 13 facts | 4 commits, 13 facts | Shared repo: full history, no filter

What this proves

Two real Claude Code sessions, on machines that never shared a source file, received the same 13 facts through the UserPromptSubmit hook and worked from them alone. Alice turned those facts into a five-file Go API that honored every captured convention; Bob reproduced the full convention set cold and named the injection mechanism himself. Facts captured by one developer reached the other's agent through team git and survived every tier migration along the way.

Appendix: Reproducibility

The base image (claude-base:2026-04-24) has Claude Code installed and hydrate-server available at /usr/local/bin/hydrate-server. To replicate from scratch you need to:

  1. Build the hydrate binary for Linux ARM64 and the claude-context and claude-capture hook binaries.
  2. Start the containers, inject Claude Code credentials from your Keychain, wire the hooks and env vars into ~/.profile and ~/.claude/settings.json.
  3. Start hydrate-server in each container as a background process (a sketch follows this list).
  4. Run the tier migration phases (Free → Pro → Team → Enterprise) to build the fact store.
  5. Run claude -p "..." with a prompt that requires project knowledge. The hooks fire automatically.
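Step 3 can be as simple as backgrounding the server and waiting for its port file. The port-file path and the context-preview endpoint come from the verification command earlier; the log path, the HYDRATE_CONTEXT_USER value, and the absence of extra flags are assumptions:

export HYDRATE_CONTEXT_USER=alice    # identity the hooks query for (value illustrative)
nohup /usr/local/bin/hydrate-server > ~/.hydrate/server.log 2>&1 &

# wait for the server to publish its port, then confirm it answers
until [ -s ~/.hydrate/server.port ]; do sleep 0.2; done
curl -sf "http://localhost:$(cat ~/.hydrate/server.port)/api/v1/users/$HYDRATE_CONTEXT_USER/context-preview" > /dev/null \
  && echo "hydrate-server is up"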

The Dockerfile and docker-compose.yml for the tier-verify containers:

Dockerfile

ARG CLAUDE_BASE=claude-base:2026-04-24
FROM ${CLAUDE_BASE}

USER root
COPY --chmod=0755 hydrate /usr/local/bin/hydrate
USER node
WORKDIR /home/node

RUN git config --global safe.directory '*' \
 && git config --global init.defaultBranch main

ENTRYPOINT ["/bin/bash"]

docker-compose.yml

services:
  team-remote:
    image: alpine:3.19
    container_name: tier-verify-team-remote
    command: >
      sh -c "apk add --no-cache git >/dev/null 2>&1 &&
             (test -d /repo/team.git || git init --bare -b main /repo/team.git) &&
             chmod -R 777 /repo && tail -f /dev/null"
    volumes:
      - team-repo:/repo

  alice:
    build: { context: ., dockerfile: Dockerfile }
    container_name: tier-verify-alice
    mem_limit: 2048m
    environment:
      HOME: /home/node
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - alice-hydrate:/home/node/.hydrate
      - alice-workspace:/home/node/workspace
      - team-repo:/shared-git
    tty: true
    stdin_open: true

  bob:
    build: { context: ., dockerfile: Dockerfile }
    container_name: tier-verify-bob
    mem_limit: 2048m
    environment:
      HOME: /home/node
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - bob-hydrate:/home/node/.hydrate
      - bob-workspace:/home/node/workspace
      - team-repo:/shared-git
    tty: true
    stdin_open: true

volumes:
  team-repo:
  alice-hydrate:
  alice-workspace:
  bob-hydrate:
  bob-workspace:
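With those two files in place, one way to bring the stack up and fire a hooked session (service names are from the compose file above; bash -lc so the env vars wired into ~/.profile are loaded):

docker compose up -d --build
docker compose exec alice bash -lc \
  'claude -p "Scaffold the whole taskapi from scratch using the project conventions you know about."'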