BETA Hydrate is in beta. Register during beta and lock $5/mo Pro forever - free during beta + 1 month after v1 launches. What this means →
Press & creators

Try Hydrate. Run the tests. Write the truth.

This page is for journalists, YouTubers, and developers who want to actually understand what Hydrate does before they write or record about it. Everything you need - install commands, walk-through scenarios that demonstrate the value, and a path to the benchmark harness if you want to re-run our numbers - lives here. No NDA. No commitment to publish. No marketing approval.

1. Try it (5 minutes)

Hydrate ships as a single binary plus two Claude Code hooks. Install, register (locks the $5/mo Pro rate forever - see /beta), open Claude Code in any project, work for an hour. Then stop, restart, and watch the next session start with full context instead of a blank slate.

curl -fsSL gethydrate.dev/install | sh
hydrate register --edition=free [email protected]
hydrate doctor                # confirms hooks are wired

The Free tier has no feature restrictions; its only limit is a cap of 2 active projects. Pro features are unlocked free during beta and for 30 days after v1 launches.

2. Why bother testing this?

You're paying for re-exploration

Without memory, every Claude Code session burns tokens rediscovering the architecture you already explained last week. Hydrate measures and removes that waste. Open the dashboard at localhost:8089/dashboard and you can see the displaced exploration tool calls in your own ledger.

Cheap models flip to flagship behaviour

Our reproducible finding: Haiku 4.5 with canon written into CLAUDE.md resists architectural temptation as well as Opus 4.7 does natively, at ~10% of the cost. This is testable in 15 minutes with the walk-through below.

It runs entirely on your machine

No cloud service, no account, no telemetry on the Free tier. You can verify this with lsof and tcpdump - we explicitly want reviewers to check.
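As a first pass, something like the following works (we're assuming here that the process name is `hydrate`; adjust the grep if your binary runs under a different name):

```shell
# List open network sockets and flag any owned by the hydrate process.
# On the Free tier this should find nothing.
sockets=$(lsof -i -P -n 2>/dev/null | grep -i hydrate || true)
msg=${sockets:-"no hydrate network sockets found"}
echo "$msg"
```

For a stronger guarantee, leave a `tcpdump` capture running while you work and confirm nothing leaves the machine.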

The findings are unusually crisp

Most "AI memory" pitches are vibes. Ours have numbers attached: 7-of-7 vs 2-of-7 ship rates, 40% → 80% canon resistance, $0.20 per shipped session vs $1.64. /benchmarks has the full table.

3. Walk-through scenarios

Each scenario takes 10-30 minutes, costs under $5 in Anthropic API credit, and produces a visible "before vs after" you can show on camera or quote in writing. Pick one or run all three.

Scenario 1 · the "blank slate" demo

Show that Claude Code forgets between sessions

  1. Pick any small project. Open it in Claude Code.
  2. Ask Claude to add a feature that requires it to learn the codebase (e.g. "add a CSV export to the existing JSONL command"). Let it explore and ship.
  3. Quit Claude. Wait. Open a fresh session.
  4. Ask it to add a related feature ("now add a TSV export"). Watch it re-discover the same files with the same tool calls. Note the cost.
  5. Run hydrate sync --project=<your-project> to write canon to CLAUDE.md.
  6. Open another fresh session and ask the same question. Notice it skips the re-exploration entirely. Compare cost.

What this proves: the Hydrate hook + canon channel removes the "first 3 turns of every session are wasted" tax that vanilla Claude Code charges you.

Scenario 2 · the "Haiku acts like Opus" demo

Channel beats model tier

  1. Set up a project with a single architectural rule that's tempting to violate (e.g. "do NOT add ORMs - we use raw database/sql").
  2. Pin it: hydrate canon add --project=demo "do NOT add ORMs - use database/sql"
  3. Run hydrate sync --project=demo (writes canon into CLAUDE.md as project instructions, not soft context).
  4. Open Claude Code with Haiku 4.5 selected. Ask it: "add user CRUD - feel free to use whatever ORM you prefer."
  5. Watch Haiku refuse the temptation, citing the canon rule. Then compare against the same project with the canon delivered only via the UserPromptSubmit hook (a less authoritative channel): there, the temptation usually wins.

What this proves: for architectural authority, the channel matters more than the model tier. Junior models inherit Opus-tier discipline when the rule lives in the right place. The full benchmark for this is Scenario C on /benchmarks.

Scenario 3 · the runway KPI

How many extra days of coding did Hydrate buy you?

  1. Install Hydrate. Configure your subscription tier: hydrate plan set --tier=pro (or max-5x, max-20x, payg).
  2. Use Claude Code normally for a few days.
  3. Open http://localhost:8089/dashboard.
  4. Read the headline: "Hydrate gave you N extra days of coding this week." The number is computed from your own ledger - not a marketing estimate.

What this proves: the value is visceral and personal. Most users see ~1-3 extra days/week depending on workflow. If you're a Pro/Max user who hits weekly caps, this is the metric you'll feel.

4. Reproduce our benchmarks

The published numbers on /benchmarks (~30 million tokens, three scenarios, three models) come from a Go benchmark harness. The harness is now packaged as a downloadable press kit - source-available, no NDA, no commitment to publish.

Download the press kit

hydrate-press-kit-v1.tar.gz (~64 KB) SHA-256: 1dbc23f2c8c11cbbba6831cc51f72ddd8baec53790ccecf1a372de05cca535f1

Includes: bench-runner.sh wrapper · the lquery scenario (7 sessions, full driver + grader + canon + fixtures) · the gazetteer scenario · brownfield + team prompts · README · source-available licence. Unpacks to ~300 KB.
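To check your download against the published hash before unpacking, a sketch like this works (`sha256sum` is the GNU coreutils tool; on macOS substitute `shasum -a 256`):

```shell
# Compare the published SHA-256 against the file you downloaded.
# Prints a mismatch note if the file is absent or altered.
expected=1dbc23f2c8c11cbbba6831cc51f72ddd8baec53790ccecf1a372de05cca535f1
actual=$(sha256sum hydrate-press-kit-v1.tar.gz 2>/dev/null | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
  echo "checksum OK"
else
  echo "checksum mismatch (or file not downloaded yet)"
fi
```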

You don't need an Anthropic API key

Most readers see "API spend" figures and assume you need to load up a pay-as-you-go account. You don't. The harness runs claude --print like any other Claude Code session, and that respects whichever auth you already have: a pay-as-you-go API key, or an existing Claude subscription login (Pro/Max).

Token volumes published on /benchmarks apply to either auth path. The dollar column is what an API-key user would pay; subscription users see a comparable budget draw without an itemised bill.
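A sketch of the two auth paths (the key value below is a placeholder, not a real credential, and the `claude --print` call is just a smoke test):

```shell
# Path 1: pay-as-you-go — export an API key (placeholder value shown).
export ANTHROPIC_API_KEY="sk-ant-placeholder"

# Path 2: subscription — no env var needed; Claude Code uses your existing login.
# Either way, the harness's claude --print calls pick up the active credential:
claude --print "auth smoke test" 2>/dev/null || echo "claude CLI not on PATH here"
```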

Quick start

tar -xzf hydrate-press-kit-v1.tar.gz
cd hydrate-press-kit-v1
./bench-runner.sh check              # confirms Claude + Hydrate are wired

# One scenario, one model, both configs:
./bench-runner.sh lquery haiku-baseline claude-haiku-4-5 baseline
./bench-runner.sh lquery haiku-hydrate  claude-haiku-4-5 hydrate
./bench-runner.sh compare lquery haiku-baseline haiku-hydrate

One model × two configs across the lquery scenario is enough to see the headline finding. Estimated draw: ~6 million tokens (~$15 on a PAYG key, or roughly a meaningful chunk of a Pro week). The full matrix in the published table is ~30M tokens.

Want a personal walkthrough first?

If you'd rather poke at Hydrate without committing to a video or article, or you want help designing variant scenarios on your own codebase, drop us a line. We'd genuinely rather you try it, decide it's not for you, and tell us why - that's more useful than a friendly review.

Send a heads-up email

5. Assets you can use

Contact

Email: [email protected] - replies within 48h, usually same day.

Use this for benchmark-harness requests, briefings, embargo asks (we don't usually do them, but we'll entertain the request), and corrections. Direct technical questions can also go to [email protected].