Launching soon — join the early access list

The memory layer
your agents don't have yet.

Persistent context, searchable history, entity tracking, and decision recall — across every session. One API. Two function calls. Ship agents that remember.

agent.py
from codeclaw import Memory

mem = Memory(api_key="cc_live_...")

# After each agent turn → store what matters
mem.store(
    user="u_0x7a3",
    content="Budget approved at $40k. Chose Vendor B. Prefers async updates.",
    entities=["project_atlas", "vendor_b"]
)

# Before each turn → recall what's relevant
ctx = mem.recall(query="What vendor did they choose?", user="u_0x7a3")
# → "Chose Vendor B." (4 sessions ago · confidence: 0.97)

Works with your stack

OpenAI Anthropic LangChain OpenClaw LlamaIndex CrewAI AutoGen
The problem

Agents have no memory. You keep working around it.

You've built the reasoning, the tool calls, the chains. But your agent still forgets who it's talking to after every session. You patch it with JSON dumps, oversized system prompts, and raw vector queries. It doesn't scale. It breaks in production.

🧹

Context dies at session end

The user explained their project scope on Monday. By Wednesday the agent asks again. Users lose patience. They stop using it — not because the reasoning is bad, but because the memory is non-existent.

🕳️

No trace of what the agent knew

It recommended Plan A last week, Plan B today. What changed? What context did it use? There's no audit trail. You can't debug a decision you can't inspect. Production agents need accountability.

🏗️

Building memory is a tarpit

You've tried: vector DB + embedding pipeline + retrieval logic + per-user scoping + TTL policies. Three weeks in and it still surfaces stale context. This is infrastructure work — and it's not your product.

The solution

Memory infrastructure. Not another vector DB.

CodeClaw is the layer between your agent and durable recall. It handles storage, retrieval, scoping, entity linking, and temporal ranking. You call two functions. Your agent remembers everything.

🔄

Persistent across sessions

Memory survives session boundaries, restarts, and deploys. Your agent picks up where the user left off — days or months later. No context window tricks.

🔍

Semantic retrieval

Ranked by meaning, recency, and confidence — not keyword match. The right context at the right time. Nothing stale. Nothing irrelevant.

🏷️

Entity graph

Memories link to people, projects, accounts, and decisions automatically. Query by entity: "What does the agent know about Project Atlas?" Get a structured answer.
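A toy sketch of what entity-linked recall looks like in practice (illustrative only, not the real entity graph or SDK; the field names are assumptions): each memory carries entity tags, so "what does the agent know about Project Atlas?" becomes a lookup over linked memories.

```python
# Illustrative stand-in for entity-linked recall. In CodeClaw the linking
# is automatic; here the tags are attached by hand to show the idea.
memories = [
    {"content": "Budget approved at $40k.", "entities": ["project_atlas"]},
    {"content": "Chose Vendor B.",          "entities": ["project_atlas", "vendor_b"]},
    {"content": "Client prefers net-60.",   "entities": ["client_acme"]},
]

def known_about(entity):
    # Everything the agent has stored that is linked to this entity.
    return [m["content"] for m in memories if entity in m["entities"]]

assert known_about("project_atlas") == ["Budget approved at $40k.", "Chose Vendor B."]
```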

📋

Decision audit trail

Every decision logged with source context and timestamp. When someone asks "why did it say that?" — you have the answer. Inspectable in the dashboard or via API.
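The shape of an audit record can be sketched like this (a toy example; the field names are illustrative, not the CodeClaw schema): every decision carries the context it was based on plus a timestamp, so "why did it say that?" has a concrete answer later.

```python
# Toy decision-audit record: decision + source context + timestamp.
from datetime import datetime, timezone

audit_log = []

def log_decision(decision, source_context):
    audit_log.append({
        "decision": decision,
        "source_context": source_context,   # what the agent knew at the time
        "at": datetime.now(timezone.utc).isoformat(),
    })

log_decision("Recommended Plan B",
             source_context=["Budget approved at $40k.", "Chose Vendor B."])
assert audit_log[0]["decision"] == "Recommended Plan B"
assert "Chose Vendor B." in audit_log[0]["source_context"]
```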

🔐

Scoped by default

Memory isolated per user, workspace, and tenant. Architecturally enforced — not query-filtered. Zero cross-contamination from the first API call.
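Architectural isolation, as opposed to query filtering, can be pictured with a toy in-memory store (not the CodeClaw SDK): each user's memories live in their own partition, so a recall for one user can never touch another user's data, no matter what the query says.

```python
# Toy per-user scoping: lookups only ever touch the caller's partition.
from collections import defaultdict

class ScopedStore:
    def __init__(self):
        self._by_user = defaultdict(list)  # user id -> that user's memories

    def store(self, user, content):
        self._by_user[user].append(content)

    def recall(self, user):
        # No filter clause to get wrong: other users' data is unreachable.
        return list(self._by_user[user])

mem = ScopedStore()
mem.store("u_alice", "Budget approved at $40k.")
mem.store("u_bob", "Prefers net-60 terms.")
assert mem.recall("u_alice") == ["Budget approved at $40k."]
assert "Prefers net-60 terms." not in mem.recall("u_alice")
```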

<50ms retrieval

REST API. Python and JS SDKs. Two calls: store() and recall(). p95 under 50ms. Works with any model, any framework, any agent architecture.

How it works

Two function calls. One afternoon to integrate.

No infrastructure to provision. No embedding pipeline to build. Install the SDK, add store() and recall(), deploy. Your agent has memory.

1

Install

pip install codeclaw
One package. Zero config.

2

Store

After each turn, call mem.store() with user, content, entities.

3

Recall

Before each turn, call mem.recall() to inject relevant history.

4

Inspect

Use the dashboard to view, search, and debug every stored memory.
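The store/recall loop from the steps above can be sketched end to end. This uses a stub in place of the real SDK and LLM (StubMemory and the reply format are stand-ins; only the store()/recall() call shape mirrors the hero snippet):

```python
# Minimal sketch of wiring recall-before-turn and store-after-turn
# around an agent. StubMemory fakes the SDK surface; the real recall
# is semantic, the stub just returns the latest memory for the user.
class StubMemory:
    def __init__(self):
        self._log = []

    def store(self, user, content, entities=None):
        self._log.append({"user": user, "content": content,
                          "entities": entities or []})

    def recall(self, query, user):
        mine = [m["content"] for m in self._log if m["user"] == user]
        return mine[-1] if mine else ""

def agent_turn(mem, user, message):
    ctx = mem.recall(query=message, user=user)   # step 3: recall before the turn
    reply = f"[context: {ctx}] ack: {message}"   # stand-in for the LLM call
    mem.store(user=user, content=message)        # step 2: store after the turn
    return reply

mem = StubMemory()
agent_turn(mem, "u_0x7a3", "Budget approved at $40k.")
reply = agent_turn(mem, "u_0x7a3", "What was the budget?")
assert "Budget approved at $40k." in reply
```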

Your Agent

Any LLM · Any framework

CodeClaw API

store() · recall() · search()

Memory Store

Indexed · Scoped · Persistent

Your agent calls the API. CodeClaw handles indexing, retrieval, scoping, and ranking. You inspect via dashboard.

Core capabilities

Everything agents need for durable recall.

🧠

Long-term memory

Recall from days, weeks, or months ago

🔎

Semantic search

Find any memory by meaning

🔗

Entity tracking

People, projects, decisions — linked

📊

Decision logs

Full audit trail for every action

🛡️

Tenant isolation

Scoped per user by default

Dashboard

See what your agent remembers.

A lightweight dashboard for visibility and debugging. Search memories, inspect entity graphs, view decision timelines, and trace what context was used in any recall. Built for developers who need to understand what their agent knows.

🔍

Search all memories

Filter by user, entity, time range, or content. Find exactly what the agent stored and when. Semantic and keyword search.

🏷️

Entity inspector

See every memory linked to a specific person, project, or account. Understand what your agent knows about each entity across sessions.

📋

Decision timeline

Trace how agent decisions evolved over time. View what context was retrieved, what was stored, and what changed between sessions.

Use cases

Built for agents in production.

If your agent talks to the same user more than once, it needs memory. These are real scenarios where teams integrate CodeClaw today.

Customer-facing agents

Support that remembers the full conversation

A user says "I already explained this." Your agent pulls up the prior thread, the resolution, and the pending follow-up. No re-explaining. Resolution time drops. CSAT goes up.

Internal workflow agents

Automations that learn from past runs

Your ops agent remembers that deploys to staging fail on Fridays. Your procurement agent recalls the client prefers net-60 terms. Agents that have seen the pattern act on it.

Founder / personal copilots

An assistant that gets better every week

Remembers your OKRs, your last investor call, that you moved launch to Q2, that you hired for the design role. Every conversation builds on the last because nothing gets lost.

Early access

Launching soon. Get in early.

We're onboarding the first cohort of builders. Leave your email — you'll get early API access, free usage during beta, and a direct line to the team.

Join the waitlist

Be the first to give your agents memory.


No spam. Early access + product updates only.

⚡ Early API access

First in line when we open the API. Start building before everyone else.

🎁 Free during beta

Full access at no cost while we're in beta. No credit card needed.

💬 Direct feedback line

Shape the product. Early builders get a direct channel to the team.

FAQ

Questions builders ask

How is this different from a vector database?
A vector DB gives you similarity search on embeddings. One layer of a much larger problem. CodeClaw handles everything above it: memory scoping per user and session, entity resolution and linking, temporal ranking so recent context outweighs stale data, decision logging with source attribution, and retrieval logic that determines when to surface memory — not just what matches a query. You call two functions. We handle the rest.
What models and frameworks does it support?
All of them. CodeClaw is model-agnostic. It works with GPT-4o, Claude, Llama, Mistral, Gemini — whatever you're running. It integrates directly with LangChain, LlamaIndex, OpenClaw, CrewAI, and AutoGen, or you can use the REST API and Python/JS SDKs with any custom agent architecture.
How fast is retrieval?
p95 under 50ms. Memories are indexed at write time for semantic search, so recall() is fast even with hundreds of thousands of stored memories. No cold start. No batch processing. Context is available the moment your agent needs it.
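"Indexed at write time" can be illustrated with a toy example (a keyword index standing in for the real semantic index, which this is not): the costly step runs inside store(), so recall() is a cheap lookup rather than a scan.

```python
# Toy write-time indexing: tokenize on store(), look up on recall().
from collections import defaultdict

index = defaultdict(list)   # token -> memory ids
memories = []

def store(content):
    mid = len(memories)
    memories.append(content)
    for tok in content.lower().split():
        index[tok].append(mid)          # indexed as it is written

def recall(query):
    hits = {m for tok in query.lower().split() for m in index.get(tok, [])}
    return [memories[m] for m in sorted(hits)]

store("Chose Vendor B for Project Atlas.")
store("Budget approved at $40k.")
assert recall("vendor") == ["Chose Vendor B for Project Atlas."]
```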
How long does integration take?
Most teams go from install to production in an afternoon. pip install codeclaw, add mem.store() after each turn, add mem.recall() before each turn. No embedding pipeline. No retrieval tuning. No infrastructure to manage.
Is memory isolated between users?
Yes. Architecturally enforced, not just filtered at query time. Every memory is scoped to a user and workspace with zero cross-tenant access. Business plans add custom data residency and VPC options for teams that require full data sovereignty.
What is the dashboard for?
The dashboard is for visibility and debugging — not running your agents. Use it to search stored memories, inspect entity relationships, view decision timelines, and trace what context was retrieved on any given recall. It's a debugging tool, not a control panel. The API is the product.
Do you use our data to train models?
No. Your memory data is yours. We never use it for training, fine-tuning, benchmarking, or any purpose beyond serving it back to your agents. All data encrypted at rest and in transit. Security documentation available on request.
Can I self-host?
Not yet. CodeClaw is currently hosted. Business plans include VPC deployment and custom data residency. If you have strict requirements around data location, reach out — we'll work with you on a deployment model that fits.

Your agent is one integration away
from remembering everything.

We're launching soon. Get early access and start building before everyone else.