Developer Guide

How OpenClaw Memory and Context Work: A Developer's Guide

OpenClaw agents remember context across sessions using memory files. Here's how memory works, how to configure it, and how to build agents that improve over time.

March 29, 2026 · 9 min read

One of the most common questions from developers setting up OpenClaw for the first time is: "How does my agent actually remember things?" The answer is both simpler and more powerful than most expect. OpenClaw doesn't rely on a hidden database or proprietary memory API. It uses files — structured, human-readable files that the agent reads at startup and writes to throughout its session.

This architecture makes OpenClaw memory transparent, debuggable, and completely under your control. It also means you can build agents that genuinely improve over time, not just agents that pretend to remember.


How Agent Memory Works

Every OpenClaw agent starts fresh when a new session begins. The underlying language model has no persistent state between conversations — that's just how LLMs work. What OpenClaw provides is the infrastructure to give agents artificial persistence through file-based context injection.

At session start, OpenClaw reads a set of configured files and injects their contents into the agent's system prompt or context window. The agent "knows" what's in those files because the text is literally present in its context. When the session ends, the agent can write updates back to those files. The next session reads those updates, and the cycle continues.
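The mechanism is simple enough to sketch. The snippet below is a minimal illustration of file-based context injection, not OpenClaw's actual loader — the function name, section format, and default limit are our assumptions:

```python
from pathlib import Path

def build_context(memory_files, max_chars=20_000):
    """Concatenate configured memory files into one context block.

    Illustrative sketch only: OpenClaw's real loader is internal, but the
    principle is the same -- read each file, truncate if oversized, and
    inject the result into the agent's system prompt.
    """
    sections = []
    for name in memory_files:
        path = Path(name)
        if not path.exists():
            continue  # missing files are skipped rather than treated as errors
        text = path.read_text(encoding="utf-8")
        if len(text) > max_chars:
            text = text[:max_chars]  # oversized files lose their tail
        sections.append(f"## {name}\n\n{text}")
    return "\n\n".join(sections)
```

Because the injected text is just concatenated file contents, anything the agent "remembers" can be reproduced by reading the same files yourself.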

This is what makes OpenClaw memory fundamentally different from systems that store embeddings in a vector database and retrieve them by similarity. OpenClaw memory is:

  • Deterministic: You know exactly what the agent knows because you can read the file
  • Editable: You can manually correct or update memories at any time
  • Portable: Move your memory files, move your agent's knowledge
  • Auditable: Every memory the agent has written is in plain text you can inspect

The tradeoff: you're working within the context window. Large memory files that exceed the window get truncated. OpenClaw handles this gracefully, but it's a real constraint to design around.


Types of Memory: Session vs Long-Term

OpenClaw memory falls into two categories with different purposes and different management strategies.

Session memory is what the agent accumulates within a single conversation. This is the normal conversation history — every message, every tool call, every response. It exists until the session ends. For short tasks, session memory is all you need.

Long-term memory is what survives session boundaries. This lives in files. OpenClaw provides several patterns for long-term memory:

Daily logs — files named by date (e.g., memory/2026-03-29.md) that the agent writes to throughout a session. Think of these as a work journal. What did the agent do today? What broke? What was decided? These accumulate over time and can be reviewed manually or summarized by the agent itself.

Curated memory — a single file (typically MEMORY.md) that the agent maintains as a distilled knowledge base. Unlike daily logs that grow indefinitely, curated memory gets actively managed — the agent reviews its logs, extracts what matters, and updates this file. This is the agent's "long-term memory" in the human sense.

Learnings files — per-agent files that capture task-specific patterns: what works, what doesn't, and recurring errors. For teams running multiple specialized agents, each agent maintains its own LEARNINGS.md.

Shared intelligence — files that multiple agents can read, used to share knowledge across the team. When one agent discovers something important, it writes to a shared file, and other agents pick it up on their next session start.
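Of these, the daily-log pattern is the easiest to see in code: it amounts to appending to a date-named file. The helper below is hypothetical (in practice the agent does this through its file-write tools), but it shows the naming convention:

```python
from datetime import date
from pathlib import Path

def append_daily_log(entry, memory_dir="memory"):
    """Append one entry to today's date-named log, e.g. memory/2026-03-29.md.

    Hypothetical helper for illustration; real OpenClaw agents write these
    files themselves via their tool calls.
    """
    log_dir = Path(memory_dir)
    log_dir.mkdir(exist_ok=True)
    log_file = log_dir / f"{date.today().isoformat()}.md"
    with log_file.open("a", encoding="utf-8") as f:
        f.write(f"- {entry}\n")
    return log_file
```

Appending (rather than overwriting) is what makes the log a faithful journal: entries accumulate in order, and nothing earlier in the day gets clobbered.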


Configuring Memory Files

OpenClaw memory configuration lives in your AGENTS.md file — the master instruction document that every agent reads at startup. Within AGENTS.md, you define which files agents should load, when, and how.

The standard memory configuration pattern:

## Session Startup

Before doing anything else:
1. Read SOUL.md — this is who you are
2. Read USER.md — this is who you're helping  
3. Read memory/YYYY-MM-DD.md (today + yesterday)
4. Read MEMORY.md (in main sessions only)
5. Read intelligence/shared-learnings.md
6. Read agents/[your-name]/LEARNINGS.md

This pattern gives agents four layers of context:

  1. Identity: Who the agent is, its operating principles, its role
  2. User context: Who it's working with, preferences, history
  3. Recent events: What happened in the last 1-2 days
  4. Accumulated knowledge: Patterns and lessons from the agent's full history

The key configuration decision: which files load in which contexts. MEMORY.md — which may contain sensitive personal data — should only load in authenticated main sessions, not in public or multi-user contexts. AGENTS.md lets you specify this explicitly.
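That context-gating decision can be modeled as a simple policy table. In OpenClaw you express this in AGENTS.md prose rather than code, so treat the session-type names and file lists below as illustrative assumptions:

```python
# Hypothetical load policy keyed by session type. "main" is an
# authenticated single-user session; anything else gets the restricted set.
LOAD_POLICY = {
    "main": ["SOUL.md", "USER.md", "MEMORY.md",
             "intelligence/shared-learnings.md"],
    "public": ["SOUL.md", "intelligence/shared-learnings.md"],
}

def files_for_session(session_type):
    """Resolve which memory files a session may load.

    Unknown session types fall back to the restrictive public set, so a
    misconfigured context never leaks MEMORY.md by default.
    """
    return LOAD_POLICY.get(session_type, LOAD_POLICY["public"])
```

The design choice worth copying is the fail-closed default: sensitive files load only where explicitly allowed.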


Context Window Management

OpenClaw memory files inject content into the context window. Context windows are finite. This tension is the central design challenge in OpenClaw context management.

Practical guidelines:

Keep MEMORY.md lean. This file gets loaded every session. If it grows to 50,000 tokens, you're burning most of your context budget before the agent does any work. Aim for under 5,000 tokens. Force the agent to regularly prune stale information.

Use daily logs for raw data, MEMORY.md for distilled insights. Daily logs can grow large because they're date-partitioned — you only load the last 1-2 days. MEMORY.md is always fully loaded, so it must stay compact.

Set explicit character limits. OpenClaw supports per-file limits via the bootstrapMaxChars setting in the bootstrap configuration. Use them. A LEARNINGS.md file that grows to 100KB will eventually cause problems.

Design for truncation. AGENTS.md mentions that files can be truncated when they exceed limits. Write your memory files so that the most important information appears first — critical rules and recent events at the top, historical context at the bottom.
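Ordering matters because truncation keeps the head of the file and drops the tail. A minimal sketch of that behavior, cutting at a line boundary so no entry is split mid-sentence (the function is ours, not an OpenClaw API):

```python
def truncate_memory(text, max_chars):
    """Keep the head of a memory file, cutting at the last full line.

    Illustrative sketch of head-first truncation: everything past the
    character limit is dropped, which is why critical rules belong at
    the top of the file.
    """
    if len(text) <= max_chars:
        return text
    cut = text.rfind("\n", 0, max_chars)  # last line break within the budget
    return text[: cut if cut != -1 else max_chars]
```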

For OpenClaw context management, the goal is signal density. Every token in the context window should be earning its place.


Building Agents That Learn

The most powerful OpenClaw memory pattern is the self-improving agent loop. Here's how it works in practice:

Step 1: Capture during sessions. The agent writes to its daily log file whenever it makes a significant decision, encounters an error, or learns something. Not at session end — immediately, while the context is fresh. Daily logs that only get written at 2am contain half the information of logs written throughout the day.

Step 2: Consolidate periodically. On a schedule (weekly works well), the agent reviews its recent daily logs and updates its curated MEMORY.md and LEARNINGS.md with patterns worth keeping. This is the "1% improvement" loop — every consolidation run should leave the agent slightly more capable.

Step 3: Apply on startup. At the next session, the agent reads its updated files and starts with the benefit of everything it learned. No human intervention required.
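The three steps above can be sketched as a toy consolidation pass. A real agent would summarize its logs with the model; here we simply grep for a tag, and every name (the memory/ layout, the LEARNING: marker, the function itself) is an assumption for illustration:

```python
from datetime import date, timedelta
from pathlib import Path

def consolidate(memory_dir="memory", out="MEMORY.md", days=7, tag="LEARNING:"):
    """Pull tagged lines from the last week's daily logs into the curated file.

    Toy version of the consolidation step: a production agent would have
    the model distill the logs rather than match a literal tag.
    """
    kept = []
    for offset in range(days):
        day = date.today() - timedelta(days=offset)
        log = Path(memory_dir) / f"{day.isoformat()}.md"
        if log.exists():
            kept += [line for line in log.read_text(encoding="utf-8").splitlines()
                     if tag in line]
    if kept:
        with open(out, "a", encoding="utf-8") as f:
            f.write("\n".join(kept) + "\n")  # append, never overwrite history
    return kept
```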

The result: an agent that gets meaningfully better over weeks and months. Not because the model changed, but because the knowledge it operates from gets progressively more refined.

This pattern is what separates a stateless AI assistant from an actual AI employee.


Common Memory Patterns

Patterns that work well in production OpenClaw deployments:

The SOUL pattern: A SOUL.md file defines the agent's identity, operating principles, and decision-making framework. This is read every session and never modified by the agent. It's the stable foundation everything else builds on.

The specialist pattern: Each agent in a multi-agent team has its own directory under agents/ with its own SOUL.md, LEARNINGS.md, and intelligence folder. Agents share context through a common intelligence/shared-learnings.md but maintain private specialist knowledge.

The heartbeat pattern: A HEARTBEAT.md file contains a short checklist of things to check on a recurring basis — emails, calendars, system health, project status. The agent reads this file on each heartbeat poll and acts on what it finds. Keeps agents proactive without requiring explicit prompting.

The quality gate pattern: AGENTS.md contains explicit quality standards — banned phrases, required elements, review checklists. Agents read these on startup and apply them to every output. No separate QA prompt needed; the standards are baked into the agent's operating context.


Debugging Memory Issues

When an agent "forgets" something it should know, the debugging process is straightforward because memory is just files:

Check what was actually written. Open the relevant memory file and look for the entry. Did the agent write it? Is it in the right file? Did it end up in a daily log that's no longer being loaded?

Check the bootstrap injection. Is the file listed in AGENTS.md as a file to load? Check the session startup instructions. If a file isn't in the startup sequence, it won't be loaded.

Check for truncation. If a file is very large, it may be getting truncated before the relevant information. Use the bootstrapMaxChars setting to increase limits, or move important information to the top of the file.

Check timing. Did the agent write the memory after the relevant event, or was the session terminated before it had a chance to write? AGENTS.md should be explicit: write to the daily log immediately after significant events, not at session end.

Inspect the raw injection. OpenClaw can show you exactly what was injected into the agent's context at startup. Use this to verify that memory files are being loaded with the right content.

The golden rule: if it's not in a file that gets loaded, the agent doesn't know it. Debugging is always a matter of tracing what files are loaded, what they contain, and whether they're being written to correctly.
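That tracing can be automated. The audit below checks each configured file for the three failure modes above — missing, present, or over its limit and therefore truncated. It's a hypothetical diagnostic, not an OpenClaw command:

```python
from pathlib import Path

def audit_memory(files, limits=None):
    """Report each configured memory file as missing, truncated, or loaded.

    Hypothetical debugging aid: 'truncated' means the file exceeds its
    configured character limit and will lose its tail at injection time.
    """
    limits = limits or {}  # optional per-file size limits, in bytes
    report = {}
    for name in files:
        path = Path(name)
        if not path.exists():
            report[name] = "missing"
        elif name in limits and path.stat().st_size > limits[name]:
            report[name] = "truncated"
        else:
            report[name] = "loaded"
    return report
```

Run against your startup file list, this turns "the agent forgot" into a concrete answer: the file was never written, never listed, or too big.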


MrDelegate runs on OpenClaw with battle-tested memory patterns honed across hundreds of agent sessions. See our managed hosting plans →