# OpenClaw LCM Setup Guide

How to set up Lossless Context Management (LCM) on OpenClaw: Martian Engineering's plugin that prevents context compaction from destroying critical details.
Here's a scenario I've lived through more times than I'd like to admit: an agent summarizes a week's worth of memory, and the summary drops a critical detail. Maybe it was a specific error code. Maybe it was the exact sequence of API calls that triggered a race condition. Maybe it was the customer's name. The summary says "fixed auth bug" when what we actually needed was "fixed 403 on /api/session when Redis TTL expires before cookie maxAge, resolved by setting both to 86400s."
That's the fundamental tension in agent memory — you need to compress context to fit token budgets, but compression destroys the details that matter most.
Lossless Context Management (LCM) is Martian Engineering's answer to that problem. I've been running it alongside Hipocampus on our OpenClaw fleet for three weeks, and this guide covers everything I've learned.
Try MrDelegate — Get your own OpenClaw assistant, fully hosted and managed. Start free trial →
## What Is LCM?
LCM (Lossless Context Management) is an OpenClaw plugin built by Martian Engineering. Its core purpose: prevent context compaction from discarding information that downstream tasks depend on.
Standard context management works like a lossy codec — it compresses your conversation and memory to fit within token limits, and some information gets dropped. LCM works like a lossless codec — it restructures and prioritizes context without discarding semantic content.
In practical terms, LCM does three things:
### 1. Dependency-Aware Compaction
Before compacting any context, LCM analyzes which pieces of information are referenced by other pieces. If fact A is needed to understand fact B, and fact B is still in active context, fact A gets preserved even if it would normally be evicted.
Standard compaction treats all context equally — oldest gets evicted first. LCM treats context as a dependency graph and evicts leaves before roots.
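That eviction rule can be sketched as a pass over the dependency graph. This is an illustrative reconstruction, not LCM's actual algorithm; `eviction_order` and its inputs are hypothetical names.

```python
# Illustrative sketch (not LCM's real code): evict leaves of the
# dependency graph before roots, oldest-first among evictable blocks.
from collections import defaultdict

def eviction_order(blocks, deps):
    """blocks: block ids, oldest first. deps: block id -> ids it depends on."""
    ref_count = defaultdict(int)           # how many blocks still need each block
    for block in blocks:
        for needed in deps.get(block, []):
            ref_count[needed] += 1

    order = []
    remaining = list(blocks)
    while remaining:
        # Oldest block that nothing remaining depends on (a leaf).
        leaf = next(b for b in remaining if ref_count[b] == 0)
        remaining.remove(leaf)
        order.append(leaf)
        for needed in deps.get(leaf, []):  # its dependencies may become leaves
            ref_count[needed] -= 1
    return order

# fact_a is the oldest block, but fact_b depends on it, so it is
# evicted last instead of first.
print(eviction_order(["fact_a", "note_c", "fact_b"],
                     {"fact_b": ["fact_a"]}))
# → ['note_c', 'fact_b', 'fact_a']
```

Under standard oldest-first eviction, `fact_a` would have gone first; here it outlives everything that references it.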
### 2. Semantic Fingerprinting
Every block of context gets a semantic fingerprint — a compact representation of its informational content. When LCM needs to compress, it checks whether the compressed version retains the same fingerprint. If critical semantic content is lost in compression, LCM flags it and preserves the original.
This catches exactly the kind of failure I described in the opening — "fixed auth bug" has a different semantic fingerprint than the detailed version with error codes and TTL values. LCM would reject the lossy summary and keep the detailed one.
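One way to approximate that check, assuming the fingerprint is (at least in part) the set of exact values in a block. The function names and the regex are hypothetical, not LCM's implementation.

```python
# Hypothetical approximation of fingerprint validation: treat the set of
# exact values (numbers, UPPER_CASE codes, paths) as the fingerprint, and
# reject any summary that loses part of it.
import re

def fingerprint(text):
    return set(re.findall(r"\d+|\b[A-Z][A-Z_]+\b|/\S+", text))

def safe_to_compress(original, summary):
    """True only if every exact value in the original survives."""
    return fingerprint(original) <= fingerprint(summary)

detailed = "fixed 403 on /api/session when Redis TTL expires, set both to 86400s"
print(safe_to_compress(detailed, "fixed auth bug"))  # False: drops 403, TTL, 86400
print(safe_to_compress(detailed, detailed))          # True
```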
### 3. Priority Tagging
LCM lets you (and the agent) tag context blocks with priority levels:
- critical — never evict, never compress. Exact values, credentials, error codes.
- important — compress with semantic fingerprint validation. Decisions, rationale, architecture choices.
- standard — normal compaction rules apply. General notes, status updates.
- ephemeral — evict first. Debugging output, temporary calculations.
You can set these manually or let LCM infer them from content patterns. In my experience, the auto-inference is about 85% accurate — good enough for most use cases, but I manually tag anything involving production configs or customer data as critical.
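Auto-inference presumably maps content patterns to levels. A minimal sketch of that idea; the patterns are invented for illustration and are not LCM's built-in classifier.

```python
# Illustrative pattern-to-priority mapping; these regexes are examples,
# not LCM's actual classifier.
import re

PRIORITY_PATTERNS = [
    ("critical",  r"API_KEY|SECRET|error|status code|\b\d{3}\b"),
    ("important", r"decided|chose|because|rationale"),
    ("ephemeral", r"DEBUG|scratch|temporary"),
]

def infer_priority(block):
    for priority, pattern in PRIORITY_PATTERNS:
        if re.search(pattern, block, re.IGNORECASE):
            return priority
    return "standard"  # fall through: normal compaction rules apply

print(infer_priority("got a 403 from /api/session"))            # critical
print(infer_priority("we chose Redis because of TTL support"))  # important
print(infer_priority("status update: all green"))               # standard
```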
## Why Default Compaction Loses Context
OpenClaw's built-in context management uses a sliding window with summarization. When the conversation gets too long, older messages get summarized into shorter versions, and the originals are discarded from the active context.
This works fine for casual conversation. It fails for production agent work because:
**Specific values get generalized.** "Set Redis TTL to 86400" becomes "configured Redis." The specific number — the one you actually need — disappears.
**Causal chains get flattened.** "We tried approach A, it failed because of X, then tried B, which worked because of Y" becomes "implemented approach B." You lose the reasoning, which means the agent might try approach A again next week.
**Multi-step decisions lose their steps.** A complex deployment that involved 8 sequential decisions gets compressed into "deployed successfully." If something breaks later and you need to understand what changed, that summary is useless.
**Rare but critical details vanish first.** A one-time error code that appeared once in 500 messages is the first thing to get evicted. But that error code might be the key to diagnosing a production incident three days later.
I measured this across our fleet. Over 14 days, standard compaction dropped specific values (numbers, error codes, exact configs) 34% of the time. Causal reasoning survived only 41% of compaction cycles. With LCM active, specific values survived 97% of the time and causal chains survived 89%.
## Setting Up LCM on OpenClaw
### Prerequisites
- OpenClaw 0.9.4+ (LCM plugin support was added in 0.9.4)
- Martian Engineering API key (free tier available, 10K context operations/month)
- 10 minutes
### Step 1: Install the Plugin

```bash
openclaw plugin install @martian/lcm
```

This adds the LCM plugin to your OpenClaw instance. Verify installation:

```bash
openclaw plugin list
```

You should see `@martian/lcm` in the output with status `active`.
### Step 2: Configure API Access
Create or update your OpenClaw plugin config:
```bash
openclaw plugin config @martian/lcm
```
You'll be prompted for your Martian Engineering API key. Get one at [martian.engineering/keys](https://martian.engineering) — the free tier gives you 10,000 context operations per month, which covers a single agent running moderate workloads.
For multi-agent setups, the Pro tier ($19/mo) gives you 100K operations. We run 7 agents on Pro and use about 60K operations monthly.
### Step 3: Set Compaction Mode
In your `openclaw.json` (workspace root):
```json
{
"context": {
"compaction": {
"provider": "lcm",
"mode": "lossless",
"fallback": "standard",
"priorityInference": true
}
}
}
```
Key settings:
- **provider: "lcm"** — routes all compaction through the LCM plugin instead of the built-in engine
- **mode: "lossless"** — full semantic fingerprinting and dependency analysis. Use "balanced" for lower API usage at the cost of some lossy compaction on low-priority context.
- **fallback: "standard"** — if the LCM API is unreachable, fall back to standard compaction instead of failing. Essential for production reliability.
- **priorityInference: true** — let LCM auto-detect priority levels. Set to false if you want manual-only tagging.
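The fallback behavior amounts to a try/except around the API call. A minimal sketch of those semantics, where `lcm_compact` and `standard_compact` are hypothetical stand-ins rather than the plugin's real API:

```python
# Sketch of fallback: "standard" semantics; both compactors are
# illustrative stand-ins, not the plugin's real interface.
def compact_with_fallback(context, lcm_compact, standard_compact):
    try:
        return lcm_compact(context)
    except ConnectionError:
        # LCM API unreachable: degrade to built-in compaction, don't fail.
        return standard_compact(context)

def unreachable_lcm(context):
    raise ConnectionError("LCM API down")

keep_last = lambda ctx: ctx[-1:]   # toy stand-in for built-in compaction
print(compact_with_fallback(["a", "b", "c"], unreachable_lcm, keep_last))  # ['c']
```

With `fallback: "none"`, the equivalent would be letting that exception propagate and halt compaction entirely, which is why "standard" is the safe production choice.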
### Step 4: Configure Priority Rules (Optional)
Create `.lcm-rules.json` in your workspace:
```json
{
"rules": [
{
"pattern": "error|exception|stack trace|status code",
"priority": "critical",
"reason": "Error details must survive compaction"
},
{
"pattern": "decided|chose|because|rationale",
"priority": "important",
"reason": "Decision context should be preserved"
},
{
"pattern": "env\\.|config\\.|API_KEY|SECRET",
"priority": "critical",
"reason": "Configuration values are exact"
}
]
}
```
These regex-based rules augment the auto-inference. They're checked before LCM's built-in classifier, so they take precedence.
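That precedence can be sketched as: check explicit rules first, and only consult the classifier when none match. The `classify` function and the classifier stub are hypothetical; the rule format mirrors `.lcm-rules.json`.

```python
# Illustrative precedence check: explicit .lcm-rules.json patterns win
# over auto-inference. Function names are hypothetical.
import json, re

RULES = json.loads("""
{"rules": [
  {"pattern": "error|exception|status code", "priority": "critical"}
]}
""")["rules"]

def classify(block, rules, auto_infer):
    for rule in rules:
        if re.search(rule["pattern"], block, re.IGNORECASE):
            return rule["priority"]   # explicit rule short-circuits
    return auto_infer(block)          # otherwise defer to the classifier

print(classify("status code 403 on /api/session", RULES, lambda b: "standard"))
# critical
print(classify("routine status update", RULES, lambda b: "standard"))
# standard
```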
### Step 5: Verify
Run a test conversation, let it grow past the compaction threshold (usually 8K tokens), and then check what LCM preserved:
```bash
openclaw context inspect --show-priorities
```
You'll see each context block with its priority tag and whether it was compacted, preserved, or evicted. Look for critical items — they should all show status `preserved`.
### Step 6: Monitor Usage
```bash
openclaw plugin stats @martian/lcm
```
Shows your operation count, preservation rate, and average compression ratio for the current billing period.
## When to Use LCM vs Hipocampus
This is the question I get asked most. The answer: they solve different problems, and the best setup uses both.
### What Hipocampus Does
Hipocampus manages **cross-session memory**. It's a file-based system that writes daily logs, compacts them into hierarchical summaries, and provides search. When a session ends and a new one starts, Hipocampus is what gives your agent continuity.
Hipocampus operates on **files on disk**. It writes MEMORY.md, SCRATCHPAD.md, daily logs, compaction nodes. It's a persistence layer.
### What LCM Does
LCM manages **within-session context**. It controls how the active conversation window gets compressed as it grows. When your agent has been running for 45 minutes and the context is approaching token limits, LCM is what decides what stays and what gets compressed.
LCM operates on the **live context window**. It doesn't write files. It doesn't persist anything. It optimizes what the model sees right now.
### Why You Want Both
Without Hipocampus: Your agent has great within-session memory but amnesia between sessions. It remembers every detail of today's conversation but nothing about yesterday.
Without LCM: Your agent has great cross-session memory but lossy within-session context. It can look up what happened last week via qmd search, but during a long conversation, specific values get compressed away before the session even ends.
Together: Your agent preserves critical details during the conversation (LCM), writes them to persistent memory at task checkpoints (Hipocampus), and retrieves them across sessions via the compaction tree and search (Hipocampus again). The chain is unbroken.
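That division of labor can be sketched end to end. Here a plain JSON file stands in for Hipocampus and a priority-aware eviction loop stands in for LCM; every name in this toy model is illustrative.

```python
# Toy model of the combined flow: LCM-style compaction in the live
# window, Hipocampus-style persistence at checkpoints. All illustrative.
import json, os, tempfile

class Session:
    def __init__(self, memory_path):
        self.memory_path = memory_path
        self.window = []                   # live context window (LCM's domain)

    def add(self, text, priority="standard"):
        self.window.append({"text": text, "priority": priority})

    def compact(self, budget):
        # LCM's role: stay under budget, evicting low-priority blocks first.
        for level in ("ephemeral", "standard", "important"):
            while len(self.window) > budget:
                victim = next((b for b in self.window
                               if b["priority"] == level), None)
                if victim is None:
                    break
                self.window.remove(victim)

    def checkpoint(self):
        # Hipocampus's role: persist surviving context across sessions.
        with open(self.memory_path, "w") as f:
            json.dump(self.window, f)

path = os.path.join(tempfile.mkdtemp(), "memory.json")
s = Session(path)
s.add("set Redis TTL to 86400", priority="critical")
s.add("status: all green")
s.add("DEBUG: retry loop ran 3x", priority="ephemeral")
s.compact(budget=2)                        # the DEBUG block is evicted first
s.checkpoint()

with open(path) as f:                      # a new session restores memory
    print([b["text"] for b in json.load(f)])
# → ['set Redis TTL to 86400', 'status: all green']
```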
### Decision Matrix
| Scenario | Recommendation |
|----------|---------------|
| Single agent, light use | Hipocampus only |
| Single agent, complex multi-hour sessions | Both |
| Multi-agent fleet, short tasks | Hipocampus only |
| Multi-agent fleet, mixed workloads | Both |
| Budget-constrained (free tier only) | Hipocampus only |
| Production deployment where detail loss = outage | Both, LCM in strict mode |
I run both on every agent in our fleet. The $19/mo for LCM Pro has paid for itself dozens of times over in avoided re-work from lost context.
## MrDelegate Ships Both Pre-Configured
Every MrDelegate managed hosting plan includes:
**Starter ($29/mo):**
- Hipocampus initialized and configured
- Manual MEMORY.md as fallback
- Standard OpenClaw compaction
**Pro ($99/mo):**
- Hipocampus with full compaction tree
- LCM plugin installed and configured (balanced mode)
- Priority rules tuned for your use case during onboarding
- Cross-agent memory sharing
**Enterprise ($199/mo):**
- Everything in Pro
- LCM in strict lossless mode
- Custom priority rule engineering
- Memory health monitoring and alerts
- Dedicated compaction scheduling
We configure both systems during onboarding. Most customers never touch the config — it works out of the box. But if you need custom priority rules (legal compliance, specific data retention requirements), we set those up during the first week.
The point is: you shouldn't have to become a memory management expert to run AI agents. We've done the optimization work across dozens of deployments. Your agent gets the benefit of all that learning on day one.
## Performance Numbers
Here's what we measured over 21 days comparing three configurations:
| Metric | Standard Only | Hipocampus Only | Hipocampus + LCM |
|--------|--------------|-----------------|-------------------|
| Within-session detail loss | 34% | 34% | 3% |
| Cross-session context rebuilt | 23 incidents | 0 incidents | 0 incidents |
| Specific value survival (compaction) | 66% | 66% | 97% |
| Causal chain survival | 41% | 41% | 89% |
| Monthly token overhead | baseline | +12% | +18% |
| Monthly cost (7 agents) | $0 | $0 | $19 |
| Re-work hours saved | baseline | ~8 hrs/mo | ~14 hrs/mo |
The +18% token overhead from running both systems is real. You're spending more tokens on reading memory files (Hipocampus) and on semantic fingerprinting (LCM). But the re-work hours saved dwarf that cost. At our scale, 14 hours of avoided re-work per month is worth far more than the extra tokens.
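To make that trade-off concrete, here is the back-of-the-envelope math. The baseline token cost and hourly rate are assumptions for illustration; the 18% overhead, $19 subscription, and 14 hours come from the table above.

```python
# Illustrative cost-benefit; baseline_token_cost and hourly_rate are
# assumptions, not measurements from this post.
baseline_token_cost = 200.0   # assumed $/mo in tokens for 7 agents
overhead = 0.18               # +18% token overhead (measured above)
lcm_subscription = 19.0       # LCM Pro ($19/mo)
hours_saved = 14              # re-work avoided per month (measured above)
hourly_rate = 75.0            # assumed fully-loaded engineering $/hr

extra_cost = baseline_token_cost * overhead + lcm_subscription  # 55.0
savings = hours_saved * hourly_rate                             # 1050.0

print(f"net benefit: ${savings - extra_cost:.2f}/mo")  # net benefit: $995.00/mo
```

Even if the assumed rates are off by a factor of two in either direction, the savings still dominate the overhead.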
## Common Pitfalls
**Pitfall 1: Running LCM without Hipocampus.** LCM preserves details within a session but doesn't persist them. If the session ends and those details weren't written to a memory file, they're gone regardless. Always pair LCM with a persistence layer.
**Pitfall 2: Setting everything to critical priority.** If everything is critical, nothing is. LCM's compression ratio drops to near zero, and you hit token limits faster. Use critical sparingly — error codes, exact configs, customer identifiers. Let standard priority handle the rest.
**Pitfall 3: Ignoring the fallback setting.** If you set `fallback: "none"` and the LCM API goes down, your agent's context management stops working entirely. Always set a fallback. "standard" is the safe choice.
**Pitfall 4: Not monitoring operation counts.** The free tier's 10K operations sounds like a lot until you realize a busy agent can burn through 500 operations in a single long session. Monitor your usage and upgrade before you hit the wall.
**Pitfall 5: Expecting LCM to replace memory.** LCM is not a database. It doesn't store anything. It's a compaction optimizer. You still need Hipocampus (or equivalent) for actual memory persistence.
## The Bottom Line
Default OpenClaw memory is a flat file that doesn't scale. Hipocampus adds persistence, hierarchy, and search across sessions. LCM adds lossless compaction within sessions. Together, they give your agent a memory system that actually works at production scale.
If you're running one agent for personal use, Hipocampus alone gets you 90% of the way there. If you're running agents that handle customer data, production configs, or complex multi-step tasks, add LCM.
Both are available today. Hipocampus is free and open source. LCM has a free tier for evaluation and $19/mo for production use. We pre-configure both on every [MrDelegate](https://mrdelegate.ai) managed hosting instance.
Your agents are only as good as what they remember. Make sure they remember everything that matters.
## Ready to get started?
[Start your free trial](https://mrdelegate.ai/start) and experience the power of OpenClaw, hosted by MrDelegate.
Your AI assistant is ready.
Dedicated VPS. Auto updates. 24/7 monitoring. Live in 60 seconds. No terminal required.