Building Custom OpenClaw Skills: A Developer Guide
March 29, 2026 · MrDelegate
OpenClaw skills are what separate a generic AI assistant from an agent that actually knows your tools, your workflows, and your context. A skill is a self-contained instruction set that tells the agent exactly how to use a specific capability — from a simple web search wrapper to a complex multi-step workflow that reads files, calls APIs, and stores results across sessions.
This guide walks through building skills from scratch: the file structure, writing your first SKILL.md, adding external API calls, managing state, and publishing to ClawHub so others can use what you've built.
Skill File Structure
Every OpenClaw skill lives in its own directory and requires exactly one file: SKILL.md. That's the contract between the skill and the agent. Everything else — scripts, reference documents, config files — is optional scaffolding.
A minimal skill directory looks like this: a folder named after your skill containing SKILL.md at the root. If your skill needs helper scripts, put them in a scripts/ subdirectory. Reference documents, examples, and lookup data go in references/. The agent reads SKILL.md and follows its instructions; it can also read supporting files when the skill directs it to.
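A sketch of that layout (the directory and file names here are illustrative; only SKILL.md is required):

```
weather-skill/
├── SKILL.md          # required: the contract between skill and agent
├── scripts/
│   └── fetch.sh      # optional helper script
└── references/
    └── locations.md  # optional lookup data and examples
```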
SKILL.md has four essential sections: a description of what the skill does (this is what the agent uses to decide whether to load the skill), prerequisites (any env vars, credentials, or tools required), the step-by-step instructions the agent follows, and output format. Keep the description tight — it's used for skill matching, and vague descriptions lead to the wrong skill being loaded at the wrong time.
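A minimal skeleton covering those four sections might look like this (the section names and wording are illustrative; adapt them to your skill):

```markdown
# Weather Lookup

## Description
Fetches current weather for a named city. Use when the user asks
about current conditions for a specific location.

## Prerequisites
- Network access
- No credentials required

## Steps
1. Take the city name from the user's request.
2. Call the weather endpoint with that city as the location parameter.
3. Parse the response and extract the current conditions.

## Output
One sentence: "<city>: <conditions>, <temperature>".
```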
Writing Your First Skill
Start with the simplest possible version that does one thing well. A weather skill, for example, just needs to know how to call wttr.in with a location parameter and format the response. Don't build in caching, historical lookups, or multi-city comparison on the first version.
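The steps section of a skill this small stays tiny. A sketch, assuming the agent has curl available (`format=3` is one of several one-line output formats wttr.in supports):

```markdown
## Steps
1. Read the location from the user's request.
2. Run: curl -s "https://wttr.in/<location>?format=3"
3. Return the single-line output verbatim to the user.
```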
The most common mistake in first-time skill authoring is writing instructions for the agent the same way you'd write documentation for a human — with implicit assumptions, skipped steps, and context the reader "should" have. Agents don't have context. Every step needs to be explicit. If the skill requires reading a config file before making an API call, write "read the config file at [path]" as a numbered step, not an afterthought.
Test your skill by actually triggering it. Ask the agent to do what the skill is designed for, and watch what happens. If the agent does something different from what you intended, the fix is almost always in the description (to help the agent select the right skill) or in the step precision (to help the agent execute correctly once loaded).
Adding API Calls
Skills that call external APIs need to tell the agent where credentials live and how to handle failures. The standard pattern: store API keys as environment variables, reference them by name in SKILL.md, and note what the agent should do if the key is missing or the API returns an error.
For a skill that calls a REST API, your instructions should specify: the endpoint, the auth header format, required and optional parameters, how to parse the response, and what to return to the user. Be specific about response parsing — "extract the temperature field from the JSON response" is clearer than "get the weather data."
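As a sketch of what those instructions translate to at execution time (the endpoint, header format, and the `WEATHER_API_KEY` variable name are hypothetical placeholders, not a real API):

```python
import json
import os
import urllib.request

def fetch_temperature(city: str) -> str:
    """Call a hypothetical weather API and extract one field from the JSON."""
    api_key = os.environ.get("WEATHER_API_KEY")
    if api_key is None:
        # Mirror what SKILL.md should say: fail with a clear, actionable message.
        return "Error: WEATHER_API_KEY is not set. Export it before running this skill."
    req = urllib.request.Request(
        f"https://api.example.com/v1/weather?city={city}",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            data = json.load(resp)
    except Exception as exc:
        return f"Error: weather API call failed ({exc})."
    # "Extract the temperature field" -- be exactly this specific in SKILL.md too.
    return f"{city}: {data['temperature']}°C"
```

The missing-key branch is the part worth copying: a skill that states precisely which variable is absent is far easier to debug than one that lets the HTTP 401 surface raw.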
Rate limiting is something many skill authors forget until it matters. If you're building a skill that could be triggered in a loop (summarizing a list of URLs, processing a batch of records), add an explicit note about rate limits and include a recommended delay between calls. An agent that hammers an API until it gets 429'd is a broken skill, not an API problem.
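A batch-processing sketch of that advice (the one-second default delay is illustrative, not a number from any particular API's terms of service):

```python
import time

def process_batch(items, handler, delay_seconds=1.0):
    """Process items sequentially with a fixed pause between calls,
    so a skill triggered in a loop never hammers the upstream API."""
    results = []
    for i, item in enumerate(items):
        if i > 0:
            time.sleep(delay_seconds)  # gap between consecutive calls
        results.append(handler(item))
    return results
```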
For skills that need OAuth flows or multi-step authentication, use a scripts/ file to handle the auth logic and have SKILL.md instruct the agent to execute that script rather than trying to embed complex auth flows inline in markdown.
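In SKILL.md, that delegation is a single step rather than inline auth logic (the script name and its "authenticated" output are hypothetical):

```markdown
## Steps
1. Run scripts/oauth_login.py and wait for it to print "authenticated".
2. If the script prints an error instead, stop and report it to the user.
3. Continue with the API calls below using the token the script cached.
```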
Memory and State
OpenClaw agents are stateless between sessions by default, but skills can create persistence by reading and writing files. If your skill needs to remember something across sessions — last run timestamp, cached results, user preferences — the pattern is simple: define a state file path in SKILL.md and instruct the agent to read it at the start and write to it at the end.
A heartbeat skill that checks email might write the last-checked timestamp to a JSON file in the workspace. Next session, it reads that file to know where to start. This is the same pattern used by built-in OpenClaw skills — straightforward file I/O, not a special state-management API.
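The read-at-start, write-at-end pattern is plain file I/O. A sketch, with the state file path and key name as assumptions:

```python
import json
import os
from datetime import datetime, timezone

STATE_PATH = "state/email-checker.json"  # hypothetical path; document it in SKILL.md

def read_state() -> dict:
    """Return the saved state, or an empty dict on first run."""
    try:
        with open(STATE_PATH) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def write_state(state: dict) -> None:
    os.makedirs(os.path.dirname(STATE_PATH), exist_ok=True)
    with open(STATE_PATH, "w") as f:
        json.dump(state, f, indent=2)

# Session start: find out where we left off.
state = read_state()
last_checked = state.get("last_checked")  # None on first run

# ... check email received since last_checked ...

# Session end: record this run for next time.
state["last_checked"] = datetime.now(timezone.utc).isoformat()
write_state(state)
```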
Keep state files small and structured. A flat JSON object with clearly named keys is better than a log file the agent has to parse. And document the state schema in SKILL.md — if another agent or session needs to read your state, they'll need to know what to expect.
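A flat, documented schema along these lines (field names illustrative) is something any later session can read without guesswork:

```json
{
  "last_checked": "2026-03-28T09:15:00Z",
  "messages_seen": 42,
  "digest_enabled": true
}
```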
Testing Before Publishing
A skill isn't ready to publish until you've tested the unhappy paths: missing credentials, API timeouts, malformed input, the skill being triggered in the wrong context. Write at least three test scenarios in a references/test-cases.md file: one happy path, one error case, one edge case.
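A references/test-cases.md along these lines keeps the unhappy paths honest (the scenarios and the env var name are illustrative):

```markdown
## Test Cases

1. Happy path: "What's the weather in Paris?" returns one-line conditions
   for Paris.
2. Error case: with WEATHER_API_KEY unset, the skill reports the missing
   variable and how to set it, without calling the API.
3. Edge case: "weather in San José" resolves the accented, ambiguous
   location, or the skill asks the user to disambiguate.
```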
Check that your skill description doesn't overlap with other skills you have installed. Ambiguous descriptions cause the agent to load the wrong skill — a common source of confusing behavior that looks like an agent bug but is actually a skill matching problem.
Publishing to ClawHub
ClawHub is the registry for OpenClaw skills. Publishing is straightforward with the clawhub CLI: authenticate with your ClawHub account, run clawhub publish from your skill directory, and the CLI handles packaging and upload. You'll set a version number, a public description, and tags that help others discover your skill.
Before publishing, strip any hardcoded credentials from your skill files (they should already be using environment variable references). Add a clear README section to SKILL.md explaining what env vars are required and how to obtain them. Skills that require setup without documenting the setup process get poor ratings fast.
Version discipline matters once your skill is published. Breaking changes — changes to the state schema, renamed env vars, changed output format — should be a major version bump. Minor improvements and bug fixes can be patch versions. People building on your skill need to know when to update and when to hold.
The best OpenClaw skills do one thing, do it reliably, and document every assumption. Start narrow, ship fast, iterate based on how people actually use it. That's the same rule that applies to any good software.