Build your own OpenClaw and you'll learn fast — memory is what separates a chatbot from an agent.
A chatbot forgets when you close the tab.
An agent remembers.
This post breaks down exactly how the build-your-own-OpenClaw tutorial implements memory, why the implementation matters, and the pattern you should keep when you customise.
Why Memory Is The Hardest Part
It's not technically hard.
It's conceptually hard.
Three layers of memory most people conflate:
1. Conversation memory — what we just said in this chat.
2. Session memory — what we said in chats from this week / month.
3. User model memory — who I am, my preferences, my project context.
Most chatbots only have layer 1.
Real agents have all three.
The build-your-own tutorial implements all three across steps 4-5 and beyond.
🔥 Want my Build Your Own OpenClaw memory architecture deep dive? Inside the AI Profit Boardroom I've documented the three-layer memory pattern, the file structures I use, and the customisations that make memory genuinely useful in daily work. 2,800+ members already running custom memory layers. Plus weekly coaching to refine yours. Click below. → Get the memory architecture deep dive
Step 4 — Write Memory To Disk
In step 4 of the build-your-own tutorial, every message gets written to a memory.md file.
Simple structure:
## [Timestamp] User
Message text here
## [Timestamp] Agent
Response text here
## [Timestamp] User
Next message...
Plain markdown.
Append-only.
No fancy database.
Beautiful in its simplicity.
The file just keeps growing as you talk to your agent.
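If you want to see the shape of that write path in code, here's a minimal Python sketch. The file name matches the tutorial; the `append_turn` helper and its exact formatting are my own, so treat this as the pattern rather than the canonical implementation.

```python
from datetime import datetime, timezone
from pathlib import Path

MEMORY_PATH = Path("memory.md")  # the append-only conversation log

def append_turn(role: str, text: str) -> None:
    """Append one turn ("User" or "Agent") to memory.md with a timestamp."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M")
    entry = f"## [{stamp}] {role}\n{text}\n\n"
    with MEMORY_PATH.open("a", encoding="utf-8") as f:
        f.write(entry)

# Every message in and out of the agent passes through the same call:
append_turn("User", "My favourite colour is teal.")
append_turn("Agent", "Noted, teal it is.")
```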
If you want to see how this scales in production, my Hermes agent workspace post covers how Hermes handles long-running memory files.
Step 4's Deliberate Limitation
Here's the lesson the tutorial teaches at step 4 by leaving something out.
Memory gets written.
Memory does NOT get read back.
So if you tell your agent "my favourite colour is teal" in one session, close the chat, reopen — the agent has no idea who you are.
The information was saved.
It just wasn't loaded.
That gap is intentional.
It teaches you that memory write and memory read are separate problems.
Most chatbot frameworks gloss over this distinction.
The build-your-own tutorial puts it in your face.
Step 5 — Slash Commands For Recall
Step 5 introduces slash commands.
/sessions — list previous conversations.
/resume — load up a previous session and continue.
This is where memory becomes useful.
You can now:
- See what you talked about yesterday
- Pick up where you left off
- Reference past context
The agent doesn't auto-load — but you can load on demand.
That's a deliberate design choice.
Auto-loading every previous conversation explodes token usage.
On-demand loading keeps cost down while preserving access.
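In code, the two commands are mostly file listing and file reading. A minimal sketch, assuming each past session is saved as its own markdown file under a `sessions/` folder (my layout for illustration, not necessarily the tutorial's):

```python
from pathlib import Path

SESSIONS_DIR = Path("sessions")  # assumed layout: one markdown file per session

def cmd_sessions() -> list[str]:
    """/sessions: list previous conversations, newest first."""
    files = sorted(SESSIONS_DIR.glob("*.md"), reverse=True)
    return [f.stem for f in files]

def cmd_resume(session_id: str) -> str:
    """/resume: load a previous session so it can be prepended to the prompt."""
    path = SESSIONS_DIR / f"{session_id}.md"
    return path.read_text(encoding="utf-8")

def handle_command(line: str) -> str:
    """Route slash commands; anything else is a normal message."""
    if line.strip() == "/sessions":
        return "\n".join(cmd_sessions()) or "No previous sessions."
    if line.startswith("/resume "):
        return cmd_resume(line.split(maxsplit=1)[1])
    return ""  # not a command
```

Nothing here runs automatically. Memory stays on disk until you explicitly ask for it, which is exactly the cost control the tutorial is after.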
I covered the cost-vs-context tradeoff in my generic agent post — same principle, different framing.
Phase 2 — Auto-Load And Context Compression
Once you reach Phase 2 of the tutorial, memory gets smarter.
The agent automatically loads:
- Recent conversation context (last N messages)
- Relevant past sessions (semantic search over memory)
- A summary of older context (compressed, not full)
This is similar to how Hermes implements its memory layering.
You don't have to manually /resume — the agent does it implicitly when relevant.
Context compression is the trick.
You can't load everything (token limit).
But you can summarise older context aggressively while keeping recent context verbatim.
That's what gives the agent memory without exploding cost.
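A sketch of that assembly step, with the split point and the `summarise` call as placeholders for whatever token budget and model client you actually use:

```python
def summarise(text: str) -> str:
    """Placeholder: in practice this is a model call that compresses old turns."""
    return text[:500] + "\n[...older context summarised...]"

def build_context(turns: list[str], keep_verbatim: int = 20) -> str:
    """Keep the last N turns word-for-word, compress everything older."""
    recent = turns[-keep_verbatim:]
    older = turns[:-keep_verbatim]
    parts = []
    if older:
        parts.append("### Summary of earlier conversation\n" + summarise("\n".join(older)))
    parts.append("### Recent conversation (verbatim)\n" + "\n".join(recent))
    return "\n\n".join(parts)
```

Older turns cost one summary's worth of tokens; only the recent window is paid for in full.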
The User Model — memory.md vs user.md
Here's where the architecture splits memory into two files.
memory.md — what we said. The conversation log.
user.md — who you are. The model of YOU that the agent maintains.
Examples of what goes in user.md:
- "User prefers UK English"
- "User runs a content business"
- "User is most productive in the mornings"
- "User dislikes corporate jargon"
The agent updates user.md over time as it learns about you.
This file gets injected into every system prompt.
So even when conversation memory is fresh, the agent already knows who you are.
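Mechanically, the injection is a file read and a string concatenation on every request. A sketch (file name from the post, function name mine), reading user.md fresh each time so edits the agent makes show up immediately:

```python
from pathlib import Path

def build_system_prompt(base_prompt: str) -> str:
    """Prepend the user model to the system prompt on every request."""
    user_path = Path("user.md")
    user_model = user_path.read_text(encoding="utf-8") if user_path.exists() else ""
    return f"{base_prompt}\n\n## What you know about the user\n{user_model}"
```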
This is exactly the pattern Hermes uses in production — covered in my Hermes AI course post.
The Skill File As Memory
The build-your-own tutorial blurs another line.
Skills are markdown.
When the agent learns better ways to do something, it updates the skill markdown.
That's also memory — procedural memory rather than episodic.
Three memory types now:
Episodic memory — what we said (memory.md).
Semantic memory — who you are (user.md).
Procedural memory — how to do things (skill files).
Real memory architecture in three plain text files.
Elegant.
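Mechanically it's the same read-and-rewrite loop as the other two files. A sketch, assuming one markdown file per skill in a `skills/` folder:

```python
from pathlib import Path

SKILLS_DIR = Path("skills")

def load_skill(name: str) -> str:
    """Read a skill file so it can be placed in the agent's context."""
    return (SKILLS_DIR / f"{name}.md").read_text(encoding="utf-8")

def update_skill(name: str, improved_markdown: str) -> None:
    """Overwrite a skill when the agent has learned a better procedure."""
    SKILLS_DIR.mkdir(exist_ok=True)
    (SKILLS_DIR / f"{name}.md").write_text(improved_markdown, encoding="utf-8")
```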
🔥 Want my custom memory file structures for OpenClaw? Inside the AI Profit Boardroom I've documented the memory file structures I use across multiple custom agents — episodic, semantic, procedural, plus a fourth: project memory. 2,800+ members already running these structures. Click below. → Get the memory file structures
Persistence — What Gets Backed Up
Your custom agent's memory is your most valuable asset after a few months.
Lose it, lose the work.
Three things to back up:
1. memory.md — full conversation history
2. user.md — the model of you
3. skills/ folder — all your custom skills
Set up a simple Git commit on these every day:
cd ~/my-openclaw
git add memory.md user.md skills/
git commit -m "daily backup"
git push
Push to a private GitHub repo.
If your machine dies, you can restore in 5 minutes.
For the broader backup discipline, my Hermes vs OpenClaw post covers backup patterns across both production agents.
Privacy — What's In Your Memory Files
Your memory.md will accumulate sensitive stuff over time.
Project details.
Client conversations.
Personal info.
Two privacy considerations:
1. Local only? Keep memory files on your machine and don't sync them to the cloud. Most private, but there's no off-machine copy if the disk dies.
2. Encrypted backup? Use git-crypt or similar to encrypt memory files in your backup repo.
Don't be casual about this — your AI agent's memory is effectively a personal diary plus client context plus business intelligence.
Memory Decay And Refactoring
Memory files grow forever if you don't manage them.
After 6 months, your memory.md will be huge.
Tokens for loading it will spike.
Performance degrades.
Solution — periodic memory compression.
Once a quarter, ask your agent:
"Summarise the last 90 days of our memory.md into a condensed version. Keep specific names, projects, decisions, and lessons. Drop small talk and one-off questions."
The agent produces a compressed summary.
Replace the old memory file with the compressed version (back up first).
Now you've got a smaller, faster file with the important stuff intact.
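If you'd rather script the quarterly pass than run it by hand in chat, the loop looks something like this. The `summarise_memory` call is a placeholder for your model client, and the backup-then-replace order is the part that matters:

```python
import shutil
from datetime import date
from pathlib import Path

MEMORY_PATH = Path("memory.md")

COMPRESSION_PROMPT = (
    "Summarise the last 90 days of our memory.md into a condensed version. "
    "Keep specific names, projects, decisions, and lessons. "
    "Drop small talk and one-off questions."
)

def summarise_memory(prompt: str, memory_text: str) -> str:
    """Placeholder: send prompt + memory to whatever model client you use."""
    raise NotImplementedError("wire this to your agent's model call")

def compress_memory() -> None:
    # 1. Back up first, exactly as above.
    backup = MEMORY_PATH.with_name(f"memory-{date.today()}.bak.md")
    shutil.copy(MEMORY_PATH, backup)
    # 2. Produce the condensed version.
    condensed = summarise_memory(COMPRESSION_PROMPT, MEMORY_PATH.read_text(encoding="utf-8"))
    # 3. Replace the old file with the compressed one.
    MEMORY_PATH.write_text(condensed, encoding="utf-8")
```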
This is the same pattern Hermes uses internally — automatic compression of older sessions.
I covered the compression strategy in my generic agent token efficiency post — different agent, same pattern.
Build Your Own OpenClaw Memory FAQ
How big can memory.md get before it's a problem?
Practically — 50K-100K tokens before context loading becomes slow. Compress around there.
Can I share memory between two agents?
Yes — point both at the same memory.md. Useful for collaborative agents.
Should the agent be able to edit user.md?
Yes — that's how it learns about you. But back up before letting it edit.
What if memory gets corrupted?
Restore from your daily backup. (You ARE doing daily backups, right?)
Can I encrypt memory at rest?
Yes — use disk encryption (FileVault, LUKS) or file-level encryption (git-crypt). Both work with markdown.
Does the agent forget if I delete memory.md?
Yes — memory is just the file. Delete it = fresh start.
Related Reading
- Hermes agent workspace — production memory patterns
- Generic agent — context efficiency
- Hermes AI course — memory file architecture
Final Take
Build your own OpenClaw memory and persistence layer and you'll never look at AI tools the same way again.
Three files.
memory.md, user.md, skills folder.
That's the entire memory architecture of a real AI agent.
Build it.
Maintain it.
Back it up.
Six months in, you'll have an agent that genuinely knows you and your work — and you'll understand exactly how it knows.
🔥 Ready to build the memory layer that makes your agent feel alive? Get a FREE AI Course + Community + 1,000 AI Agents 👉 join here. Or grab the full memory architecture inside the AI Profit Boardroom.
Learn how I make these videos 👉 aiprofitboardroom.com
Video notes + links to the tools 👉 skool.com/ai-profit-lab-7462
Build-your-own-OpenClaw memory is the layer that makes the agent yours: get the architecture right and everything else compounds.