# epsiclaw

Article: https://dorringel.github.io/2026/03/25/epsiclaw/
Date: 2026-03-25
Summary: The Karpathy treatment for OpenClaw. 515 lines, 6 files.

---

I’ve been following Andrej Karpathy for almost a decade now. First the Stanford lectures, then the YouTube series. I’ve always loved reading his posts and watching his videos, both because he’s such a down-to-earth guy and because of his unparalleled ability to strip complex systems down to their basic principles and deliver them in a way that just clicks.

micrograd showed that backpropagation - the thing that powers every neural network on earth - is 94 lines of Python. nanoGPT showed that GPT-2 training is two files. llm.c showed you don’t even need Python - it’s just matrix math that C handles directly. Every time, you walk away thinking: that’s it?

That reaction is the point. The algorithm is always small. The systems around it are large because they solve real problems - auth, sandboxing, multi-tenancy, geo-distribution, observability, rate limiting - but those problems are distinct from the algorithm itself. It’s the classic engineering moment: “the POC works, from here we just need auth and scale - basically algebra.” Then the algebra takes three years and 500K lines. Karpathy’s gift is making that boundary visible.

## the claw explosion

Nowadays the main thing is OpenClaw. If you’ve been anywhere near the AI agent space in the past few months, you know the story. OpenClaw is the open-source personal AI assistant - 522,000 lines of TypeScript (and growing), 331K GitHub stars, 50+ integrations, a plugin marketplace, vector memory, sandboxed tool execution, the works. It lives on your phone, knows who you are, and works for you while you sleep. It’s genuinely impressive engineering.

And the ecosystem around it is exploding. Every week a new library appears. NanoBot reimplements it in Python at 99% less code. ZeroClaw rewrites it in Rust. NanoClaw adds container isolation.
PicoClaw shrinks it to run on a $10 board. MimiClaw puts the whole thing on a $5 microcontroller in bare-metal C. There are forks of forks. AI is writing a lot of the code, which means the repos are getting bigger faster than anyone can read them. And with every new layer that gets added - another security wrapper, another channel adapter, another config system - it gets harder to see the algorithm underneath.

Everyone is building on top. Few are digging down.

## what would Karpathy do?

At some point I asked myself that question, and the answer was obvious. He’d do what he always does. He’d read the source of the real thing, find the few hundred lines that matter, and throw away everything else.

The core is identical across all of them. Receive a message. Build the context - the assistant’s persona, the user’s identity, what it remembers, and the conversation so far. Send it to an LLM with a list of tools. If the LLM asks to call a tool, execute it and loop. If the LLM returns text, send it back to the user. A cron scheduler fires reminders on a timer. Three markdown files hold everything the agent knows about itself and its user.

That’s a personal AI assistant. Everything around it - the plugin registries, vector databases, multi-provider failover, sandboxed runtimes, auth flows, config factories - is efficiency. Important for production. Not the algorithm. Or as Karpathy wrote about microgpt: “Everything else is just efficiency. I cannot simplify this any further.”

## epsiclaw

epsiclaw - as in epsilon-sized claw - is what’s left after removing everything that isn’t the algorithm.
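Stripped to a sketch, the loop described above fits in a screenful. To be clear, this is an illustration of the shape, not epsiclaw’s actual code - the stubbed `call_llm`, the message format, and the tool names are invented for the example:

```python
import json
from datetime import datetime

# Hypothetical tool registry: name -> callable (epsiclaw's real one lives in tools.py).
TOOLS = {
    "get_current_time": lambda: datetime(2026, 3, 25, 23, 29).isoformat(),
}

def call_llm(messages):
    """Stub standing in for a real LLM call: asks for the time once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "get_current_time", "args": {}}}
    return {"text": "It is 23:29. Reminder set."}

def agent_step(user_text, context):
    """Receive a message, then loop: LLM -> tool -> LLM, until the LLM returns text."""
    messages = context + [{"role": "user", "content": user_text}]
    while True:
        reply = call_llm(messages)
        if "tool_call" in reply:                       # the LLM wants a tool
            call = reply["tool_call"]
            result = TOOLS[call["name"]](**call["args"])
            messages.append({"role": "tool", "content": json.dumps(result)})
            continue                                   # loop back to the LLM
        return reply["text"]                           # plain text: send to the user

print(agent_step("Remind me to call my mom in five minutes", []))
```

Swap the stub for a POST to a chat-completions endpoint and the tool dict for real functions, and that while-loop is the whole product.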
```
agent.py    146 lines   the ReAct loop (receive → LLM → tool → loop)
tools.py    155 lines   tool registry + 7 built-in tools
memory.py    65 lines   context from SOUL.md + USER.md + MEMORY.md + conversation
cron.py      63 lines   persistent job scheduler
channel.py   53 lines   Telegram long-polling + send
llm.py       33 lines   one function: POST to any OpenAI-compatible API
----------------------
            515 lines total
```

Same code, one conversation — tools, cron, and memory in action:

```
$ python agent.py
epsiclaw is running. Accepting chat IDs: {'********'}
[********] user: /start
[********] bot: What's up?
[********] user: Can you remind me to call my mom in exactly five minutes?
[********] tool: get_current_time()
[********] tool: cron_add(description='call my mom', datetime_str='2026-03-25 23:34')
[********] bot: On it.
[********] cron: call my mom
[********] bot: Time to call your mom.
[********] user: Thanks, appreciated.
[********] tool: memory_write(content='The user appreciates reminders and expresses gratitude when helped.')
[********] bot: Noted. I'll remember that.
```

One dependency: httpx. Six files. The file tree maps directly to the architecture - you can see the design in the directory listing.

Memory is a markdown file the AI reads and writes. No vector database. No embeddings. The LLM already understands text - the embedding step is an optimization for scale, not the algorithm. One channel (Telegram), because the algorithm doesn’t change whether input comes from Telegram, Slack, or a CLI. One LLM provider, because retry logic is reliability engineering, not the core loop. No sandboxing, because security is critical in production but it’s not what makes a claw a claw.

## what makes it different from the other minimal claws

Intent. NanoClaw says “small enough to understand.” MicroClaw says “now in Rust.” openclaw-mini says “learn how OpenClaw works.” They’re all valuable projects. But they’re removing features to ship a product - a smaller product, a safer product, a product in a different language.
epsiclaw removes features to reveal the algorithm. Like micrograd isn’t a small PyTorch - it’s the answer to “what is backpropagation?” Like nanoGPT isn’t a small HuggingFace - it’s the answer to “what is GPT training?” epsiclaw is the answer to “what is a personal AI assistant?”

The answer turns out to be: a channel, a memory, an agent loop, and a cron. 515 lines. Readable in one sitting. Epsilon: not the smallest, but the smallest that still matters.

## try it

```
git clone https://github.com/dorringel/epsiclaw.git
cd epsiclaw
pip install httpx
cp .env.example .env   # fill in API keys
python agent.py
```

Send a message to your bot on Telegram. Ask it to remind you to call your mom in five minutes. Watch it remember.

[GitHub →](https://github.com/dorringel/epsiclaw)
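A postscript, in case “memory is a markdown file the AI reads and writes” sounds too simple to be true: here is roughly what that layer amounts to. This is a sketch under assumptions - the function names and the list format are mine, not necessarily epsiclaw’s - but the three file names come straight from the file tree above:

```python
from pathlib import Path

# Persona, user profile, learned facts - everything the agent "knows".
MEMORY_FILES = ("SOUL.md", "USER.md", "MEMORY.md")

def build_context(root, conversation):
    """Concatenate the three markdown files into a system prompt, then append the chat."""
    parts = []
    for name in MEMORY_FILES:
        path = Path(root) / name
        if path.exists():
            parts.append(f"## {name}\n{path.read_text()}")
    return [{"role": "system", "content": "\n\n".join(parts)}] + list(conversation)

def memory_write(root, fact):
    """Long-term memory is a file append - no embeddings, no vector index."""
    with open(Path(root) / "MEMORY.md", "a") as f:
        f.write(f"- {fact}\n")
```

`memory_write` is the whole “it remembers” feature, and `build_context` is why it works: the facts ride along in the prompt on every turn, and the LLM reads them the same way you would.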