I was in a meeting today with a handful of colleagues, two of whom are heavy AI users, talking briefly about our experiences. One particular complaint came up: AI gives them noticeably different output between sessions. Sometimes brilliant, sometimes frustratingly dumb.
I use AI constantly. I describe myself as having a Claude Code window open eight hours a day, using it as a second brain. And somehow in that moment, I realized I never deal with this variability. Why was that?
Later, I started thinking about the Reddit forums I read, where people regularly complain that Claude is smart one day and dumb the next. There’s always plenty of agreement and plenty of disagreement in those threads. I’ve always found myself in the disagreement camp—but I couldn’t articulate why.
Then it hit me.
I don’t expect AI to have any memory. I don’t expect any consistency from one session to the next. That assumption has been baked into my workflow for so long that the workaround had become habit without my ever noticing it.
Why AI Has No Memory
This isn’t a bug—it’s how these systems work. Large language models are fundamentally stateless. Every API call starts fresh. The model doesn’t remember your conversation from yesterday, or even from five minutes ago in a different window.
When you’re chatting with Claude in a browser, it appears to remember because the interface keeps sending your entire conversation history with each new message. But that’s the application doing the work, not the model. Close the tab, and you’re starting over.
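The mechanics are easy to sketch. Here’s a toy illustration of what the interface is doing; `call_model` is a hypothetical stand-in for any LLM API, not a real SDK call:

```python
def call_model(messages):
    # Hypothetical stand-in for an LLM API call. The key point:
    # the model's entire "memory" is whatever appears in this argument.
    return f"(reply based on {len(messages)} messages of context)"

history = []  # the application, not the model, owns this state

def chat(user_text):
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)  # the FULL transcript is resent every turn
    history.append({"role": "assistant", "content": reply})
    return reply

chat("Hello")
chat("What did I just say?")  # only answerable because history was resent
```

Close the tab and `history` is gone; the next session’s `call_model` starts with an empty list, which is exactly the “starting over” you experience.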
There’s also a layer of genuine randomness built into these systems. A parameter called “temperature” controls how much randomness goes into picking each token; higher values mean more varied output. Even at temperature zero, variation can creep in from floating-point math and batching differences across hardware. You might send the exact same prompt twice and get measurably different responses.
So when people say “Claude is dumb today,” they might be experiencing any combination of: different context, different phrasing, different system state, or just the inherent statistical variation in how these models generate text.
My Accidental Solution
I barely ever use AI in a browser. I go straight to the command line—Claude Code, Codex, Gemini, doesn’t matter. I want the AI to directly interact with my filesystem.
When I’m starting anything—even a conversation—I’ll typically have AI create a markdown file called README.md. (I use this name because if I later turn the conversation into a GitHub repository, README.md is the file that’s displayed when you open the repo.) As I work on something, whether it’s a coding project or something else, I’m routinely asking the AI to update its markdown file.
I wrote about this workflow for meetings. My typical prompt to Claude Code before a meeting is something like: “The meeting is about to start. For the remainder of this conversation, as I provide any new information, incorporate it into the agenda at Meeting-Topic-Date.md, and sync with GitHub.” When the meeting is over, my notes are 95% ready to distribute.
Those markdown files become the AI’s memory for the task at hand. If I return the next day, I ask the AI to re-familiarize itself with the project by reading them. Then I pick up where I left off.
Treating AI Like a Stateless Service
Software engineers will recognize this pattern. When you build web applications, you don’t assume the server remembers anything about previous requests. You externalize state—to databases, caches, session stores. The server itself stays stateless, which makes it reliable and scalable.
I accidentally applied the same principle to AI. By writing everything to files, I created my own persistence layer. The AI can be stateless all it wants. My project’s state lives in the filesystem, and I reload it at the start of every session.
This is why I don’t experience AI variability. It’s not that the model is more consistent for me. It’s that I’m giving it the same context every time I resume work. The model still has no memory—but my files do.
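The reload step can even be mechanized. Here’s a minimal sketch of the idea; the `build_briefing` function and file layout are my own illustration, not any tool’s actual API:

```python
from pathlib import Path

def build_briefing(project_dir):
    """Assemble a fresh-session briefing from a project's markdown 'memory'."""
    parts = ["Re-familiarize yourself with this project using these notes:"]
    for md in sorted(Path(project_dir).glob("*.md")):
        parts.append(f"\n--- {md.name} ---\n{md.read_text()}")
    return "\n".join(parts)

# A project directory containing README.md and meeting notes yields one
# prompt carrying the entire externalized state into the new session.
```

In practice a tool like Claude Code reads the files itself when asked; the point is that the state lives on disk, and any stateless session can be rehydrated from it.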
The Practical Takeaway
If you’re frustrated by inconsistent AI output, ask yourself: are you expecting the model to remember things it can’t remember? Are you starting fresh conversations and hoping they’ll pick up where the last one left off?
Instead, try this:
- Start every project with a markdown file. Have the AI create and maintain it as you work.
- Update the file continuously. Don’t just use AI as a chat interface—use it as a document editor that happens to understand natural language.
- Reload context explicitly. When you come back to a project, tell the AI to read the files first. “Re-familiarize yourself with this project” is a prompt I use constantly.
The AI doesn’t need to remember. It just needs to be briefed.