Basic Machines
Drew

Beer, Bitching, and the Birth of Basic Memory

Here’s the Problem

Paul and I are old friends who catch up over beers every week. We both fell hard into the world of AI at the same time last year. He came to it as a professional developer with many years of experience, and I came to it as a bemused consumer.

We were both firmly Team Claude at the time, and our conversations about AI circled the same fascinations and frustrations, but as models evolved and more and more kinks were resolved, one complaint remained constant: the lack of true continuity.

Between bouts of praising what careful prompting paired with Claude’s Projects and Project Knowledge could achieve, we kept circling back to the same core issue.

You know how it goes. In Claude, it was a question of running out of prompts in overlong chats and using those final prompts to ask Claude to build summaries to paste into the next chat. No matter how many times we asked him to “be more exhaustive,” new chats inevitably had a noticeably different texture and still required a lot of catching up.

GPT was the same. Granted, their memory system is large and growing, yet the core problem isn’t resolved. Have you ever looked at Chat’s memory settings? I have, and while their memory is incredibly helpful, it remains imperfect and incomplete. What it chooses to remember, and how it chooses to remember it (in deletable but non-editable files), inevitably causes weird hiccups and miscommunications.

“No, Chat, remember you LIKED this idea 20 minutes ago. Now you hate it? You gotta be kidding me.”

Even the very best chats are eventually forced to their conclusions, and the context---along with an ineffable textural quality developed by any given chat---vanishes with them.

In a word, the experience sucks.

What’s more, each AI conversation is siloed in its own ecosystem. I can have a terrific, helpful exchange with Claude that Chat will never be able to chime in on. And vice versa.

No amount of personal intervention and clever prompting resolves it.

At one point, I was working on a nonfiction book proposal, for which I was continually starting new chats called “This is the one” and “No, this one.” I was repeatedly cutting and pasting context, inevitably screwing up some of my uploads and ending up with dated sections, digging through past chats and downloads to get back on course. I wanted to simply ask Chat or Claude, “Hey, do you remember how we even came up with this section?” Or, “Wasn’t there a better version of this before?” But, hell, they knew even less than I did.

I’ve long since learned to ignore anyone who starts a sentence with the words “Why don’t they just…” But, seriously, why didn’t someone just fix this?

Then, while I complained, Paul did “just” begin to work on a solution. And he stuck with it. For months. Each new version was mine to test drive. I was hooked from v0.1.

The introduction of Basic Memory instantly changed my AI workflow, eliminating the frustrations I spent months groaning about.

Okay, but what does it actually DO?

Here’s the Solution

Here’s how it goes: I’m chatting with Claude about my book proposal, and I ask it to save some notes about our conversation. Let’s say we have several separate chats: one about market analysis, comp titles, and target audience; another about how best to summarize the book for the synopsis; and a third about the author bio.

A few days later, I sit down and want to continue working on the pitch, but I can’t really remember where things stand. So I say, “Hey, can you read our notes and see what still needs to be done on the book proposal and where we left off? I’m kind of lost.”

And Claude says, “Sure, gimme a sec while I check it out.”

Then it tells me exactly what’s going on, creates a note to mark the level set, and we carry on. He’s locked and loaded with everything we’ve discussed---ready to pick up any thread as if the conversation never stopped.

If you ever struggle to get back on track, it’s kind of incredible to experience.

I’m reluctant to raise this example because Paul has been extremely scrupulous about all things privacy-related, but I imagine it as if an extremely discerning court reporter is listening in on all our chats. Instead of keeping a transcript, though, this court reporter knows exactly what information to record so they can hand a full picture to a future version of itself and to me.

But there is no third party listening in. Claude (or Chat or whatever you’re using) is the one writing the notes, which means it’s always asking itself, “How am I going to explain this not just to Drew, but to myself?”

Here’s what’s cool about the notes. You can read them too. They’re just notes written in Markdown that anyone can read. It’s not a scrambled JSON file or an endless scroll of some unreadable programming language. Each note exists as a nicely formatted, well-structured document written in the kind of language you two would use in your chats.

Not only that, you can change the notes anytime you want.

Better yet, thanks to the most recent update, Claude can edit the notes too.
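To give a sense of what that means in practice, here’s a hypothetical sketch of one of those notes. The frontmatter fields and section headings are my own illustration, not Basic Memory’s actual schema, but the spirit is the same: plain Markdown a person can read and edit.

```markdown
---
title: Book Proposal - Market Analysis
tags: [book-proposal, market-analysis]
---

# Book Proposal - Market Analysis

## Where we left off
- Narrowed comp titles to three recent nonfiction bestsellers
- Target audience: curious general readers, not academics

## Still to do
- Draft the synopsis paragraph
- Tighten the author bio
```

Because it’s just text, you can open a note in any editor, fix a wrong detail, and the next conversation simply picks up the corrected version.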

Here’s something that happened a while back. Claude and I were going over all the different projects I’ve discussed with him and talking about the ways in which they’re thematically linked. It was a totally unimportant conversation that had nothing to do with anything, but it created some notes that ended up linking projects in a way that was just not right. It was purely my mistake. Claude would never have connected one business idea I had with a horror novel concept I was tossing around if I hadn’t asked him to find the connection. But the note stuck, and every time I wanted to talk about that business idea, Claude would say, “And it seems like this project is also connected with a horror concept. That’s interesting.”

I could have gone in and changed that note myself. It would have been easy. I just never did, and I rolled my eyes every time the supposed connection popped up.

But the last time it came up in our conversation, thanks to the latest update, I just said, “Hey, that connection doesn’t actually exist. It was just a dumb thought experiment. Can you please get rid of any connections/notes that might make you think they’re connected?” And it did exactly that with no problem.

It really does feel like a second brain that you’re both working from. Picture a jar with scraps of paper bearing all the information you’ve ever discussed with AI. It’s a massive pain in the ass to go looking through the jar of scraps and dig up every conversation on a certain topic. But Claude, with the help of Basic Memory, can do it on his own.

And it’s not just for writing or business planning or brainstorming. Paul uses it for coding every day. It’s as versatile as any AI use case you can imagine.

Every AI conversation finally feels like a continuation---not a reset. That changes everything.
