Your AI Chats Are Getting Worse. Here's Why

This problem is well known, and easy to fix once you understand it. Here's when to keep going and when to start fresh.

Learn Prompting Newsletter

Your Weekly Guide to Generative AI Development


Hey there,

Here is a question that every AI user eventually runs into: is it better to keep one long-running conversation going, or to start fresh each time? Most people never actually decide; they simply fall into one habit or the other without taking a step back to consider it. This choice has consequences, though, and getting it right can meaningfully change how well the tool performs for you. This week we'll explore how memory works and answer the question of whether you should be starting new conversations or sticking with longer ones.

The three ways AI carries information

There are three main ways that AI models retain information, and we'll focus on each of them today. You're probably familiar with all three, but they work quite differently.

The first way is the simplest: one long conversation that you've built up over days or weeks. This fills the context window with all of your previous interactions, which means the model can "see" the entire conversation every time it responds. It might feel like the natural choice, but the context window is limited, and model performance degrades as you fill it with multi-turn history. The problem is common enough that it has a name: context rot. A 2025 benchmark found that once conversations hit 32k tokens, 10 of the 12 models tested saw their performance drop by roughly 50% compared to short-context interactions.

Interestingly, context rot doesn't affect your entire conversation equally. Research shows that once an AI uses more than 50% of its context window, it begins to forget the older content. In practice, a long-running conversation isn't pulling information evenly from across the chat; it favors the later portions while the beginning fades away.
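To make the accumulation concrete: every new turn re-sends the entire history, so the token count grows with the whole conversation, not just the latest message. Here's a rough sketch using the common ~4 characters per token rule of thumb; the 32k threshold comes from the benchmark above, but the function names and turn sizes are made up for illustration:

```python
# Rough sketch: estimate when an ongoing chat crosses a context-rot threshold.
# The ~4 chars/token ratio is a rule of thumb, not an exact tokenizer.

CONTEXT_ROT_THRESHOLD = 32_000  # tokens, per the 2025 benchmark cited above

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def conversation_tokens(messages: list[str]) -> int:
    """Every turn re-sends the whole history, so totals grow with all of it."""
    return sum(estimate_tokens(m) for m in messages)

def should_start_fresh(messages: list[str]) -> bool:
    return conversation_tokens(messages) > CONTEXT_ROT_THRESHOLD

# Example: 200 turns averaging ~800 characters each (~200 tokens per turn)
history = ["x" * 800] * 200
print(conversation_tokens(history))   # ~40,000 estimated tokens
print(should_start_fresh(history))    # True: past the degradation zone
```

At roughly 200 tokens per turn, a chat quietly crosses the 32k mark after a couple hundred exchanges, which is easy to hit over a few weeks of daily use.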

The second way AI models retain information is through the "Memory" feature. Instead of continuing one conversation for extended periods, you start fresh and rely on the built-in memory features in apps like ChatGPT and Claude. This keeps your chats much shorter, but the recall can be selective: the AI picks what it considers relevant, and that isn't necessarily what you would select. Each tool also implements memory slightly differently. ChatGPT maintains a persistent user profile made up of roughly 30-60 short facts, while Claude searches your past conversations rather than relying on a condensed summary.

Scoped workspaces like Claude Projects and ChatGPT Projects are the third option. They sit between the other two and are often overlooked. By creating a defined space for one focused topic, like your work or a book you're writing, you can add the documents, instructions, and context that matter. Any new chat you start within that project inherits the uploaded context automatically. Importantly, each project's context is walled off from other projects and from regular chats, so it doesn't bleed into the rest of your work. You can also make projects as narrow as you want: instead of one broad "work" project, you might create a "May marketing sprint" project focused on a single milestone.

So which method should you be using?

Shorter, leaner conversations are almost always better for the model. A long conversation isn't fatal, but it should be avoided, or at least kept as short as practical.

An important caveat: "long conversation" means different things to different people. If you spend only a few minutes talking to AI each day, you can keep a conversation going much longer than someone who spends hours going back and forth. What really matters is the volume of prompts and responses, not how many hours or days a chat has been active. Frequent users will see better performance by starting new conversations each day, while casual users can let conversations run over longer periods.

A specific case worth calling out is working with large documents. If you need AI to break down a long paper, video, or codebase, don't try to ask all of your questions in one session. Splitting the work into smaller, more focused chunks keeps the model's memory and performance at their best.
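One simple way to do that splitting is to chunk the source material before each session. This is only a sketch: the chunk size and overlap below are arbitrary assumptions, chosen so each chunk stays well inside the context window, and the small overlap preserves continuity across the split points:

```python
# Sketch: split a large document into focused chunks so each AI session
# works with one piece instead of the whole thing. Sizes are assumptions.

def chunk_text(text: str, max_chars: int = 8_000, overlap: int = 200) -> list[str]:
    """Split text into chunks of at most max_chars characters.
    Consecutive chunks share `overlap` characters so a question that
    lands near a boundary still has surrounding context."""
    if len(text) <= max_chars:
        return [text]
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap
    return chunks

# Each chunk would then go into its own fresh conversation.
paper = "A" * 20_000
parts = chunk_text(paper)
print(len(parts))  # 3 chunks for a 20,000-character document
```

The same idea applies to a codebase (chunk by file or module) or a video (chunk by transcript section): one focused session per chunk, rather than one sprawling session for everything.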

One of the most common failure modes in AI use today is letting a conversation run well past the point where it starts to degrade, slowly accumulating days or weeks of context. To be fair, this often goes unnoticed when you're constantly building on older content. If you use the same chat to produce a weekly marketing email, for example, you might assume the AI only needs to recall the last four iterations, not the first one. But the context-rot research shows that recent recall isn't guaranteed either, and the model reasons worse across the board as the window fills.

Where are you with AI?

We are trying to understand how you actually use AI right now, what tools you reach for, and where your team or environment sits on this. The answers shape what we build next here.

The last question asks if you would like to take this further with a 30-minute call. That part is optional. If you say yes, we will reach out as we book interviews over the coming weeks, and we send a $20 Amazon gift card afterward as a thank-you.

My Thoughts 

I use AI daily across all kinds of work, and the best habit I've built is knowing when to start over. When a conversation starts to feel off, and the model makes mistakes or can't follow instructions it was handling fine earlier in the chat, that's the sign that context rot has set in. My solution is to start fresh.

Another habit I've adopted is using skills to reduce my dependency on long conversations. When you find yourself constantly returning to a specific chat because of the unique context it holds, it may make sense to turn that interaction into a skill. A skill formalizes the process, along with the background context needed to repeat it in other chats. After creating a series of skills based on my longer chats, I can now start fresh without losing the process and essential background information that made those old chats so useful.

I’m always working to improve my AI interactions and conversational length has become a constant consideration for me. My recommendation is to start improving how you interact with AI models by using skills, projects, and fresh conversations together. That’s how you get consistently good responses from these tools.
