FirstMile Ventures
  • Home
  • Approach
  • Team
  • Portfolio
  • Blog
  • Talent
Pitch us

The FirstMile Blog
The latest in tech from the Rockies to the Rio Grande

9/9/2025

Dead Ends & U-turns: How to Manage AI’s Context Rot

 
In my first two posts of our AI series, I explored how today’s AI coding assistants bump into memory gaps when context windows are too short, and why code-generation experiments often feel like working with a very smart, but distractible, junior teammate. In this third installment, I look at the flip side: the problems that arise when context windows are too long. The culprit is a phenomenon researchers call “context rot”.
By Bill Miller
What Is “Context Rot”?
Researchers at Chroma recently described the problem in detail under the label “context rot” (https://research.trychroma.com/context-rot). The term refers to what happens when a long conversation with an AI drifts off course because the model clings to outdated or irrelevant parts of the discussion.

If you’ve ever been deep in a project—planning a product launch, iterating on a financial model, or debugging code—you know that directions change. Dead ends appear. Teams pivot. But AI systems don’t always adapt gracefully. Even after you explicitly abandon an approach, the model may later “resurrect” those discarded ideas and weave them back into its answers. These abandoned threads become “distractions”: leftovers from old paths that dilute the clarity of the current work.

Why It Happens
AI models like ChatGPT and Claude are trained to continue conversations by drawing from the “entire context window” of prior messages. They don’t “know” which parts are still relevant—they only know that those words are in scope. The result:

  • Old paths keep resurfacing. Abandoned plans or code snippets get treated as still-valid instructions.
  • Contradictions accumulate. New decisions override old ones in your mind, but the AI tries to reconcile them both, sometimes producing muddled or contradictory responses.
  • Distractions multiply. Having explored abandoned paths, the model drags them back into focus later, adding noise instead of clarity.
  • The longer the thread, the higher the risk. As conversations stretch across hours or days, these artifacts of prior thinking persist, and the AI’s ability to sort them correctly degrades.
This is the essence of context rot: valuable context gets entangled with obsolete context, and the AI can’t always tell the difference.
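The mechanics above can be sketched with a toy chat loop. This is not a real API call; the `send` helper and message shape simply mimic the common chat-completion pattern, where every turn re-sends the full history:

```python
# Toy sketch: each turn re-sends the ENTIRE history, so abandoned ideas
# from early turns are still "in scope" many turns later.
history = []

def send(role, content):
    history.append({"role": role, "content": content})
    # A real client would pass `history` as the messages payload here.
    return history  # the model sees everything, relevant or not

send("user", "Let's try Approach A: cache results in Redis.")
send("assistant", "Sketching the Redis-backed cache...")
send("user", "Approach A won't work here. Pivot to Approach B: an in-process LRU cache.")
send("assistant", "Switching to an in-process LRU cache.")
payload = send("user", "Write the final cache module.")

# The Redis messages are still in the payload -- the model must infer
# on its own that they are obsolete. That inference is what degrades
# as threads grow, producing context rot.
assert any("Redis" in m["content"] for m in payload)
```

Nothing in the payload marks Approach A as dead; only the surrounding wording hints at it, which is why the model can resurrect it later.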

A Simple Example
In my own use, I’ve seen this play out when working with ChatGPT-5 (thinking mode) and the latest versions of Claude Code. Imagine starting a coding project where you explore “Approach A,” realize it won’t work, and pivot to “Approach B.”

Even after making that switch explicit, the model sometimes falls back on snippets or assumptions from Approach A. It’s not malicious or careless; it’s simply doing what it’s designed to do: predict the next best word given “everything” it has seen. But for you, the human trying to move forward, it feels like your collaborator has forgotten the decision to pivot.

When I instead start a “fresh conversation” with just the relevant context for Approach B, the results are far cleaner and more aligned.

Managing Context Rot in Practice
Until the models get better at distinguishing “current” from “abandoned” context, there are a few strategies you can use:

1. Start fresh when directions change. If you’ve pivoted significantly, spin up a new chat and provide only the context you want carried forward. 
2. Summarize the new direction. Give the AI a clear, short recap of where you are now, almost like a project reset. 
3. Use external memory tools. Some emerging platforms allow you to pin or filter context (think of them as “working documents” the AI can refer back to). 
4. Treat AI like a collaborator who takes notes too literally. Don’t assume it “knows what you mean”—spell out when something should be ignored going forward. 
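Strategies 1 and 2 can be sketched as a small helper. The function name and message shape are illustrative, not any particular platform’s API: instead of carrying the whole thread forward, seed a fresh conversation with only a short recap and any facts worth pinning:

```python
# Illustrative helper for "start fresh + summarize the new direction":
# build the opening context for a new chat from a recap, not a transcript.
def reset_conversation(recap, pinned_facts=()):
    """Return the starting messages for a fresh chat (shape is illustrative)."""
    messages = [{"role": "system", "content": "Project recap: " + recap}]
    for fact in pinned_facts:
        messages.append({"role": "system", "content": "Keep in mind: " + fact})
    return messages

fresh = reset_conversation(
    "We are building Approach B, an in-process LRU cache. "
    "Approach A (Redis) was abandoned; do not reference it.",
    pinned_facts=("Target language is Python 3.12",),
)
# Two short messages carried forward, instead of a long, rotted transcript.
```

Note the recap names the abandoned approach and explicitly rules it out, per strategy 4: spell out what should be ignored rather than assuming the model knows.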

Why This Matters for Executives
For executives experimenting with AI in workflows—strategy, sales, finance, product—the lesson is this:  

  • AI can accelerate iteration, but it doesn’t manage project history the way humans do. People naturally forget dead ends; AI preserves them.
  • You need to actively manage the conversation. Think of yourself as both the project lead and the editor of the AI’s working memory.
  • The cost of context rot grows with complexity. A quick Q&A rarely suffers, but a weeks-long planning or coding effort can.

Looking Ahead: Private Orchestration and Memory Control

This is where private orchestration engines like Kamiwaza, a FirstMile Ventures portfolio company, point the way forward. Beyond the public inference options everyone has access to, Kamiwaza gives teams much finer-grained control over how inference is managed, including direct control over the model’s “working scratchpad,” or what AI engineers call the “KV cache.”

Think of this scratchpad as the AI’s short-term notebook of the conversation:  

  • You can lock in key instructions (such as policies, tool definitions, or guardrails) so they don’t get overwritten.  
  • You can keep only the most recent exchanges, while older material is automatically summarized to avoid distractions.  
  • You can discard stale notes while preserving the essentials, ensuring efficiency and clarity.  
  • You can even have tools store results externally, with the AI pulling back only concise references rather than reloading full payloads each turn.  
When paired with newer open-source models that can recall up to a million tokens, this orchestration approach allows teams to choose between lean, focused sessions for rapid iteration or sprawling transcripts for full-scale audits. 
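The scratchpad discipline described above can be sketched as a small context compactor. This is a toy illustration of the idea, not Kamiwaza’s actual API: pinned instructions stay locked in, the last few exchanges are kept verbatim, and everything older collapses into a one-line placeholder summary:

```python
# Toy sketch of scratchpad management: pin key instructions, keep recent
# exchanges verbatim, and collapse older material into a short summary.
def compact_context(pinned, history, keep_last=4):
    recent = history[-keep_last:]   # most recent exchanges, kept as-is
    older = history[:-keep_last]    # stale material to be summarized away
    compacted = list(pinned)        # pinned policies/guardrails come first
    if older:
        note = "Summary of %d earlier messages (details archived)." % len(older)
        compacted.append({"role": "system", "content": note})
    compacted.extend(recent)
    return compacted

pinned = [{"role": "system", "content": "Policy: never log customer PII."}]
history = [{"role": "user", "content": "msg %d" % i} for i in range(10)]
ctx = compact_context(pinned, history, keep_last=4)
# 1 pinned + 1 summary + 4 recent = 6 messages instead of 11.
```

In a real system the summary would be produced by a model rather than a counter, and archived details would be fetched back by reference when needed, as the fourth bullet describes.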

Taken together, scratchpad control, disciplined prompting, and model selection enable conversations that remain coherent, efficient, and free from distraction. For executives, the message is clear: infrastructure choices—not just which AI model you pick—will determine whether your AI collaborator stays sharp or drifts into context rot.

In short: AI doesn’t yet know what to forget. But with private orchestration and smarter memory management, we’re starting to build systems that can.


