
When I first started building agents, I followed the examples. Like most developers, I handled state by appending everything to a single growing messages[] transcript: user prompts, model replies, tool-call requests, and tool results. It worked, at first.
But as my agent’s logic became more complex, cracks started to show. The model would repeat tool calls it had already run. Prompt templates ballooned with irrelevant context. I was writing functions to parse the messages[] transcript just to extract the right tool-call result when constructing prompts. It started to feel strange. There had to be a better way.
The deeper problem wasn’t the tools themselves; it was my assumption that the transcript was the state.
LangGraph gave me a new lens. Instead of treating the transcript as a catch-all for everything the agent might need to “remember,” I could model structured state explicitly: targeted inputs for each node and isolated fields for tool results, usage history, and other reusable data. Even certain assistant responses—like internal reasoning or chosen strategies—deserve their own state fields if they’ll be reused later. This state lives in the LangGraph thread object and persists across runs, letting me feed the model smaller, more-focused prompts with exactly the context it needs, no transcript archaeology required.
This post walks through that shift and how adopting structured state makes agents more robust, reliable, and ready for real use.
Most demos turn the transcript into an accidental database. Instead, lift heavy data into structured state and keep the transcript lightweight:
// Before
graphState = {
  messages[]  // a mix of user prompts, assistant replies, tool-call requests, and tool-call results
}

// After
graphState = {
  messages[],  // chat flow only: user prompts and assistant replies worth showing to the user
  toolResults,
  userPreferences,
  // ...other reusable data
}
The messages[] transcript stays readable and short; the agent’s real “working memory” lives in dedicated fields in the graph’s state, where it’s structured, inspectable, and easy to reuse. The data type of each dedicated field is entirely up to you: a string, boolean, number, or object, whichever best fits your needs.
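If you’re working in LangGraph’s JS/TS SDK, the “after” shape might look roughly like the sketch below. The field names (searchResults, userPreferences) and their types are illustrative choices for this post, not anything the library requires:

// A rough sketch of the "after" shape using LangGraph's Annotation API (JS/TS).
// Field names and types are illustrative, not required by the library.
import { Annotation, messagesStateReducer } from "@langchain/langgraph";
import { BaseMessage } from "@langchain/core/messages";

const AgentState = Annotation.Root({
  // Chat flow only: user prompts and assistant replies worth showing to the user.
  messages: Annotation<BaseMessage[]>({
    reducer: messagesStateReducer,
    default: () => [],
  }),
  // Dedicated, reusable fields; use whatever type fits (string, boolean, number, object, ...).
  searchResults: Annotation<string>(),                    // e.g. raw JSON from a search tool
  userPreferences: Annotation<Record<string, string>>(),
});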
Most tutorials set up an LLM with tool calling enabled and then append both the tool-call request and its JSON result straight into the messages[] log. Now imagine a user asks, “What’s the forecast for this weekend?” and then keeps chatting. The raw forecast JSON stays in the transcript and gets replayed on every subsequent turn.
That means longer prompts, more tokens, and fragile reasoning.
Lift the data into state:
graphState["searchResults"] = results_json
…then inject only what each node needs:
Here are the search results:
{{searchResults}}
Now each user message can reuse or refresh graphState.searchResults without bloating the transcript, and the LLM always sees the right data in a clean prompt.
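As a concrete, intentionally simplified sketch, here is how those two steps could look as LangGraph nodes, reusing the AgentState schema above. searchWeather() is a hypothetical stand-in for your real tool, and the model wiring assumes @langchain/openai, though any chat model works:

import { StateGraph, START, END, MemorySaver } from "@langchain/langgraph";
import { HumanMessage } from "@langchain/core/messages";
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: "gpt-4o-mini" }); // any chat model works here

// Hypothetical stand-in so the sketch is self-contained; swap in your real search tool.
async function searchWeather(query: string): Promise<string> {
  return JSON.stringify({ query, forecast: "sunny, 24°C" });
}

// Node 1: call the tool and lift its output into a dedicated state field,
// instead of appending the JSON blob to messages[].
async function fetchForecast(_state: typeof AgentState.State) {
  const resultsJson = await searchWeather("weekend forecast");
  return { searchResults: resultsJson };
}

// Node 2: build a small, focused prompt from exactly the fields it needs.
async function respond(state: typeof AgentState.State) {
  const prompt = `Here are the search results:\n${state.searchResults}\n\nAnswer the user's latest question.`;
  const reply = await llm.invoke(prompt);
  return { messages: [reply] }; // only the reply joins the visible chat flow
}

const app = new StateGraph(AgentState)
  .addNode("fetchForecast", fetchForecast)
  .addNode("respond", respond)
  .addEdge(START, "fetchForecast")
  .addEdge("fetchForecast", "respond")
  .addEdge("respond", END)
  .compile({ checkpointer: new MemorySaver() }); // state persists across turns on a thread

// Later turns on the same thread_id can reuse or refresh state.searchResults.
await app.invoke(
  { messages: [new HumanMessage("What's the forecast for this weekend?")] },
  { configurable: { thread_id: "demo-thread" } }
);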
LangGraph makes this possible, but it doesn’t force it. Making the leap is up to you. The same logic applies to reusable assistant insights: summaries, recommended actions, parsed conclusions. Store them in state, not in the messages[] array.
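In sketch form, and again as an assumption layered on the schema above rather than a library requirement: give the insight its own field, say summary, and let later nodes read it directly, reusing the llm from the previous sketch:

// Sketch: persist a reusable assistant insight in its own state field.
// Assumes `summary: Annotation<string>()` has been added to the AgentState sketch above.
async function summarize(state: typeof AgentState.State) {
  const reply = await llm.invoke(
    `In two sentences, summarize the key findings in:\n${state.searchResults}`
  );
  // Later nodes can read state.summary directly; the transcript stays untouched.
  return { summary: String(reply.content) };
}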
Using messages[] for everything works, for a while. But as agents grow, this pattern becomes brittle, opaque, and expensive.
Treat the transcript as a transcript, not a brain. Structured state preserves what matters—tool results, assistant reasoning, decisions—bringing clarity, determinism, and real-world reliability.