
There's a pattern showing up consistently across organizations experimenting with AI: the technology works, but the results disappoint. A new tool gets introduced, a process gets partially automated, and a few months later the ROI conversation gets awkward.
Agentic AI systems don't just expose what organizations can do; they expose how organizations actually work. And for most, that's an uncomfortable thing to look at. The workflows held together by tribal knowledge, the accountability gaps papered over by good intentions, the processes that exist because no one ever stopped to question them… AI doesn't create these problems. It just makes them impossible to ignore.
The diagnosis is usually some version of "adoption challenges" or "change management." But that framing lets the real issue off the hook.
The problem isn't that employees are resistant to AI. It's that most organizations don't have a clear picture of how work actually flows through them, and AI has a way of making that embarrassingly visible.
That's the real reason so many AI initiatives underdeliver. Not because the technology doesn't work, but because you can't automate your way out of operational dysfunction.
For the better part of a decade, digital transformation was essentially a technology procurement exercise. Pick the right platform, find an implementation partner, go live, move on. The assumption was that good systems would naturally improve how people work.
AI is exposing how incomplete that assumption was. When you try to introduce an AI assistant into a workflow, you immediately run into questions that no one has clean answers to: Who owns this decision? Where does this information actually live? Why does this step exist? What happens when the output is wrong?
Employees aren't sure how AI fits into their day-to-day work, so they default to ignoring it or working around it. Teams hesitate to act on AI-generated outputs because no one has established when to trust them. Workflows are built entirely around human handoffs, with no clear path to automation. Institutional knowledge lives in people's heads or buried across a dozen systems. And governance conversations—necessary ones—end up functioning as a brake on any experimentation at all.
These aren't technical questions. They're operational ones. And they've been sitting unanswered under the surface of most organizations for years.
What separates organizations that are seeing meaningful results from those stuck in pilot purgatory usually isn't the AI they chose; it's what they did before selecting any tool at all.
They started by mapping how work actually moves through the organization: tracing where manual coordination creates drag, where the same analysis gets done repeatedly by different people, where knowledge is so fragmented across systems and teams that finding it becomes a job in itself. From there, they focused on a small number of high-value workflows and redesigned those before automating anything.
That sequencing matters more than most leaders realize. AI is genuinely good at the kinds of work that slow teams down most: analyzing large volumes of information, generating first drafts, summarizing complex material, coordinating tasks across systems. But it performs best when it's embedded into a well-designed process, not when it's patched onto a fragmented one.
The consulting and implementation world is still largely organized around the old model: scope the technology, build the system, train the users, close the engagement. That model made sense when the hard problem was getting software to work.
That's not the hard problem anymore.
The organizations that need the most help right now aren't struggling to find capable AI tools. They're struggling to answer foundational questions about their own operations: where AI creates real value versus surface-level efficiency, how to structure human and AI decision-making together, and how to build the internal capability to keep evolving as the technology does.
Professional services firms that can engage at that level—upstream of implementation, helping clients develop operational clarity before any build begins—are going to be significantly more valuable than those still leading with technology delivery.
If you're trying to move beyond experimentation, the most useful thing you can do isn't evaluate more tools. It's get honest about where your operations are actually fragmented.
Start by mapping how work moves across your teams and systems. Look for workflows with significant manual coordination, repeated analysis or content generation, or knowledge spread across people and tools in ways that make it hard to access reliably. Pick a small number of those—the ones where the inefficiency is most costly—and examine them closely before reaching for a solution.
The goal isn't to find places where AI can save a few steps. It's to ask harder questions about the work itself: Where should AI assist people, and where should it operate independently? Which decisions genuinely require human judgment, and which ones only seem to because that's how it's always been done? Which steps exist purely because of organizational silos rather than any real necessity? And if you were designing this workflow from scratch today, would it look anything like what you have?
This is harder work than running a pilot. It requires leadership alignment, cross-functional honesty about how work actually happens, and the willingness to fix things that have been broken for a long time. But it's the work that determines whether AI delivers compounding value or just a series of localized wins that never add up to anything.
The organizations that treat operational clarity as a prerequisite — rather than a problem to solve later — are the ones that will have something real to show for this.
This is part two in a five-part series on the confrontations agentic AI forces organizations to face. Read the rest here: Part 1, Part 3, Part 4, and Part 5.