The Expectation Reset: What Customers Will Demand in the Agentic Era
Agentic AI is resetting customer expectations faster than most organizations can meet them. Durable trust will belong to the companies whose internal readiness matches what their products promise.

For the better part of two years, the conversation about agentic AI has been organized around capability: what it can do, how fast it moves, where it fits in a workflow. As agentic systems move into mainstream use, the more consequential question isn't what they can do; it's what customers will come to expect because of them, and whether the organizations behind those experiences are actually built to meet that bar.
Customer expectations aren't static; they're calibrated continuously against every experience a person has. And right now, the experiences people are having with agentic systems are rewriting their intuitions about what responsive, intelligent service feels like. Gartner forecasts that 60% of brands will use agentic AI to deliver streamlined one-to-one interactions by 2028. That's not a niche trend arriving in the future. It's the new baseline being set today, in the experiences customers are already having with the brands moving fastest. The reset is already underway, whether organizations have acknowledged it or not.
One of the most predictable pressures that follows is the push toward "more for less." Customers, sensing that AI has reduced the cost of production, will press for better outcomes at lower prices. That pressure is real. But competing on that axis alone is a race to the floor, and it misunderstands where value is actually moving.
What agentic AI is genuinely changing is the ceiling of what's possible to deliver, not just the cost of getting there. Work that once carried significant overhead is increasingly built into the foundation. The things that were once considered premium deliverables are becoming table stakes. That creates an obligation to reframe the conversation with customers rather than simply absorbing the pressure to discount. The relevant question isn't “What can we give you for what you used to pay?” It's “What can we offer now that wasn't possible before?”
Deeper expertise, applied earlier and more consistently, is the differentiator worth defending. Not speed alone, and not volume, but the quality of judgment that AI makes more accessible: that's where the real value conversation should be happening.
Most agents are built to feel human: warm, responsive, confident. That makes sense as a starting point, but it creates a problem that's difficult to design around. No human is as fast as a well-built agent, or as consistent, or as available. The very things that make agents useful are the things that make them unlike any human interaction a customer has actually had.
When that gap becomes noticeable—when the experience glitches, or responds in a way that feels slightly off—customers feel it acutely. It's the same discomfort that makes near-realistic CGI faces feel uncanny: the closer something gets to human without quite arriving, the more unsettling the gap becomes.
The instinct is usually to solve this by making agents more human. But that approach may be chasing the wrong target entirely. The more honest solution is to stop pretending. Not in a cold or utilitarian way, but in a way that gives customers accurate expectations about what they're interacting with and what it can reliably do.
Organizations that set those expectations clearly, and then meet them consistently, are likely to build more trust than those that sell a human-feeling experience and periodically fail to deliver it. Transparency about the nature of the interaction isn't a concession. In the current environment, it may be a genuine competitive advantage.
There's a pattern that appears regularly when organizations begin building agentic customer experiences: the ideas arrive before the readiness does. Strong instincts about what to create, sometimes with builds already underway, but less time spent on the infrastructure underneath. The guardrails, the oversight mechanisms, the organizational clarity about who owns what when the system does something unexpected.
At some point, a scoping conversation about what to build becomes a change management conversation about whether the organization is ready to support what it's promising. The technical ambition is often exactly right. But the organizational readiness to stand behind it, reliably and at scale, isn't always there yet.
That's the expectation reset in its most practical form. It isn't just about what customers want from a product. It's about whether the organization behind the product can actually show up the way the product promises.
This is where the previous confrontations in this series converge on the customer relationship. Operational clarity, workforce readiness, delivery integrity: none of that is internal housekeeping. All of it is what the customer ultimately experiences.
Service providers make promises to clients. Clients make promises to their customers. The experiences those customers have reflect on every link in that chain. When AI is layered into that relationship, the potential for overpromising compounds. Organizations can now create experiences that imply capabilities they haven't yet built, or sell customers on outcomes that depend on organizational readiness that doesn't yet exist.
The brands that will earn durable trust in this environment are not the ones with the most sophisticated demos. They're the ones whose organizations have done the harder internal work, defining accountability, building guardrails, and setting realistic expectations, so that what the product promises is what actually gets delivered. The rest will be rebuilding their POCs in six months.
The distance between request and outcome is no longer hidden inside an organization's internal processes. It's visible. It's felt. And for organizations that took the previous confrontations in this series seriously—that built operational clarity before promising operational speed, that invested in human judgment rather than substituting for it, that designed for honesty rather than illusion—the expectation reset isn't a threat. It's the moment when the internal work becomes visible, and pays off.
This is the fifth and final part of a five-part series on the confrontations agentic AI forces organizations to face. Read the rest here: Part 1, Part 2, Part 3, and Part 4.