
Change management has long played a supporting role: comms, training, and checklists meant to ease people into new tools, with a low bar for success. Check the boxes, launch the system, and move on. Good enough. But good enough came at a cost.
Teams adopted software without fully using it. Workarounds became the norm. Strategic goals like efficiency, insight, and speed were diluted by partial adoption and quiet resistance. Change happened, but results were never fully realized even as millions were spent on platforms and integrations, not because the tech failed, but because the human transition was underpowered or ignored.
Now, AI is raising the stakes on transformation.
Intelligent agents don’t just streamline tasks; they change how decisions are made, how roles are structured, and where control lives. They act more like semi-autonomous teammates than tools, influencing outcomes in ways that feel personal. Adopting them is less like rolling out software and more like adding a new hire: one who never sleeps, knows more than most, and, in the eyes of many, wasn’t invited.
That kind of change hits differently. It needs to be treated like a true team shift, with clarity, shared expectations, and trust. Without that, resistance sets in.
Traditional change management breaks down in the face of AI and intelligent systems. It assumes change happens after the system is built: a rollout phase, a round of training, and a handoff to enablement. But with AI and agents, the hard part isn’t getting people to use a system. It’s designing the system so people want to use it, because it fits how they work, earns their trust, and clarifies (not confuses) their role.
That can’t be retrofitted. Change has to shape how the thing is built from the beginning.
AI-led transformation fails when people don’t trust it, don’t understand their role, or can’t see how it improves their work. Those foundations—clarity, confidence, trust—must be baked in early. No training or comms sprint can fix what wasn’t considered upfront.
AI doesn’t just change tools; it changes decisions, roles, and control. That brings fear. And fear shows up long before launch day: in uncomfortable meetings, pronounced silences, and hesitation to commit to decisions. If you don’t design for trust throughout, you’ll spend your time managing resistance after the fact.
Treating change as a core constraint doesn’t mean adding extra meetings or documentation. It means asking better questions at key moments, ones that shape the build itself. The earlier these questions surface, the fewer surprises downstream.
What’s changing for the people this system touches, and how materially? If the answer is “a lot” (and it probably is), treat those changes like features: scoped, validated, and tracked. Map old vs. new workflows. Run role clarification workshops before development starts. Build in readiness checkpoints. Create frontline feedback loops that shape the system in real time.
Who’s losing control, visibility, or decision rights? These people are often gatekeepers, and their quiet resistance can sink adoption. Identify friction early: who used to approve something that the agent now handles? Bring them into plan reviews. Document what’s shifting, and clarify how governance will work going forward. Don’t gloss over the power dynamics; deal with them directly.
Where is the team skewing negative, and how do we rebalance? With AI and agents, teams often fixate on what might break, who might lose, or why it won’t work. That caution has a place, but it can drown out possibility. Run pre-mortems and “pre-successes.” Document risks and unrealized benefits. Make sure the loudest voices in the room aren’t just the most skeptical.
How will this land in day-to-day work? Deploying agents isn’t just a technical implementation; it’s a form of org design. You’re introducing a new actor into the system, and that requires the same rigor you'd apply to defining a human role. What’s the agent accountable for? Where does it hand off? What does collaboration look like? Use tools like RACI models or workflow diagrams and adapt them to include agents. When expectations are explicit, teams are far more likely to engage rather than resist. Uncertainty is often scarier than the change itself.
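One lightweight way to make those expectations explicit is to encode the adapted RACI matrix as data, with the agent listed as a first-class actor alongside human roles. A minimal sketch, in which the tasks, role names, and the `support_agent_ai` actor are all hypothetical examples, not a prescribed model:

```python
# RACI matrix extended to include an AI agent as an explicit actor.
# Role codes: R = Responsible, A = Accountable, C = Consulted, I = Informed.
RACI = {
    "Draft order-exception response": {
        "support_agent_ai": "R",   # the agent does the work
        "support_lead":     "A",   # a human stays accountable
        "ops_manager":      "C",
        "customer":         "I",
    },
    "Approve refunds over $500": {
        "support_agent_ai": "C",   # the agent recommends, never decides
        "support_lead":     "R",
        "finance":          "A",
    },
}

def accountable(task: str) -> str:
    """Return the single accountable actor for a task,
    enforcing the RACI rule of exactly one 'A' per task."""
    owners = [actor for actor, role in RACI[task].items() if role == "A"]
    assert len(owners) == 1, f"{task!r} must have exactly one 'A'"
    return owners[0]

print(accountable("Draft order-exception response"))  # support_lead
```

The point isn’t the code itself; it’s that writing the matrix down forces the team to answer, task by task, where the agent’s responsibility ends and a human’s accountability begins.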
How transparent are we being about what’s changing and what’s still uncertain? People don’t need false certainty. They need honesty. Leaders should acknowledge that AI-driven change is complex, unpredictable, and often uncomfortable. That includes naming the fears, recognizing the workload, and admitting when all the answers aren’t there yet. What builds trust isn’t perfection; it’s a clear commitment to communicating openly, often, and early.
At Orium, we don’t treat change management as a separate service. We treat it as a mindset that informs how we build. That mindset starts early: during discovery, during architecture, during delivery. We’re not layering change on top of the system. We’re shaping the system so change can succeed.
We surface role impacts early and define them explicitly, not just for humans but for agents too. We document what’s changing, for whom, and how success will be measured from a people perspective, not just a platform one. And we make space for transparent, ongoing communication. We acknowledge the fear, the friction, and the uncertainty, and we show how feedback is actively shaping the build.
We stay alert for early friction: unclear ownership, quiet pushback, or workflow cracks that get glossed over in planning but surface during execution. These aren’t annoyances to smooth over later; they’re signals. And we treat them that way, adjusting course before small problems become cultural resistance.
Most importantly, we partner directly with leaders as they navigate this change. Sometimes that means offering an outside perspective. Sometimes it’s helping execs make sense of what they’re hearing from their teams. And sometimes, it’s just being a sounding board. Change is hard—even when it’s wanted—and we help leaders carry that load.
AI projects come with big expectations and a degree of uncertainty. At Orium, we help leaders manage both by making change part of how the work gets done, not an afterthought.
AI brings out a lot of feelings: good feelings like enthusiasm and curiosity, but also negative feelings like fear, uncertainty, and even grief. That’s always been part of change management, but with AI, it’s amplified, because the change isn’t limited to a new tool or a single workflow. It’s all around us. People are navigating it in their jobs, in the news, and in their careers. They’re not just reacting to this project, they’re reacting to everything AI represents in their lives.
As AI-supported development makes it faster to build and deploy solutions, the work of transformation shifts even more heavily to people, workflows, and adoption. The tech can move quickly, but it’s the human system that needs the most attention, and the most intention.
Change management in the age of AI isn’t a checklist. And it’s not just a comms plan or a training module. It’s a mindset that has to shape how we scope projects, define roles, make decisions, and measure success. Teams that treat it that way move faster, adopt more fully, and avoid the quiet resistance that stalls transformation.
If you want adoption, build for it. If you want people to embrace agents, help them see where they still belong. If you want real, lasting change, put the people in the system at the center. Leave them out, and the system fails. It’s as simple as that.