
The workforce conversation around AI keeps circling the same question: Will it take jobs or create them?
The honest answer is both. It already has, and it will continue to. But that framing misses the more interesting shift: What happens to human expertise when execution becomes automated?
Some call it a redistribution of expertise. I see it as an expansion, not just of where expertise lives, but of what it means.
In the past, employees were valuable because they knew the answer—domain knowledge meant knowing how to do the work. But AI is fundamentally changing how we use that expertise.
Now, it increasingly belongs to the person who knows the right question to ask, who recognizes when AI is wrong, and who understands what good actually looks like.
That’s a meaningful, friction-inducing change, and it also opens up real opportunity.
This shift becomes much clearer in practice.
When I first started using AI seriously, the narrative was simple: automate the repetitive. I kept hearing, “If you do something three times, automate the fourth.” I did some of that, and the efficiency gains were real. I had more time for strategic work, which was genuinely useful.
But I didn't feel stronger. I felt like I was automating for the sake of automating.
The real unlock came when I started using it as a thought partner instead of just an automation tool: asking it to challenge my thinking, surface my blind spots, and pressure-test my assumptions. That gave me the push to think bigger.
Most productivity narratives miss this: moving from using AI to collaborating with it is the difference between getting answers and getting better. And that opportunity isn’t limited to a single role or industry. It’s available to anyone who approaches AI that way.
At Orium, we’re seeing designers beginning to simulate users, QA learning to evaluate AI-generated outputs, and entirely new capabilities emerging that we didn't know were possible six months ago. Our teams are collaborating with AI and increasingly directing systems that help produce the work, pushing the boundaries of what’s possible.
After plenty of trial and error, one thing is clear: the people who know how to work with AI effectively are the ones redefining what their roles can become.
But not every organization is giving its people the conditions to get there.
Organizations that deploy AI primarily as a cost-saving tool might achieve compliance, but they’ll rarely reach real transformation. Employees will use AI tools to go through the motions faster, focusing on metrics over judgment and depth, but the work itself doesn’t actually improve. When the bottom line is the only lens, employee experience becomes an afterthought, and that’s where you start to lose people—and the quality of the work along with them.
Instead, organizations that treat AI as a thinking partner and invest in their people will see stronger thinking, better decisions, and employees who feel more capable, not more replaceable.
There’s another risk that’s easy to miss. When AI takes on too much cognitive work, people can stop practicing the skills they once relied on. It’s subtle, but it compounds.
In his book Co-Intelligence: Living and Working with AI, Ethan Mollick warned about this dynamic: if people rely on AI too passively, they may lose the opportunity to develop the judgment and expertise that truly matter.
This is why how AI is introduced and reinforced matters. Not everyone will move at the same pace, and that’s expected. The real test is whether your people are learning to work effectively with AI, because those who do will be the ones designing, improving, and governing what comes next.
But that doesn’t happen without leaders who are actively shaping how their teams work with AI.
You hear a lot about change management when organizations talk about AI adoption. It matters, but it often becomes a blanket term assigned to a people team or a single champion, and then quietly deprioritized when the next initiative comes along. That's not change management—that’s just checking a box.
Real adoption requires active, visible participation from managers and leaders. And it starts with answering the question your employees are already asking, even if they're not saying it out loud: What's in it for me?
That question deserves a real answer.
I didn’t really grasp the impact of AI in my role until I saw how it could not only make me more efficient but also make me better at the parts of my job that matter most.
You’re not just asking people to learn a new tool, you're asking them to take on higher-order work that likely pushes them outside their comfort zones, all while the tools themselves are changing every few months (heck, every few days). That's a significant ask, and it deserves more than just a town hall or a lunch-and-learn.
In practice, this means leaders working with their teams to define what judgment means in their role. Not just what’s changing, but why their thinking matters more.
It’s managers having honest conversations about what the next year might look like and creating enough psychological safety for people to say “this isn’t working for me yet” without it feeling like a performance issue. It’s also organizations rewarding the quality of judgment, not just the speed of output. Because if you only measure speed, you’ll get people who move faster and call it transformation, but nothing will fundamentally have changed.
Leaders also have to teach people how to work with AI properly. That means knowing how to direct AI systems, spot when they’re wrong, challenge their outputs, and still weave your own thinking and uniqueness into the work.
The biggest risk isn’t that employees won’t adopt AI—most will—it’s that they’ll adopt it poorly, and quietly lose the critical-thinking muscles they think they no longer need.
The unlock lies in how people adopt it. Organizations that get this right won’t just use AI better. They’ll have teams that actually trust them, and that’s what separates transformation from compliance. Not the promise that nothing will change, but the assurance that the change is something you're navigating together.
When expansion becomes the norm and people routinely think bigger with fewer cognitive constraints, the structure of work itself starts to look different.
Because if the value of work is shifting from execution to judgment, it’s worth asking whether the systems designed to optimize output still make sense. Even long-standing norms, like the 40-hour work week, may be less foundational than we assume.
The organizations willing to rethink work at that level, not just the tools inside it, will be the ones that successfully figure out what comes next.
This is part three in a five-part series on the confrontations agentic AI forces organizations to face. Read the rest here: Part 1, Part 2, Part 4, and Part 5.