Agentic AI: When Software Starts Acting Like an Employee

Agentic AI may absolutely feel like a co-worker in many workflows, but treating it like an employee instead of like infrastructure with agency is how companies create invisible risk, cultural damage, and long-term fragility.

The new year, 2026, has begun with another alluring notion circulating through boardrooms and tech conferences: that artificial intelligence agents will soon operate like full-fledged workers. Not assistants. Not tools. Employees: people who report to work, solve problems, make judgment calls, and deliver completed outcomes without being told how to do it.

It’s a compelling proposition for any CIO facing a backlog of transformation initiatives and limited headcount. But does this represent a viable path forward, or is it simply a vision fuelled by the promises of AI suppliers?

As of early 2026, the truth is that we are midway through the journey. There have been real advances in agentic AI. Today’s agents can write and analyse code, execute multi-step processes, perform QA, and even orchestrate other agents into something resembling a team. Companies such as Capital One and PepsiCo have moved beyond pilots, reporting significant gains from production-grade deployments. Gartner predicts that by 2028, one-third of enterprise software may have agentic capability, and 15% of routine work decisions may be made autonomously.

That sounds impressive, and it is.

But there’s a canyon between an agent that can perform a well-scoped task and an agent that can truly substitute for a human employee: someone who can think, adapt, and take responsibility for outcomes.

In a recent study, the Center for AI Safety ran trials with leading AI agents on 240 real-world freelance jobs. While the agents were able to get started on most tasks, they failed to complete the work at a level that clients found professionally acceptable.

The gap isn’t about raw capability. It’s about context and judgment: understanding what really needs to be done in a messy, real-world business setting. AI can begin the work, but finishing it well, in context, is still a very human skill. As Eliezer Yudkowsky has warned, “The danger of AI is not that it is too intelligent, but that people assume it is.”

This is of paramount importance for CIOs who have to make investment decisions. Over the next three to five years, agentic AI may become the brilliant junior colleague, the one that never sleeps, handles grunt work at inhuman speed, and frees up your best people to think at higher levels. What it will not be is a substitute for the senior architect who spots design flaws before they become obvious, or the project manager who can read the room in a tense stakeholder meeting. Software is increasingly able to act. But acting like an employee requires familiarity with organizational politics, comfort with ambiguity, and sensitivity to the thousands of informal cues that shape real workplace decisions. We’re still years away from that.

This brings us to the elephant in every conference room: the fear that these agents will consume jobs in bulk. The panic is understandable and not entirely baseless.

In 2025, news headlines focused on AI-related layoffs at companies like Salesforce, Amazon, and UPS. Challenger, Gray & Christmas tracked roughly 55,000 AI-associated job losses in the U.S. in a single year. For anyone paying attention, this threat feels personal and immediate.

However, this is where the story needs a reset. What we’ve seen so far in real enterprise deployments suggests that agentic AI is reshaping roles far more than it is eliminating them. A support team might shrink by a third, but the people who remain handle more complex, higher-value cases. At the same time, companies are hiring AI operations specialists and people to supervise and guide these agents. The vision isn’t one of disappearing work; it’s one of shifting work.

Analysis from Deloitte shows that organizations succeeding with agentic AI treat agents as a new kind of workforce that needs to be managed, not just as a blunt instrument for cutting headcount. The CIOs who are getting this right aren’t asking, “How many people can I replace?” They’re asking, “How do I redesign my teams so people and agents each do what they do best?”

A helpful way to look at this is through the lens of history. Every major wave of workplace technology, from spreadsheets to ERP systems to cloud computing, came with predictions of mass unemployment. What actually happened was far less dramatic: jobs changed, new roles emerged that no one had anticipated, and the professionals who adapted the fastest gained the most. The same pattern is playing out with agentic AI, just at a much faster pace. Roles like AI Agent Supervisor, Human-AI Workflow Designer, and Agent Governance Analyst barely existed eighteen months ago. Today, they’re already showing up in job postings.

None of this is to say the transition will be painless. Entry-level roles are under the most pressure, and organizations have a real responsibility to invest in reskilling before they invest in replacing people. CIOs who treat AI adoption as a purely technical initiative may end up with powerful tools and deeply demotivated teams. The other half of the equation is culture, training, and honest communication.

My take is that agentic AI may absolutely feel like a co-worker in many workflows, but treating it like an employee instead of like infrastructure with agency is how companies create invisible risk, cultural damage, and long-term fragility.
