Andy Cabasso's story is easy to misread.

He built 37 AI agents, gave some of them memorable personalities (one calls him "Captain" aboard the USS Enterprise; another runs on the persona of FBI hostage negotiator Chris Voss), and turned them loose on pieces of his day.

The real story isn't the count of agents. It's what building them did to Andy's job.

That distinction matters. Plenty of professionals are experimenting with AI. Far fewer are redesigning their work around it in a way that holds up across months.

Cabasso's setup makes one emerging type of knowledge worker unusually visible: the operator who is no longer just solving problems, but actively designing, delegating, and managing work across a portfolio of AI agents.

My work has fundamentally changed and is continuing to change. A year ago, I'd spend a good amount of time pulling reports or building tables, and now I have agents doing all of that.

— Andy Cabasso, Growth Operations Manager at ClickUp

Operators like Cabasso aren't just automating tasks. They're managing machine labor inside a normal job.

For leaders, the question is no longer whether talented employees will use AI to move faster. They already are.

The question is what happens to management, culture, and performance when more of your best operators begin to work this way.

Work Starts to Change When Repeated Tasks Get Reassigned

A strong operator used to prove their value through range and memory: carrying context across meetings, chasing follow-ups, stitching together messy information, holding loose ends nobody else had time to track.

Much of that work happened quietly and rarely showed up in a clean metric.

When agents enter the picture, that repeated work can be pulled apart and reassigned. Scheduling moves to one system. Transcript review moves to another. Early pattern-spotting can happen before the human steps in.

As repeated tasks split into specialized lanes, operators like Cabasso shift from doing the work manually to directing how it moves.

"I spend most of my day orchestrating agents, reviewing their outputs, and giving further directions and when needed, improving or iterating on prompts to help them get better outputs,” says Cabasso.

What stays with the operator is the judgment that holds the whole thing together.

The role becomes less about manually doing every step and more about deciding what belongs where, what needs review, what can move quickly, and where a human still has to make the call.

Most companies are only starting to name that skill.

The Important Part Has Little to Do With 37 Agents

It's easy to get stuck on the number of agents Cabasso has built. It's also a distraction if you stop there.

The more useful question is this: what kind of employee builds a system like this in the first place?

And what does that tell us about where work is headed?

As this new class of operators turns scattered tasks into a directed system of agents, their roles start to look more like those of team managers.

Cabasso treats his workflow as a set of roles. Once work is seen that way, the person's role starts to shift. They aren't simply executing. They are scoping, checking, correcting, and improving a system.

That starts to look a lot like management. The hard questions follow quickly.

Which functions can run with light review? Which need human sign-off every time? Which parts of the system hold up under pressure?

Those are management questions. They are just showing up inside roles that still look like ordinary operator jobs from the outside.

That is why Cabasso is worth studying. He gives leaders a clear view of a role that is already changing before the org chart has caught up.

Human-to-Agent Chat Data Shows a Bigger Shift

Cabasso's case would be interesting on its own. It matters more when you place it next to ClickUp's own internal communication data.

A dashboard comparing human-only messages with agent-involved messages shows a clear rise in agent participation over time.

  • In mid-April 2025, agent-involved messages were effectively at zero. The first non-zero week was May 12, 2025, at 1.6%.

  • By the end of Q4 2025, agent-involved messaging had averaged 21% across the quarter.

  • Through Q1 2026, it averaged 31%. The four-week trailing average now sits at 36%.

Agent-involved chat messages have risen sharply since May 2025, while human-only messages have declined.

Source: ClickUp Workspace Chat Data 2025-2026

By the week of April 20, 2026, agent-involved messages hit 104,600 against 177,426 human-only, with agents in 37% of conversations and the highest single-week agent volume in the dataset.

The point is that the agent layer is no longer marginal. Cabasso's workflow reads like an early, legible version of a broader behavioral change.

Once that becomes normal, companies have to think differently about output, status, accountability, and role design.

One caveat for anyone reading the chart closely: the December 22 outlier of 38.9% is a holiday-week artifact, not a breakthrough number. Total messages collapsed to about a quarter of normal because most of the company was offline. The cleaner read is the steady climb through Q1 and into April.
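For teams that want to run the same read on their own workspace data, the arithmetic behind these figures is simple: the agent-involved share is agent-involved messages divided by total messages, and the trailing average smooths that share over the last four weeks. A minimal sketch follows; the field names are hypothetical and every weekly figure except the April 20 week is an illustrative placeholder, not ClickUp data.

```python
# Hypothetical sketch: computing the agent-involved share and a
# 4-week trailing average from weekly chat message counts.
# All rows except 2026-04-20 are illustrative placeholders.

weekly_counts = [
    # (week_start, human_only_messages, agent_involved_messages)
    ("2026-03-30", 180_000, 95_000),   # placeholder
    ("2026-04-06", 175_000, 98_000),   # placeholder
    ("2026-04-13", 172_000, 101_000),  # placeholder
    ("2026-04-20", 177_426, 104_600),  # figures cited in the article
]

def agent_share(human: int, agent: int) -> float:
    """Share of all messages that involved at least one agent."""
    return agent / (human + agent)

# Per-week share
for week, human, agent in weekly_counts:
    print(f"{week}: {agent_share(human, agent):.1%}")

# Four-week trailing average of the weekly shares
last_four = weekly_counts[-4:]
trailing = sum(agent_share(h, a) for _, h, a in last_four) / len(last_four)
print(f"4-week trailing average: {trailing:.1%}")
```

On the placeholder inputs, the April 20 week works out to roughly 37%, matching the share cited above; with real weekly exports, the same two lines of arithmetic produce the quarterly and trailing figures in the chart.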

Trust Is Where This Gets Harder

Anyone pushing meaningful work through an agent layer needs a trust model, whether they call it that or not. They need to know where context gets flattened, where errors hide, and which categories of work need closer human review every time.

Cabasso has worked some of this out for himself. He runs reviews. He retires agents that drift. He has rules about which agents get personalities and which don't.

"I'm not going to give any analytics-focused agents any personalities where they could possibly go rogue," says Cabasso.

Once machine output enters the workflow, trust stops being abstract and becomes a daily question of review, judgment, and risk.

The cultural consequences extend beyond accuracy. When more work is drafted, routed, summarized, or shaped by agents, norms around authorship and effort start to move.

Once some workers become visibly more amplified than others, managers have to deal with the human consequences. Questions about competence, fairness, and recognition show up right away.

What Leaders Should Learn From This

The wrong lesson is to tell everyone to build more agents.

That usually creates a mess: more output, less clarity, more variation in quality, more invisible risk. The right lesson is to study the discipline underneath the behavior.

Cabasso's setup works because the right tasks moved off his desk. The best early candidates for delegation are recurring tasks that demand attention without requiring fresh judgment every time: scheduling, recaps, follow-up drafting. These are also tasks where review is fast.

Cabasso can scan an agent's output and either ship it or send it back without redoing the work himself.

Judgment stays with the human. The strongest operator can name precisely what the system should handle and where a person still has to make the call. Removing yourself from the process is not the goal. Review is part of the design, not cleanup after the fact.

Most companies still reward effort that looks visibly labor-intensive. That instinct is going to age badly. When some employees produce significantly more value because they have built better systems around themselves, the legacy reward signal misfires.

If the entire setup depends on one unusually motivated person, the company has an interesting outlier. It does not yet have a repeatable model.

What To Do This Quarter

Three concrete moves for executive teams reading this.

Find your power users. Every company that has rolled out AI agents has people quietly doing some version of what Cabasso is doing. Find them. Ask them to walk leadership through what they've built and how they use it day to day.

The exercise will tell you more about your real operational leverage than any productivity dashboard you currently look at.

Study the discipline underneath. Don't just count agents. Map where these operators delegated, where they kept human judgment, how they built review into the workflow, and what they retired when something stopped working.

The pattern is what other employees can learn from. Counting agents doesn't help anyone replicate what made the setup work.

Codify the playbook. Turn one person's idiosyncratic system into something the company can teach, copy, and improve on. Write down the rules someone like Cabasso has worked out by hand: which work goes to agents, which stays with humans, where review sits. That documentation is how an outlier becomes a capability.

Cabasso put it best himself:

It's a new kind of job, and I don't think my role is disappearing. It's just shape-shifting.

The shape of operator work has already started to change. The shape of how leadership sees that work needs to change with it.

Keep this going

Forward this to a leader navigating the same shift. That's how Work After AI grows.

Subscribe if someone sent this your way. workafter.ai/subscribe

Work After AI is a media outlet partnered with ClickUp, reporting on how AI is reshaping work, teams, and organizational performance. 1–2 pieces a month.

— The Work After AI team
