AI made me 10x faster. I've never been more behind.
What is the real productivity number?
I run a lot of AI agents.
Reporting agents, research agents, content agents, and data analysis agents — all producing work at a speed that would’ve been unthinkable six months ago. My output has never been higher.
Yet I’ve never had more loose ends.
Projects half-finished. Drafts that need a second pass I haven’t gotten to. Agent output sitting in a queue, unsupervised, slowly going stale. The work gets started at 10x speed. It doesn’t get finished at 10x speed, because finishing still requires me.
Turns out I’m not alone. Workday just published a study of 3,200 employees and found the number nobody in AI marketing wants you to see: for every 10 hours AI saves you, you lose nearly 4 hours fixing the output. Only 14% of workers consistently come out ahead after rework.
That’s a 37% tax on your AI productivity gains. And nobody’s talking about it.
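To make the tax concrete, here’s the arithmetic as a toy calculation (the 37% rate comes from the Workday figure above; the function name and shape are my own, not the study’s methodology):

```python
def net_hours_gained(hours_saved: float, rework_rate: float = 0.37) -> float:
    """Hours actually banked after subtracting the rework tax.

    rework_rate=0.37 reflects losing nearly 4 hours of fixing
    for every 10 hours AI saves you.
    """
    return hours_saved * (1 - rework_rate)

# Save 10 hours with AI, keep roughly 6.3 after rework.
```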
The bottleneck has moved. It used to be production — not enough hours to do the work. Now it’s supervision — not enough hours to *review* the work. AI agents produce at machine speed, but validation still happens at human speed. The result is a growing pile of 80%-done work that compounds until something breaks.
Tom Tunguz, a VC and frequent commentator on these sorts of things, admits he can “barely manage 4 AI agents at once.” Half the output gets thrown away and restarted with better prompts.
AI4SP, a consultancy specializing in enterprise-grade generative AI implementation, hit the same wall at 12 agents per human manager, even with a “supervisory framework.”
This is the new management problem for anyone running a sales or marketing operation. You can spin up agents to generate leads, write sequences, score prospects, reactivate aged databases — and every one of them needs someone watching the output.
The question isn’t “how many agents can I deploy?” It’s “how many agents can I actually supervise without quality collapsing?”
The answer, based on early data, is probably fewer than you think.
THREE LINKS WORTH YOUR TIME
1. The Rise of the Agent Manager — Tom Tunguz — The best piece written on the supervision problem. Tunguz maps out what “managing” an AI agent actually looks like and why span of control theory breaks down.
2. Workday: Companies Are Leaving AI Gains on the Table — The January 2026 study behind the 37% rework stat. Worth reading for the methodology — they surveyed across roles, industries, and company sizes.
3. The Unsupervised Agent Myth — AI4SP — How a consultancy running 60 agents with 5 humans built a supervision framework. Includes the tier system for classifying agent oversight levels.
ONE TACTIC TO TRY THIS WEEK
Classify your agents by supervision tier.
Pull up every AI tool and agent your team uses. Put each one in one of three buckets:
Green (low risk, reversible): Spot-check 10% of output. Meeting summaries, research pulls, internal data work.
Yellow (client-facing or revenue-impacting): Review everything before it ships. Email sequences, blog posts, ad copy.
Red (high-stakes, irreversible): Human approval required. Pricing, contracts, compliance content.
You’ll find a few agents that can run mostly autonomously with a little extra training, and a lot more that need a robust supervisory process before the work counts as finished.
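If you want to operationalize the tiers, here’s a minimal sketch of the triage logic. The tier names and review rates mirror the buckets above; everything else (function names, the sampling approach) is an illustrative assumption, not a prescribed framework:

```python
import random

# Review policy per tier, mirroring the green/yellow/red buckets above.
# Sample rates are assumptions for illustration.
RISK_TIERS = {
    "green":  {"policy": "spot-check 10% of output", "sample_rate": 0.10},
    "yellow": {"policy": "review everything before it ships", "sample_rate": 1.0},
    "red":    {"policy": "human approval required", "sample_rate": 1.0},
}

def needs_human_review(tier: str) -> bool:
    """Decide whether one piece of agent output goes to a human."""
    rate = RISK_TIERS[tier]["sample_rate"]
    return random.random() < rate
```

Yellow and red output always routes to a human; green output gets sampled, so your review queue scales with risk instead of with raw agent count.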
That’s all for this week.
Bill

