A landing page in 10 minutes. 30 social posts in seconds. 50 headline variations before lunch.

These aren't projections. They're benchmarks from last Tuesday.

The uncomfortable math is already here: AI can generate 90% of what entry-level creatives do, faster and cheaper than any human ever could. The question isn't whether this changes the marketing agency—it's whether most people in the industry are willing to see clearly what it changes into.

Two narratives dominate the conversation. The first: AI will kill the agency. Automate the creatives, collapse the margins, race to zero. The second: AI is just a tool. Humans are still essential. Creativity can't be replaced.

Both are wrong. Both reveal more about the speaker's anxiety than about what's actually happening.

The tech determinists want a clean story—disruption, displacement, new world order. The human exceptionalists want reassurance—we're special, we're safe, the robots can't do what we do. Neither is grappling with the structural shift underneath.

Here's what's actually occurring: the locus of value creation is migrating. Not from humans to machines. From execution to judgment. From outputs to outcomes.

This distinction sounds semantic. It isn't. It's the entire game.

The Output Collapse

Let's name what's happening precisely, because precision enables thinking.

Output collapse: the phenomenon where AI compresses execution time so dramatically that the output itself loses economic value. What used to take days takes minutes. What used to require a team requires a prompt.

This has already happened to:

  • First-draft copy (any format, any length)

  • Design variations and iterations

  • Research synthesis

  • Strategy documents

  • Creative briefs

  • Social content calendars

  • Landing page builds

  • Email sequences

Notice the pattern. These are all artifacts—things you can point to, deliver, check off. They're outputs.

Outputs are now cheap. They're approaching free. And when something approaches free, it stops being a source of competitive advantage.

The agencies celebrating "AI-powered efficiency" are optimizing the wrong variable. They're getting faster at producing things that no longer differentiate. It's like celebrating a faster horse in 1910.

The Category Error

Here's where most analysis stops: "AI can do X, but humans still do Y."

This framing misses the deeper issue. It's not about capability gaps that will eventually close. It's about a category error in what AI operates on.

AI generates artifacts. Copy, designs, strategies, frameworks—these are objects in the world. They exist. You can look at them.

Outcomes are different. An outcome is a result—something that happens because of the artifact, in the world, over time. Did the landing page convert? Did the campaign shift perception? Did the funnel actually produce customers?

AI can generate a strategy. It cannot know if the strategy works.

This isn't a temporary limitation. It's structural. Knowing if something works requires feedback loops that extend beyond the generation moment. It requires understanding of context that isn't in the training data. It requires judgment about fit—does this approach match this market, this moment, this buyer's psychology?

The counterargument writes itself: AI will get better. It will incorporate feedback. It will learn context.

Perhaps. But consider what "better" means here. Better at generating artifacts that statistically correlate with past successes. Better at pattern-matching to what worked before.

Pattern-matching to yesterday's success is precisely the wrong move when conditions shift. The human who understands why something worked can adapt. The AI that pattern-matched to what worked will keep optimizing for a world that no longer exists.

This is why the output/outcome distinction isn't a temporary human advantage. It's a permanent category difference in what's being optimized.

Neither X Nor Opposite-of-X

When people realize "AI will replace humans" is too simple, they naturally conclude the opposite must be true—"humans are irreplaceable."

This is a very human tendency. When X doesn't work, we assume opposite-of-X does work. It doesn't.

The reality is messier: some humans become dramatically more valuable. Most become redundant. The question is which, and why.

The typical agency structure allocates roughly 10% of labor to strategy and judgment, 90% to execution. Senior people think, junior people do. This ratio made sense when execution was the bottleneck.

AI inverts this. Execution is no longer the bottleneck. A single person with the right AI workflow can generate what used to require a team of five.

So the new model becomes: 90% AI execution, 10% human judgment.

But here's what the efficiency narrative misses: that 10% is harder, not easier.

The 10% that remains is:

  • Refinement: knowing what to adjust and why, when the AI output is 80% right but somehow wrong

  • Override judgment: when to trust the AI and when to contradict it

  • The "this won't land" instinct: pattern recognition that operates below conscious articulation

  • Outcome accountability: not just generating the work, but guaranteeing the work drives results

These skills are rare. They were always rare. But the old model didn't require them to be concentrated—they could be distributed across a team, averaged out, compensated for by sheer execution volume.

The new model has no room for averaging. If your 10% is weak, your 90% of AI output is worthless. Faster mediocrity at scale.

What the 10% Actually Requires

Let's be specific about what "judgment" means, because vague appeals to human intuition are exactly the cope the human exceptionalists want to hide behind.

Brand resonance: AI can generate on-brand content. It cannot feel when something is technically on-brand but spiritually wrong. This requires having internalized the brand deeply enough that violations register as dissonance, not as checkbox failures.

Buyer psychology read: AI can model the ideal customer profile. It cannot know why this buyer isn't converting today. That requires synthesis of market conditions, competitive moves, timing, and dozens of signals that don't appear in any dataset.

Strategic validity: AI can produce a strategy that looks correct. But strategies don't fail because they're logically flawed—they fail because they assume conditions that don't hold. Knowing which assumptions are load-bearing requires experience in the specific domain, with the specific failure modes.

Taste: The most important and least definable. Taste is the ability to recognize quality before you can articulate why it's quality. AI can optimize for metrics. It cannot have taste. Taste emerges from accumulated exposure to what works and what doesn't, filtered through a sensibility that has been honed over years.

The common thread: all of these operate on judgment, not generation. They're about evaluating, refining, selecting, overriding. They're curatorial, not creative in the execution sense.

And here's the uncomfortable part: these skills are not evenly distributed.

The Segmentation Nobody Wants to Discuss

Most people working in agencies today were hired to execute. To write the copy, design the assets, build the pages, manage the campaigns. They got good at doing the work.

The AI transition doesn't need people who can do the work. It needs people who can judge the work. These are different skills. Often, different people.

The person who could write decent copy but couldn't distinguish great from good? Redundant.

The person who could design competently but had no instinct for what would actually convert? Redundant.

The project manager who kept things on track but added no strategic value? Redundant.

Meanwhile:

The creative director who couldn't execute to save their life but had killer taste? More valuable than ever.

The strategist who thought in outcomes rather than deliverables? More valuable than ever.

The account lead who actually understood the client's business, not just their briefs? More valuable than ever.

This isn't a story about "humans vs. AI." It's a story about which humans, doing what kind of work, remain valuable in an output-abundant world.

The 90/10 model doesn't save existing jobs. It creates different jobs that require different people. And the transition will not be kind to those on the wrong side of the segmentation.

The Exceptional vs. Average Problem

There's a pattern in how advice spreads through industries. The exceptional performers share what works for them. Everyone else adopts it. It fails.

Why? Because what works for exceptional performers often works because they're exceptional. Their judgment is good enough that they can "just build" or "just ship" or "just trust their gut"—and be right most of the time. Average performers following the same approach get average results at best, disaster at worst.

This applies directly to the AI-agency transition.

The exceptional performers—the ones with genuine taste, real strategic instinct, deep domain knowledge—will thrive in the 10% model. They were always providing most of the judgment anyway. Now they can leverage AI to amplify their impact dramatically.

But the advice that emerges from their success ("just use AI to handle the busy work and focus on high-value thinking") will be systematically misleading for average performers. Because average performers can't just focus on high-value thinking. They don't have the judgment that makes the thinking high-value.

This isn't elitism. It's honest acknowledgment that the distribution of these skills is uneven, and the new model makes that unevenness matter more than it ever has.

The uncomfortable question every agency needs to ask: how many of our people are actually equipped for the 10%? Not "could they learn it eventually" but "can they do it now?" Because the transition isn't waiting.

Faster Mediocrity Is Not a Moat

If every agency adopts AI, everyone gets the speed gains. Speed becomes table stakes, not advantage.

So where does competitive advantage migrate?

Outcome guarantees: The agencies that win won't sell deliverables. They'll sell results. Not "we'll build your funnel" but "we'll guarantee pipeline." This requires confidence in judgment that most agencies don't have—because they've never had to stand behind outcomes, only outputs.

Judgment quality: When outputs are commoditized, the differentiator is the quality of human judgment applied. "Our humans have better taste than their humans" sounds absurd until you realize it's the only remaining variable. The AI is available to everyone. The judgment isn't.

Orchestration sophistication: Not just using AI, but building workflows that compress the right things while preserving human touchpoints on the right decisions. This is systems thinking applied to the human-AI interface. Most agencies will bolt AI onto existing processes. The winners will redesign the entire operating model around the 90/10 inversion.

The agencies racing to advertise their AI efficiency are telling on themselves. They're competing on the commodity layer. They're announcing that they have nothing to differentiate on except speed—which means they have nothing to differentiate on at all.

The Work That Remains

Let me be concrete about what "agentic workflows with human orchestration" looks like in practice:

Research phase: AI agents synthesize competitor positioning, market data, customer reviews, industry trends. Human reviews for what's missing, what's misweighted, what pattern the AI couldn't see.

Strategy phase: AI generates multiple strategic options with rationale. Human selects, combines, modifies based on knowledge of client context, market timing, and things that aren't in any dataset.

Creative phase: AI produces variations—copy, design, formats. Human curates with taste, identifies what will actually land vs. what merely checks boxes.

QA phase: AI handles consistency, compliance, technical checks. Human makes the final call on whether it feels right.

Optimization phase: AI runs tests, surfaces patterns. Human interprets whether the winning variant won for the right reasons or is a local maximum that won't scale.

At every stage, the human isn't doing the work. The human is ensuring the work drives results.

This is fundamentally different from "using AI as a tool." Tools assist execution. This model has AI owning execution while humans own judgment. The relationship is orchestration, not assistance.
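The orchestration model above can be sketched in code. This is a hypothetical illustration, not a real implementation: the `ai_generate_variants` and `human_select_best` functions are stand-ins for actual model calls and actual human review, and the stage names simply mirror the phases described above. The point is the shape of the loop: AI owns generation at every stage, while a human gate owns selection.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of the 90/10 orchestration loop. Each stage pairs
# an AI execution step (generate candidates) with a human judgment gate
# (select, refine, or override). Both functions here are placeholders.

@dataclass
class Stage:
    name: str
    execute: Callable[[str], list[str]]  # AI: input -> candidate outputs
    judge: Callable[[list[str]], str]    # human: candidates -> chosen output

def ai_generate_variants(brief: str) -> list[str]:
    # Placeholder for an AI call producing cheap, fast variants.
    return [f"{brief} :: variant {i}" for i in range(3)]

def human_select_best(candidates: list[str]) -> str:
    # Placeholder for human curation: taste, context, override judgment.
    return candidates[0]

def run_pipeline(brief: str, stages: list[Stage]) -> str:
    artifact = brief
    for stage in stages:
        candidates = stage.execute(artifact)  # AI owns execution (the 90%)
        artifact = stage.judge(candidates)    # human owns judgment (the 10%)
    return artifact

stages = [Stage(name, ai_generate_variants, human_select_best)
          for name in ("research", "strategy", "creative", "qa", "optimization")]
result = run_pipeline("client brief", stages)
print(result)
```

Note what the structure encodes: the human never appears inside `execute`. The judgment gate is a separate, mandatory step at every stage, which is the difference between orchestration and "using AI as a tool."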

The Question Underneath

Most of what's written about AI and agencies focuses on implementation. How to use the tools. Which workflows to adopt. What skills to develop.

These are second-order questions. They assume the first-order question has been answered.

The first-order question is: what is your agency actually selling?

If the answer is deliverables—artifacts, outputs, things—then AI is an existential threat. Because deliverables are approaching free, and competing on a commodity is a race to bankruptcy.

If the answer is outcomes—results in the world, business impact, guaranteed performance—then AI is leverage. Because outcomes require judgment that can't be automated, and the agencies that can credibly promise outcomes will command premium positioning indefinitely.

But "we sell outcomes" can't be a rebrand. It has to be a restructuring. Different people, different skills, different pricing, different accountability. Most agencies aren't willing to make that transition. They'll bolt AI onto their existing model, get the speed gains, compete on price, and wonder why margins keep compressing.

Where This Leaves Us

The agencies that win the next decade will be lean teams orchestrating AI—not doing the work, but ensuring the work drives results.

They'll be built around humans with genuine judgment, taste, and outcome-focus. Not humans who can execute, but humans who can evaluate, refine, and guarantee.

They'll compete on a layer that AI cannot touch: the credible promise that the work will actually work.

Everyone else will compete on speed, which means competing on nothing at all.

The question isn't whether you're implementing AI. Everyone will implement AI. The question is whether you're building for outputs or outcomes—and whether you're honest about which of your people can actually operate in a world where outcomes are all that matter.

Most aren't. Most will discover this too late.

The transition is already underway. The window for intentional repositioning is closing.

What are you actually selling? And can you defend that answer when the output floor drops to zero?
