AI Is Coming for the Middle
The barbell economy of everything has arrived, including the self.
👋🏼 Thanks to the wonderful folks at APIT for last week. What a great event. And to all those at Cannes Film Festival next weekend, say hi.
At this point, I’m averaging 20 to 25 AI keynotes a year, split between public stages and private rooms. The job teaches you something quickly: the panic is the same in both places, but only one version is allowed to tell the truth.
In public, the brief almost always arrives with the same emotional instruction attached: be positive and do not scare the room.
In private, the brief is far more direct: help us stay alive, help us stay relevant, and tell us which parts of the model, the team, the margin, the career path, the audience relationship or the creative process are already more fragile than we can say publicly.
I understand both instincts. Doom is boring and panic is useless. Nobody needs another man in a quarter-zip explaining that civilisation ended because a chatbot can write a passable press release (I would never wear a quarter-zip!). But the gap between the public brief and the private fear is now the whole story. Positivity has become a tax on honesty.
This is a problem, because optimism that cannot admit destruction is just a sedative for people standing in the blast radius. And the first thing AI will destroy is not work itself, but the premium we used to place on sounding competent.
Things will be destroyed.
That is not a failure of the transition; it is the transition. Hear me out.
Not creativity. Not work. Not human meaning. Not the whole dramatic menu of bullshit stage apocalypse topics. But some jobs, workflows, companies, habits, hierarchies, business models, creative shortcuts and whole categories of professional surface competence will be destroyed. Some already have been. Ask anyone in leadership today and they’ll tell you the same thing: job eliminations don’t look like casualties at first glance. The jobs disappear quietly: unfilled roles, no automatic back-fills, cancelled freelance and contractor budgets, hiring freezes, smaller teams, eliminated entry-level positions.
This is a common thing people try to soften. “AI will not replace jobs, it will replace tasks”. Fine. That is a useful sentence until enough of the tasks make up the economic centre of the job. At that point, the distinction becomes a morale device.
The truth is simpler: if your value sits mainly at the surface, AI is coming for you first.
Surface fluency has had a very good run (and kept so many wonderfully entitled and privileged people at the top of the ladder). The ability to sound right. To make the deck look grown-up. To summarise without understanding. To write the thing that says all the expected things in the expected order. To play the line. To keep the process moving. To produce the acceptable version. To participate in strategy-shaped activity without ever touching the dangerous territory of a point of view.
Turns out, AI is brutally good at surface.
It can produce competent average at a scale human mediocrity could only dream of. More emails. More decks. More posts. More briefs. More scripts. More plans. More logos. More strategies with no actual substance. More essays that understand structure but don’t actually say anything. More songs, images, presentations, campaigns and business ideas that look finished, sound plausible and die immediately on contact with reality.
This is the paradox: AI industrialises mediocrity while destroying its value.
That is why the usual productivity frame is too small. Productivity asks how quickly we can do what we already do. It turns AI into a calendar-saving tool, a meeting-note servant, a slightly creepy intern with no sleep schedule. Useful, yes. Sufficient, no.
The bigger shift is not productivity, but the barbell economy of everything… including you and me.
At one end: cheap competence, fluent slop, automated sameness, the infinite mid. At the other: taste, judgement, talent, domain depth, courage, originality, refusal, authority, lived experience, actual point of view. The middle gets crushed because the middle’s premium always rested on being difficult to produce.
Once “pretty good” becomes instant, pretty good becomes worthless. Work will move, not disappear. Producing the thing gets cheaper. Knowing whether the thing deserves to exist gets more expensive. The first draft arrives instantly, which is precisely why judgement becomes more valuable. You still have to know whether it is true, useful, original, defensible, legal, safe or worth another human being’s attention. AI makes the appearance of completion cheap, but it does not make completion itself cheap.
The AI divide is no longer about who has tried the tools (and my god, please stop talking about how to train people on prompting). Enough people have tried the tools that trial itself has become a useless proxy. Half the people who say they use AI still treat it like a haunted search bar with copywriting ambitions. The real divide is agency. Some people are using AI to complete tasks faster. Others are using it to become capable of things they could not previously attempt. That difference compounds.
The person leaning in now is not merely saving time. They are learning how to direct ambiguity, how to decompose problems, how to test ideas, how to build prototypes, how to interrogate output, how to recognise quality, how to reject something that sounds intelligent enough but is, on review, hollow. They are building taste under acceleration.
The person hovering at the surface is also compounding, just in the other direction. They are learning to outsource voice, outsource judgement, outsource effort, outsource uncertainty, outsource the small humiliating friction through which a person becomes less generic. (Has anyone else noticed how obvious this is becoming lately? Because it’s really, really obvious.)
Argument #302 I am tired of: AI makes people generic. It does not. It gives generic people industrial capacity.
That is also something nobody ever wants me to say in one of my very positive public keynotes. The danger is not that AI replaces all human creativity (what a boring, stupid argument). AI’s danger is that it allows people without taste, talent or point of view to produce at a volume that used to require institutions. It gives the surface class a factory. The surface class is a posture: people and organisations that learned to optimise for the appearance of value rather than the substance of it.
For a while, this will look like progress. More output. Faster turnaround. Higher volume. Lower cost. Someone will say “content velocity” in a meeting and everyone will think they’re doing a great job.
Then the market will adapt. Audiences will become more allergic. Clients will become less impressed. Employers will stop paying a premium for competence that can be summoned in seconds. The culture will grow suspicious of anything that arrives too smoothly. Taste will become more valuable because fluency will become cheap.
Culture will win
This is also why I am more optimistic about culture than the current panic allows. Every format is terrified of the thing below it. Film is scared of TikTok. Television was scared of YouTube. Music was scared of files, then streams, then fifteen seconds of a song becoming more powerful than the song. The fear is always that the new thing will flatten the old thing into speed, cheapness and appetite. Sometimes it does, for a while. The worst version of the new medium usually arrives first.
But slop is not the end state. Slop is the noisy birth defect of a medium before artists learn what to do with it. The good work will not rise because the algorithm becomes kind. It will rise because audiences get more allergic, artists get sharper, and the medium stops showing off and starts becoming language. The true voices will still matter. Possibly more than before. Not because AI will protect them, but because a culture drowning in synthetic competence will become desperate for anything authored and specific.
Blind positivity does not prepare people
And this leads me to my point (well, another one). The “keep it positive” brief, in this way, becomes actively cruel. Blind positivity does not prepare people. It simply flatters them into delay.
Yes, new jobs will be created. They always are. But new jobs do not arrive as consolation prizes for people who waited politely by the wreckage of the old ones. They are created around new behaviours, new tools, new forms of judgement, new risks and new kinds of leverage. By definition, the new jobs this AI economy will create will favour the people who have already been leaning in.
That is how technological transitions work. The future does not distribute itself evenly to those who were most reassured by the past.
This does not mean everyone has to become a coder (just ask all the comp-sci majors looking for entry-level work). That is another lazy anxiety. Code is only one expression of the shift. The more important skill is direction. Knowing what to ask. Knowing what matters. Knowing what good looks like. Knowing when the machine is bluffing. Knowing when the output is merely fluent. Knowing when something has a pulse. Taste, here, does not mean knowing which hotel lobby has the correct chair (even though I think about this often). It means being able to recognise value before consensus, quality before polish and deadness before the Board does.
The machine can forgive your lack of syntax. It is much less forgiving of your lack of taste. Work will split around that fact. So will culture. So will the self.
At the bottom of the barbell, people will become smoother, faster and emptier. They will produce more than ever while saying less than ever. They will mistake output for identity. They will confuse polish with talent.
At the top, people will become more specific. More dangerous. More authored. They will use AI to test obsessions, build private systems, accelerate learning, sharpen craft and make stranger things with less permission. They will not use the machine to avoid having a point of view. They will use it to put pressure on the one they already have.
This is the actual optimistic case: Not that everything survives. Not that everyone is fine. Not that the transition is gentle. The optimistic case is that agency is more available than before, but it has to be taken seriously.
The positive story is not “no one loses.”
The positive story is that people can move before they are moved.
So no, I do not think we should be blindly positive about AI. We should be soberly optimistic, which is much harder and much less popular. We should tell people that things will be destroyed. We should tell them that surface competence is no longer a safe career strategy. We should tell them that taste matters more now, not less. We should tell them that leaning in is not a lifestyle posture. It is how you avoid becoming part of the automated middle.
AI is coming for mediocrity from both directions. It will flood the world with it, then make it impossible to charge much for it. This is, arguably, the best news in the whole mess.
The brief should never have been “keep it positive.” The brief is: tell the truth while there is still time for it to be useful.

