The Consulting Pyramid Is Breaking, and McKinsey Just Admitted It

The head of the world’s most influential consulting firm has now said the quiet part out loud. McKinsey still sells judgment, transformation, and institutional authority. But it no longer relies on human labour alone to produce them. Alongside its consultants now sit vast numbers of artificial intelligence agents: software systems capable of autonomously researching, modelling, drafting, and synthesising work that once defined the junior professional class.

What this essay is about: how artificial intelligence agents are dismantling the traditional consulting pyramid, hollowing out apprenticeship systems, and forcing a new settlement around liability and institutional authority. It explains why consulting firms will survive automation while many adjacent institutions will not, and why the real risk is not job loss but the erosion of training, judgment, and accountability. This matters now because modern societies rely on professional institutions to make complex decisions legible, defensible, and governable. When work scales faster than responsibility, those institutions either adapt or fracture.

For more than a century, elite professional firms have been built around a simple structure: a wide base of junior labour, a narrowing funnel of experience, and a small apex where judgment, liability, and authority reside. Consulting firms perfected this model. Juniors did the first and second passes. Managers refined. Partners decided and signed.

That structure is now collapsing.

What is breaking is not demand for advice, but the economic logic that required armies of junior professionals to produce it. When analysis, benchmarking, modelling, and synthesis can be generated instantly and repeatedly by artificial intelligence agents, the traditional ladder of progression ceases to function. The cost structure implodes. The training pathway follows.

This is not a cyclical downturn or a technological upgrade. It is a structural rupture in how professional judgment is produced, transmitted, and legitimised.

The end of the consulting pyramid

For half a century, the consulting industry was built on an industrial logic. Recruit large numbers of bright graduates, sell their hours at a premium, and let a narrow partner class capture the margin. Clients believed they were buying insight. In practice, they were buying organised labour, industrialised analysis, and political permission to act.

McKinsey’s public acknowledgement that it now operates with tens of thousands of artificial intelligence agents alongside human staff is not a marginal efficiency gain. It is a structural break. When first-pass analysis, benchmarking, modelling, and synthesis are performed by artificial intelligence agents at near-zero marginal cost, the economic justification for a large junior layer disappears.

What remains is a thinner organisation focused on client access, framing, political judgment, and accountability. The pyramid inverts. Fewer people do more, supported by swarms of artificial intelligence agents that never sleep, never forget, and never bill overtime.

This is not the end of consulting. It is the end of consulting as a mass apprenticeship system.

The apprenticeship crisis and the hidden training debt

The quietest risk in this transition is not job loss. It is training debt.

Professional services were never just labour markets. They were skill-transfer machines. Junior consultants learned by doing real work on real clients, under pressure, with consequences. That experiential learning produced the next generation of senior judgment.

Artificial intelligence agents now perform much of that formative work faster and better than humans ever could. The result is a paradox: firms become more productive in the short term while hollowing out their future expertise.

If junior staff no longer conduct first passes, build models, or draft core analyses, how do they learn to judge quality, risk, and context later in their careers?

Firms are already experimenting with synthetic substitutes: simulated cases, replayed crises, historical counterfactuals. These are, in effect, AI flight simulators for professional judgment. They are useful. They are also imperfect. Simulation teaches pattern recognition, not responsibility.

The industry is quietly borrowing competence from the future to fund efficiency today. That is not a moral argument. It is a structural one. Training debt, like financial debt, compounds invisibly until it surfaces as fragility.

Why consulting will survive artificial intelligence agents

Despite repeated predictions of its demise, consulting will not disappear. It will contract, concentrate, and harden.

The reason is simple: consulting sells judgment under liability.

Artificial intelligence agents can generate strategies, scenarios, and recommendations. They cannot be cross-examined. They cannot be sanctioned. They cannot lose professional standing. When decisions carry regulatory, financial, or political consequences, clients still require a human institution to stand behind them.

Consulting firms will increasingly resemble underwriting houses rather than labour brokers. Their value will lie less in analysis and more in framing, assurance, and reputational insulation. Clients are not buying intelligence. They are buying someone who can be blamed.

That is why firms that grasp this shift early will survive, while those that cling to labour-intensive billing models will not.

The liability gap

The next fault line is legal.

Professional indemnity frameworks assume human authorship and human control. A strategy produced by thousands of artificial intelligence agents complicates that assumption. If a recommendation fails catastrophically, responsibility diffuses among the firm, its partners, its systems, and the artificial intelligence agents embedded within them.

This is not an abstract concern. It is already reshaping insurance, governance, and internal controls. Firms will respond not by abandoning artificial intelligence agents, but by reorganising risk: tighter sign-off structures, clearer human attribution, and expanded professional indemnity coverage designed to absorb machine-generated error.

The firms that survive will be those that understand liability as their core product, not an inconvenient afterthought.

The institutions that will not survive

Institutions built on volume processing, procedural labour, and internal opacity are the most exposed. Where artificial intelligence agents can outperform humans and no meaningful liability attaches, the institutional premium collapses.

What disappears is not expertise but institutional slack. Layers that existed to move information rather than decide will thin rapidly. Organisations that confuse headcount with authority will discover the difference too late.

Rebuilding institutions after automation

The future institutional landscape will be smaller, sharper, and more explicit about where judgment resides.

Artificial intelligence agents will do the work. Humans will do the deciding. Institutions will exist to assign responsibility when things go wrong.

This is not a post-work world. It is a post-apprenticeship one. The challenge for professional society is not whether artificial intelligence agents will replace labour; they are already doing so. The challenge is whether we can rebuild institutions that transmit judgment, absorb liability, and train future decision-makers in a world where the work itself has vanished.

That question remains unanswered. But the firms already redesigning themselves around it will set the terms for everyone else.
