The fight over AI is not about intelligence. It is about power.

Artificial intelligence is being sold as a leap in knowledge and productivity. In reality, it is becoming a machine for concentrating capital, infrastructure, and decision making power in the hands of a tiny number of firms rich enough to command the chips, the data centres, the electricity, the lobbying power, and the legal muscle to shape the next economy around themselves. As investment, energy, data, and dependence gather around them, the question is no longer what these systems can do. It is how much of the world’s future a handful of companies will be allowed to own, organise, and charge for.

Artificial intelligence is no longer merely a new technology. It is becoming a new architecture of rule. Hundreds of billions of dollars are being pulled into data centres, chips, power contracts, cooling systems, grid upgrades, and legal claims over the raw material of culture itself. Four American companies alone are expected to spend about $650 billion on AI infrastructure in 2026, while the wider total may approach $811 billion when Oracle and CoreWeave are included. This is not just the rollout of useful software. It is the construction of a new commanding height of economic life.

What is at stake is not simply whether machines can imitate thought. It is whether five or six corporations will come to mediate the terms on which whole societies work, search, write, learn, design, code, communicate, and eventually govern themselves. As dependence deepens, so does leverage. The companies that control the models, the compute, the cloud, the semiconductors, and the physical plant of artificial intelligence will not merely sell products. They will sit astride the new toll roads of social and economic life. Markets are already pricing that future in. The S&P 500 crossed 7,000 for the first time in January on AI optimism, while Larry Fink has warned that the gains from AI may accrue mainly to the firms and asset holders already positioned to own the infrastructure of the boom. The fight over AI, then, is not really about intelligence. It is about who gets to own the future and invoice everyone else for access to it.

A simple framework

The present AI order is consolidating power through five linked channels: myth, capital concentration, infrastructure concentration, cost socialisation, and dependency creation. This framework matters because it cuts through the fog. The industry wants the public to focus on wonder. The real story lies in ownership, not enchantment.

The first task is to escape the language trap. The public argument is usually framed as a dispute about intelligence. Are the models reasoning? Are they becoming general? Are they approaching human level cognition? But the more consequential question is political. Who benefits when society is persuaded to believe that intelligence itself is being manufactured at scale and that only a tiny group of firms can safely manage the result? Once that question is asked, the whole terrain changes. Artificial intelligence stops looking like a neutral scientific frontier and starts looking like a project of concentrated power.

The myth is part of the machinery

One of the most important truths about the AI boom is that the mythology is not accidental. It is functional. The rhetoric of AGI, existential danger, machine gods, and civilisational destiny does political work. It mobilises capital. It softens scrutiny. It tells the public that what is being built is too important, too dangerous, or too historically decisive to be governed in ordinary democratic ways.

That is why the same firms and executives can talk about AI in several contradictory voices without seeming embarrassed by the contradiction. To consumers, it is the perfect assistant. To lawmakers, it is the cure for cancer, climate change, and poverty. To investors, it is the foundation of future rents and outsized profits. To the anxious public, it is a force that could go catastrophically wrong unless it remains under the guidance of those already building it. Catastrophe and utopia are used together. The message is simple: this technology may destroy everything, therefore we must be trusted with more capital, more data, more energy, and fewer restraints.

That is not scientific clarity. It is strategic ambiguity. It converts one group’s material project into everyone else’s moral obligation. It is how empires have always spoken. They promise modernity, warn of rival powers, and demand extraordinary latitude in the name of protecting civilisation.

Capital is being sucked into a narrow corridor

The financial story comes first because it explains so much else. AI is not merely attracting investment. It is pulling investment into an exceptionally narrow corridor of hyperscalers, chipmakers, utilities, data centre developers, and the financial structures that support them. When four firms can be expected to spend about $650 billion in one year and the wider total approaches $811 billion, the point is no longer that AI is fashionable. The point is that capital itself is being reorganised around the expectation that whoever controls compute and distribution will control the next layer of the economy.

Capital is not neutral. Money directed at this scale creates the world it expects to inhabit. It deepens the position of the firms that already dominate cloud, chips, and software distribution. It raises the barriers to entry for almost everyone else. It tells investors, boards, and political systems that the future belongs to the companies rich enough to build the machine first. The boom therefore does not merely reflect belief in AI. It manufactures a structure in which those already controlling the most important bottlenecks become even harder to dislodge.

The stock market has responded accordingly. When the S&P 500 crossed 7,000 for the first time on AI optimism, that was not a sideshow. It was evidence that financial markets are already capitalising the future power of firms claiming command over the stack. This does not prove a bubble in the narrow technical sense. It does show that valuations, sentiment, and strategic capital allocation are being shaped by projected AI dominance long before society has settled the questions of governance, labour, and distribution.

Infrastructure concentration

The second reality is physical. AI is often discussed as if it were airy and immaterial, but it lives in data centres, substations, turbines, transformers, transmission lines, chip fabs, cooling systems, and water extraction. Strip away the language of intelligence and what remains is a colossal build out of industrial plant.

The numbers are already severe. Data centres could account for 44 percent of United States electricity load growth from 2023 to 2028. That means AI is no longer just a software story. It is a fight over first claim on the grid. It is a fight over who gets power, who gets queue priority, who gets the permits, who gets the water, and who gets stuck paying for the upgrades.

That advantage is not evenly distributed. The firms best positioned in this race are the ones that already have giant balance sheets, cloud businesses, and political relationships. Smaller firms may innovate. They cannot match the hyperscalers in land acquisition, energy contracting, chip procurement, or lobbying reach. The race is therefore not simply selecting the best technology. It is rewarding whoever can command the bottlenecks.

Cost socialisation

The benefits of the boom are being capitalised upward while many of the costs are being socialised outward. Utility systems will be asked to expand to support private AI empires. Customer bills may rise. Local communities will absorb water stress, land use change, construction burden, and in some cases direct pollution. Ordinary people are not simply buying a useful product. They are being drafted into financing and hosting the infrastructure on which somebody else’s market power depends.

This is where the abstract language of innovation begins to look dishonest. If the technology were simply a clever tool, the public would not be fighting it in planning hearings, utility disputes, and environmental justice campaigns. Yet those fights are growing. In Britain, activists have already begun coordinated protests against AI data centre expansion over its climate and community effects. The backlash is not irrational technophobia. It is what happens when a software narrative arrives in the physical world as a land use conflict.

Memphis and Boxtown

The most vivid illustration is Memphis. Elon Musk’s xAI used gas turbines to power its Colossus facility, provoking a civil rights and environmental backlash in communities already burdened by industrial pollution. The issue is not merely that a company needed more energy. It is where the burden landed and how it landed. In and around Boxtown and southwest Memphis, residents found themselves facing an AI build out in communities that were already carrying the legacy of environmental racism. That is what concentrated power looks like on the ground. A company pursuing speed and scale can place the health burden of its infrastructure onto people with the least leverage to resist it.

The larger significance is political. AI is not just disrupting abstract markets. It is redistributing exposure. If one class of people receives the productivity gains while another receives the fumes, the noise, the water competition, and the bill increases, then the issue is no longer whether the technology is impressive. The issue is whether the social contract is being rewritten in favour of firms that privatise upside and disperse cost.

Extraction masked as innovation

The current AI model also depends on another kind of extraction: large scale ingestion of text, images, sound, and code produced by others. That is why so many of the defining disputes of the boom are copyright suits rather than scientific debates. The New York Times sued OpenAI and Microsoft. Encyclopaedia Britannica has now sued OpenAI, alleging that nearly 100,000 of its articles were copied to train GPT models. Merriam-Webster has joined the same line of attack. These cases matter because they expose the actual substrate of the industry. What is called innovation often begins as large scale capture of other people's work.

The pattern is familiar. First ingest the archive of human culture. Then call the ingestion transformative. Then present the resulting system as inevitable. The issue is not whether every claim in every lawsuit will prevail. The issue is that the foundation of the industry lies less in spontaneous synthetic genius than in the capture and reprocessing of cultural material at industrial scale.

Hidden labour, visible hierarchy

The same pattern extends to labour. The mythology says the machine teaches itself. The political economy says otherwise. TIME’s investigation into OpenAI’s safety pipeline showed Kenyan workers employed by contractor Sama earning less than $2 an hour to label graphic material involving violence, sexual abuse, and self harm so that ChatGPT could be made safer. The clean interface seen by the user sat on top of hidden trauma absorbed by workers at the edge of the supply chain.

This is not a footnote. It is one of the clearest windows into what AI really is. The glamorous system at the top depends on badly paid, often psychologically punishing labour below. It is a hierarchy disguised as automation.

The ladder is being broken

The job story is also more serious than the usual slogans suggest. The important question is not only whether jobs disappear. It is what kind of labour market is left behind. Entry level and mid tier work are already under pressure in white collar sectors. What disappears first are often the rungs people need in order to progress. What remains at the top are a smaller number of highly paid orchestrators, owners, and experts. What expands below can be a more atomised, contingent, and degraded kind of work.

That is why the image of a smooth transition to higher productivity is misleading. Some firms really are becoming more productive with fewer people. Some executives are absolutely using AI as a reason or a pretext to hire less. But the burden does not fall evenly. A business owner may feel newly liberated from spreadsheets and routine tasks. A worker laid off from a skilled role may end up competing for unstable annotation work that trains the very systems that displaced them. The result is not just job loss. It is the breaking of the career ladder itself.

Research capture and the monopoly on authority

There is a further concentration effect that may prove decisive. As capital and compute accumulate inside a small number of firms, public interest research is hollowed out. Roughly 70 percent of people with AI PhDs now go into private industry, up from about 20 percent two decades ago. That is not just a talent story. It is a governance story. The people with the most compute increasingly also control the research agenda, the experiments that get run, the risks that get emphasised, and the limits that get ignored.

This matters because it allows the industry to monopolise knowledge production. The public is told that if it dislikes what is happening, it is because it does not understand the technology. Policymakers are told that frontier AI is too technically complex to be governed from outside the labs. Researchers who challenge the strategic direction of large language models can find themselves marginalised or removed. That gives the industry an extraordinary advantage. It gets to build the system and explain the system at the same time.

The strongest counterargument

The strongest defence of the boom is not foolish. It says that major industrial transformations have always required huge capital outlays before the gains became obvious and broadly shared. Railways, electrification, aviation, semiconductors, and the internet all required long periods of heavy spending, speculative investment, and infrastructure concentration. On this view, anti data centre politics can become self defeating in a geopolitical race. If one society slows itself down while another builds, the slower society may end up renting intelligence, compute, and strategic capability from its rivals.

This argument deserves to be taken seriously. It is especially strong where AI has real utility and where global competition is not imaginary. But it does not answer the central charge. The question is not whether great technological changes require infrastructure. They do. The question is whether this particular transformation is being organised in a way that centralises ownership, externalises burdens, and narrows the public’s ability to contest the shape of the system being built around it. On the evidence so far, the answer is yes.

What should be built instead

The choice is not between total surrender to the AI empires and total rejection of machine learning. That is another false dichotomy. The current model should be criticised precisely because useful alternatives exist. Specialised systems such as AlphaFold have produced genuine scientific gains without requiring the same kind of all consuming social and infrastructural empire. Narrower tools in climate modelling, logistics, and medicine can be built with less data, less energy, and a clearer public purpose.

The point is not to abolish technology. It is to strip the imperial logic out of it. The present system is predicated on taking more value than it gives back, on extracting labour and data without a fair exchange, on securing public tolerance for private build out, and on persuading everyone else that such concentration is both inevitable and benevolent. That is what needs to be broken.

What this really is

The fight over AI is therefore not a seminar on consciousness. It is a struggle over capital allocation, infrastructure command, legal privilege, labour discipline, and public dependency. It is about who gets to own the models, who gets to secure the chips, who gets first claim on the grid, who gets to ingest the archive of human culture, and who gets to charge everyone else for access to the resulting machine.

The industry wants the public to see genius. The public should see structure. It wants us to think in terms of dazzling outputs. We should think in terms of ownership. It wants to ask whether the machines are intelligent. The more urgent question is whether the societies adopting them are surrendering too much power to those who build them.

That is why the argument over AI is larger than a product cycle and deeper than a hype wave. It is the opening battle over the economic constitution of the next era. And unless that battle is recognised for what it is, a handful of firms will not merely profit from the future. They will own the roads into it.
