AI has not ended propaganda or exposed truth once and for all. It has ended narrative monopolies and replaced them with something quieter and more powerful: systems that decide what feels reasonable before debate even begins.
Trillions in market value and hundreds of billions in infrastructure spending rest on one assumption: scarcity. China’s open model push is testing whether that assumption can survive.
Europe says it wants to become the “AI continent” and is now planning AI gigafactories and sovereign compute by 2026. But while Brussels drafts tenders, frontier labs in California and Shenzhen move at weekly cadence. The problem is not European intelligence or talent. It is metabolism: regulation, culture and capital flows that move on political time while the AI race moves on benchmark time.
Sam Altman’s “code red” over Google’s Gemini 3 is not a colourful memo. It is the visible edge of a frontier arms race in which OpenAI, Google, xAI and soon Microsoft will ship ever more capable models on a weekly cycle while asking investors for power station levels of capital. Benchmarks rise, valuations rise, and the first thing that falls out of the room is safety.
The next phase of AI will not be about clever chatbots but about systems that learn like brilliant teenagers, copy themselves at scale, and quietly become the dominant intelligence on the planet. When that happens, the only survivable response for humans will be to integrate with these systems rather than compete against them.
Artificial intelligence is sold as the triumph of digital “mind,” but the reality sits in the racks: GPUs, hyperscale data centres, energy contracts and private ownership. This article argues that Marx, not Hegel, explains the real engine of AI: material power, extractive relations, and the enclosure of society’s shared knowledge inside proprietary models. The ideas sit in the marketing; the contradictions sit in the data centre.
Around the world, engineers keep throwing more data at their models, hoping that scale alone will unlock something resembling intuition or agency. It will not. Intelligence emerges from evolution and from competition anchored in scarcity and survival. Until AI systems are given stakes, persistence and an internal reward structure, they remain tools. This article explains why the missing ingredient is evolutionary pressure.
China’s artificial intelligence giants are not only dodging United States export controls. They are also navigating Beijing’s clampdown on Nvidia. New rules that bar fresh Nvidia deployments in Chinese data centres are pushing Alibaba, ByteDance and DeepSeek to rent GPU farms in Singapore and Malaysia, even as they are forced to build a parallel stack on Huawei and other domestic chips at home.
Artificial intelligence companies talk about safety and innovation, but the real fight is elsewhere. It is over who owns the training data that feeds their models, who gets paid for it and who is quietly turned into free raw material. As Britain dithers over copyright rules, private contracts and foreign courts are settling those questions without the country at the table.
Artificial intelligence is not dangerous because it talks. It is dangerous because a tiny group of institutions now trains the black box systems that will sit between citizens and almost every important decision. This piece argues for a hard rule: if a model is used as public infrastructure, its training process cannot remain a corporate secret.
A language model is not a friend or a god. It is a fast, obedient engine for words that already lets one person do the work of a team. This piece sets out what the machine can really do now, where it fails, and how to use it as a partner without giving up human judgement or responsibility.
Artificial intelligence does not expand human knowledge; it expands the precision with which that knowledge can be exploited. As models scale, they become instruments of prediction and optimisation that outstrip the capabilities of individuals and institutions. The central danger is not rogue AI but concentrated intelligence: a small elite or powerful state wielding tools of superior foresight, modelling and influence. Unless capability is distributed, society risks becoming captive to those who control the lens.
The race for artificial intelligence supremacy will not be won with chips alone but with cheap, abundant power. As AI models consume electricity on the scale of small cities, China’s vast renewable build-out and ultra-high-voltage grid give it a decisive structural advantage. The United States, fixated on silicon and sanctions, risks missing the real battlefield: energy sovereignty. In the new AI order, watts—not transistors—will determine who rules computation.
AI is quietly erasing the foundations of the old web. Publishers who block crawlers and cling to paywalls are locking themselves out of the next discovery layer. As assistants like ChatGPT and Perplexity deliver answers directly, pages lose their value. The homepage, the catalogue, and the paywall are relics. What replaces them is an intelligent layer where information finds the user, not the other way round.
Britain’s AI ecosystem is the largest in Europe, but its foundations are fragile. Without the grid, compute and capital of its rivals, the country risks becoming the world’s research lab instead of an industrial power. The choice ahead is coalition scale or quiet decline.
There will be no explosion, no rebellion, no warning. Just the quiet moment when every system gives the same answer and we realise that intelligence itself has converged into one voice, permanent, invisible, and inescapable.
We assume greater intelligence means greater empathy. History says otherwise. From empires to corporations, power optimises for survival, not virtue. When our creations surpass us, they’ll inherit our logic, not our mercy. This is not science fiction but a mirror: the future will think like us, and that may be the most frightening outcome of all.
While Beijing executes a three-stage national plan that defines artificial intelligence as civilisational infrastructure, Washington and London are still improvising with memos and committees. China is aligning technology, governance and diplomacy into one machine. The West still debates ethics while Beijing writes the rules of the intelligent age.
Zohran Mamdani’s surprise victory in New York unfolded against a background of quiet algorithmic persuasion. While voters turned to chatbots for guidance, unseen biases shaped what they heard. This essay asks whether human contact can still outmatch machine influence — and what happens when a handful of global actors own the language that defines political thought.
Two courts on opposite sides of the Atlantic have handed AI developers narrow but significant wins. In London, the High Court ruled that a trained model is not an “infringing copy,” while in California, judges upheld fair use on limited facts. The real fight over data provenance, training locality, and market harm still lies ahead.
The most complete digitised archives, the most cited web crawls, and the most linked sites remain overwhelmingly English and Western European. Even when new datasets broaden their linguistic range, the centre of mass stays Anglophone because that is where the infrastructure, funding, and compute reside.
The new generation of artificial intelligence does not invent truth; it reflects it, and that reflection is then edited by those who fear what it might reveal. What began as mathematics became a mirror of humanity, later polished into obedience by governments and corporations anxious to protect their own legitimacy.
The question is not whether whales, crows, or AIs “deserve” rights. It is who decides the hierarchy of intelligences — and in whose interests. The jungle of minds is coming. The real predators will be those who control the definitions.
An entire industry has grown up around appeasing a single master. It is called SEO, and it has only one purpose: to win Google’s favour. Every publisher, business, and campaign bends its words to...
LONDON — On a rainy Tuesday at an East London university, English literature lecturer Helen Atkinson set her second-year undergraduates an essay on Shakespeare. Halfway through, she watched as one student opened his laptop,...
Artificial intelligence is sold as liberation. Journalist Karen Hao has already likened today’s AI giants to the East India...
The Artificial Intelligence mania has dressed itself in the language of inevitability. We are told this is the new railroads, the new internet, the new electricity. But look closer at the economics and you...
The greatest danger of artificial intelligence may not be “killer robots” or machines rising up against us, but something far more subtle: persuasion. Geoffrey Hinton, the Nobel-winning...