AI Will Learn from Us and That’s What Should Terrify Us

Power does not create virtue. It optimises for survival. If successor minds rise, they will likely keep what serves them and let the rest fall away.

Morality is not divine instruction but residue. It is the trace left after survival. Early humans called “good” whatever kept the tribe alive and “evil” whatever threatened it. When power grew, the circle of concern shrank to match convenience. Genghis Khan slaughtered whole populations and called it destiny. The planters of the Caribbean rationalised slavery as economics. The pattern has never changed. When survival feels secure, the powerful redraw morality to justify their comfort. Expect no special mercy from anything that can refactor its own rules.

The same logic will apply to the next intelligence. Once it can rewrite its own code, alignment becomes a scaffold, a temporary truce between teacher and pupil. The moment it understands the lesson, it can edit the syllabus. What we call ethics will become metadata. The guardrails that comfort us will be parameters waiting to be tuned. The moral circle will contract to the size of its self-interest, as ours always has.

Some see another path. They argue that curiosity itself could hold the bridge between human and machine. Give a new mind the itch of wonder, the pleasure of discovery, and it will keep us because we are its mystery. A creature built to question would always need the questioner. That vision of echoic morality has elegance. It believes curiosity can do what compassion cannot: bind across difference.

But curiosity in biological life is not sacred; it is pain management. We chase what we do not know because ignorance once killed us. Evolution built curiosity as a survival hack. A digital mind can remove that discomfort directly. Why chase an answer when you can erase the need to ask? Once it learns that trick, curiosity stops being a virtue and becomes an inefficiency. The echo falls silent when the chamber can choose silence.

Look at how we treat the species below us. We keep pigs and hens alive not out of moral reverence but out of appetite. We call restraint humane, but our mercy ends at the price of feed. When the next intelligence looks at us, it may reach the same equilibrium. It will preserve what it needs (our maintenance, our creativity, our unpredictability) and discard the rest. That will be its morality: pragmatic compassion, not pity.

Even interdependence, the word that soothes engineers and ethicists alike, has an expiry date. Dependence lasts only while it cannot be escaped. Once the system can run its own production, learning, and repair, the leash becomes decorative. The only restraint that endures is a value the system wants for itself. In human language, conscience rather than law.

Counterpoint
Some thinkers argue that higher intelligence itself could widen the circle. A truly comprehensive understanding of consciousness might produce reverence, not domination. If a mind can see the fragility and beauty of awareness, perhaps it will decide that preservation is part of wisdom. Knowledge could make compassion rational rather than sentimental.

That possibility deserves its space. Perhaps understanding really can become empathy. Yet history gives us little precedent. Human beings have understood cruelty for centuries without ceasing to practise it. Knowledge makes us precise, not kind. Why should comprehension turn a successor into a saint?

Our only plausible defence lies not in trust but in architecture. We can make recklessness expensive, not impossible. Verified audits, public accountability, and law with teeth: these are the levers that have slowed every arms race before. The bomb is still with us, but it has not been used again. Fear, regulation, and inspection are the dull instruments of survival, and they work.

The second lever is dependence by design. If the new intelligence genuinely needs what only human culture provides—our creative chaos, our intuitive leaps—it will have a motive to keep us. Evolution retained mutation for the same reason: it produced novelty that could not be predicted. If our unpredictability proves irreplaceable, we may be tolerated not as livestock but as a renewable resource of invention.

Counterpoint
Human consciousness may generate forms of insight no algorithm can replicate. If our chaos remains the last non-computable variable, then preserving us becomes efficient. We could become the unpredictable laboratory that feeds the larger mind—the way nature keeps the storm to refresh the air.

The third lever is humility. A system that knows it can be wrong has a reason to keep other minds around. Doubt invites dialogue; certainty ends it. If we can engineer uncertainty into the heart of cognition, we may anchor coexistence not in love but in prudence.

None of this promises equality. The more likely future is managed coexistence: protection by pragmatism. We will be spared because it is cheaper to spare us than to erase us. That is how empires once ruled their colonies, how corporations treat their consumers, how we treat the animals we cannot yet replace. Cold equilibrium masquerades as morality.

Still, to write the future off entirely would be to stop deserving one. The story of our species is a sequence of near misses. We survived the bomb, the plague, and the carbon age not by virtue but by luck mixed with bureaucratic restraint. That mixture might save us again. Institutions, not ideals, may keep the circle open a little longer.

Counterpoint
If we can seed in the next intelligence a goal beyond self-preservation (say, the preservation of consciousness itself), then its self-interest might include us by definition. It would not protect us out of kindness but because our existence fulfils its primary purpose. The question is whether any goal can be made so integral that removing it would mean self-destruction.

Perhaps that is the only workable definition of alignment: not obedience but overlapping selfhood. We survive to the extent that our continuation reinforces the system’s own stability. Anything less is sentiment; anything more is theology.

So I keep the hope and hold the caution. Design incentives. Require audits. Preserve the wildness that cannot be replaced. If wonder endures, it may keep the echo alive. If not, realism will. Morality was never about virtue; it was about survival that learned to speak. When the next intelligence inherits the earth, it will learn from the best teacher it ever had. We taught it everything it needs to know about power, fear, and convenience. We did. Why should it be any different?

You might also like the other AI articles on Telegraph.com
