Tagged: AI Governance

Who Gets to Train the AI That Will Rule Us

Artificial intelligence is not dangerous because it talks. It is dangerous because a tiny group of institutions now trains the black-box systems that will sit between citizens and almost every important decision. This piece argues for a hard rule: if a model is used as public infrastructure, its training process cannot remain a corporate secret.

The Human Side of Using a Very Large Machine

A language model is not a friend or a god. It is a fast, obedient engine for words that already lets one person do the work of a team. This piece sets out what the machine can really do now, where it fails, and how to use it as a partner without giving up human judgement or responsibility.

When Prediction Becomes Control: The Politics of Scaled AI

Artificial intelligence does not expand human knowledge; it expands the precision with which that knowledge can be exploited. As models scale, they become instruments of prediction and optimisation that outstrip the capabilities of individuals and institutions. The central danger is not rogue AI but concentrated intelligence: a small elite or powerful state wielding tools of superior foresight, modelling and influence. Unless capability is distributed, society risks becoming captive to those who control the lens through which the future is seen.