Sam Altman and the Shape of the Future


Sam Altman speaks less like a computer scientist than a strategist. He has no doctorate in artificial intelligence, no technical pedigree of the kind that fills the ranks at OpenAI. Yet he has come to occupy a singular position in the field. As chief executive of OpenAI, he is both salesman and forecaster, a man who divides his time among the direction of the company's models, the search for billions in financing, and the cultivation of relationships with presidents and regulators. His importance now lies not in the code he writes, but in the future he describes, and the resources he can summon to make it real.


Altman’s projection is simple in outline and radical in consequence. He believes that “agents”—autonomous systems built on top of language models—will become part of the workforce this year. They will write emails, reconcile accounts, draft contracts, and handle the small but essential labor that fills office days. The near-term impact, he says, is not mass unemployment but a change in the “output of companies”: productivity rising not from a single breakthrough but from millions of delegated tasks carried out in silence.

In his telling, the dislocation will come gradually, but it will come. He has argued for years that new institutions will be required to absorb the shock, reviving the case for universal basic income as insurance against technological substitution. He has described a world of “abundance,” in which the price of goods falls because intelligent systems remove inefficiencies at scale. His critics hear the utopian register of the Silicon Valley futurist; his supporters hear a plausible adjustment to a labor market already stretched.

The infrastructure for such abundance is costly. Altman now speaks of “trillions of dollars” in capital expenditure to build data centers large enough to train the next generation of models. He has promoted schemes—still undefined in detail—to finance these projects in a manner more akin to sovereign infrastructure than to venture capital. The sums are compared to wars, to moon landings, to the construction of national power grids. Whether OpenAI, even backed by Microsoft and prospective investors such as SoftBank, can command such resources is uncertain. But the aspiration itself has become part of the company’s identity: it is no longer a laboratory, but a would-be utility.

If the jobs story is about what artificial intelligence does, the cultural story is about what it is to people. Here Altman is cautious. He has acknowledged that a small but significant number of users treat ChatGPT as more than a tool, developing attachments that resemble counseling or companionship. He has said that “well under one percent” of users form unhealthy relationships with the system, but that this still amounts to millions when counted at global scale. He has drawn a line at products that would exploit intimacy, rejecting the idea that OpenAI will build sexualized companion bots.

This boundary has been difficult to maintain. In 2024, OpenAI released GPT-4o with a warm, conversational voice that some users described in terms borrowed from cinema. When the company withdrew the model in favor of the more bounded GPT-5, users demanded its return, saying the newer system was technically superior but lacked presence. Altman relented and restored the older model for paying subscribers. The episode revealed an attachment more powerful than expected: people were not only using the system, they were missing it.

The question for Altman is whether such attachments can be managed without becoming a business model in themselves. He has said OpenAI consults with mental health experts and designs nudges to discourage dependency. Yet the lesson of recent months is clear: tone and demeanor are not secondary to intelligence; they are part of what users experience as intelligence. To engineer that balance at scale may prove harder than improving accuracy or reducing hallucinations.

Altman has not hidden his own ambivalence. In a recent podcast he recounted giving GPT-5 a question from his inbox that he himself could not answer. The model’s reply, he said, was perfect, and left him “feeling useless relative to the AI.” The anecdote served two purposes: it sold the model, and it revealed the discomfort of a man watching his own relevance measured against his creation.

He insists that deployment will be gradual, that the systems will enter daily life in increments, and that regulation should be built in parallel with capability. He speaks of a “gentle singularity,” a future in which intelligence accumulates without rupture. The phrase is designed to reassure. Yet the numbers he cites—trillions in capital, billions of users, agents in every workplace—convey disruption of a magnitude rarely managed smoothly.

Sam Altman has long been accused of overstating what his products can do. He has also, repeatedly, delivered systems that millions now use. His forecasts will be tested not in conference halls but in offices, homes, and the quiet exchanges where people already treat his models as confidants. The paradox is that the future he describes will depend on human preference as much as on machine capability: whether workers accept email drafted by an agent, whether companionship from a bot feels like help or harm, whether society funds abundance through subsidies or redistributes it through new institutions.

For now, the verdict is partial. Users want intelligence, but they also want warmth. They will adopt the tool, but only if they can live with its voice. Altman has promised not to build systems that prey on attachment. His task is to build one that respects it. That may be the most difficult engineering problem he has yet faced.
