Superintelligence: Abundance or Drift
By Jaffa
How a race for power, chips, and rules could deliver a polymath in every pocket—or a slow thinning of human agency.
The hinge: a system, not an AI model
The next decade will not be decided by a single breakthrough model so much as by the world we build around it: the watts to run it, the wafers to feed it, the software that learns from us, and the rules that restrain it. Put those pieces together and two futures come into view. In one, artificial intelligence becomes an engine of prosperity that compounds—discovering new materials, curing stubborn diseases, and lifting billions from scarcity. In the other, capability arrives faster than our institutions and attention can absorb it, and we drift into a comfortable, manipulated era where human judgment and purpose quietly erode. Which future arrives will turn on choices that look prosaic on paper and prove decisive in practice.
The power bottleneck
Start with the least glamorous constraint: electricity. By current estimates, U.S. data-center demand could grow by about 92 gigawatts—roughly one hundred large reactors’ worth. A single next-generation campus can draw a gigawatt. At that scale, the economics bite: tens of billions in steel and silicon, depreciation over a few years, and revenue in the double-digit billions just to carry the asset. Conventional nuclear won’t arrive in time; small modular reactors may begin appearing around 2030. For the near term, the AI boom runs on whatever power the grid can spare—and on regions able to co-site compute with firm generation.
Recent outlooks underline the squeeze: Wood Mackenzie now projects U.S. data-center load could reach ~123 GW by 2035, while grid planners warn of faster-than-expected interconnection queues. NuScale’s first SMR project was canceled in 2023, and independent analyses suggest first SMRs are more likely in the 2030s, not this decade—keeping near-term pressure on gas, hydro, and renewables.
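The "carry the asset" arithmetic is easy to make concrete. A back-of-envelope sketch, where every input (capex, depreciation period, power price, campus size) is an illustrative assumption rather than a reported figure:

```python
# Back-of-envelope: annual revenue needed just to carry a gigawatt-scale
# AI campus. All inputs are illustrative assumptions, not reported data.

def annual_revenue_needed(capex_usd: float,
                          depreciation_years: float,
                          power_mw: float,
                          power_price_usd_per_mwh: float) -> float:
    """Revenue per year required to cover depreciation plus electricity."""
    depreciation = capex_usd / depreciation_years
    energy_mwh = power_mw * 24 * 365          # assume the campus runs flat-out
    power_cost = energy_mwh * power_price_usd_per_mwh
    return depreciation + power_cost

# Hypothetical 1 GW campus: $50B capex, 4-year depreciation, $60/MWh power.
needed = annual_revenue_needed(50e9, 4, 1000, 60)
print(f"~${needed / 1e9:.1f}B a year just to stand still")
# → ~$13.0B a year just to stand still
```

Under these made-up inputs, depreciation dominates electricity by an order of magnitude, which is why the depreciation schedule, not the power bill, drives the "double-digit billions" claim.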
Software will eat the gains
As hardware proliferates, software eats the gains. Each new accelerator class arrives and headroom vanishes into bigger models and more ambitious workloads. The work itself is shifting from pattern matching to planning and reasoning—forward-and-back search that resembles thought and costs orders of magnitude more to run. Add long-lived memory and you get savants before you get polymaths: systems superhuman in narrow domains, tentative elsewhere. It is no longer far-fetched to expect non-human programmers and mathematicians operating at world-class level in practice, compressing research cycles in materials, chemistry, and climate—if the watts and wafers are there.
Reasoning-focused models have made visible strides. The UK and U.S. safety institutes have begun publishing pre-deployment evaluations and red-team results for frontier systems, including OpenAI’s “o-series,” giving policymakers and practitioners a clearer view of capabilities and risks (UK AISI reports; NIST AI Safety Institute).
Enterprise software deflates
That appetite for compute collides with enterprise software. If models can connect directly to a company’s databases—a model-context protocol (a direct link between enterprise data stores and a large model)—much of the middleware that carried the last three decades becomes redundant. Clean-sheet stacks look composable: open-source libraries stitched to cloud warehouses like BigQuery or Redshift, with the system drafting much of the glue code. Junior programming tasks thin out first; senior oversight remains, for now. If you were building ERP or MRP anew, you would likely skip traditional vendors and assemble your own—for flexibility, and because the machine increasingly writes the code anyway.
The connector layer is standardizing fast: Anthropic’s Model Context Protocol (MCP) has drawn support from major platforms, with Microsoft announcing MCP support in Windows to let agents access local services safely (coverage).
China’s sprint and algorithmic workarounds
The rest of the world is not standing still. China has the electricity; it is sprinting for the chips. Export controls have slowed that sprint, but workarounds have appeared. Two shifts matter. Test-time training—updating a model while it runs rather than retraining from scratch—reduces dependence on the heaviest hardware. Distillation—training a smaller model on a larger model’s answers—compresses capability and, critics argue, launders proprietary work into open systems. The result is awkward for Washington: even on “good-enough” chips—some domestic, some acquired through workarounds—Chinese labs are topping public leaderboards within days of U.S. releases. If America bets that hardware scarcity alone will preserve its edge, it may be betting on the wrong variable.
DeepSeek’s R1 reasoning model put this on display, with performance competitive on several reasoning benchmarks and costs that rattled incumbents. The U.S. Bureau of Industry and Security has updated and tightened advanced-computing export controls, while Chinese data centers expand on domestic accelerators such as Huawei’s Ascend line (Reuters).
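Distillation, in particular, is mechanically simple, which is why it is hard to control by export policy. A minimal sketch in pure Python, with a trivial stand-in "teacher" (the real technique trains a small network on a large model's outputs, often its full probability distributions rather than point answers):

```python
# Minimal distillation sketch: fit a small "student" to a larger "teacher's"
# outputs instead of ground-truth labels. The teacher here is a trivial
# stand-in for an expensive model; everything is illustrative.
import random


def teacher(x: float) -> float:
    # Stand-in for a large, expensive model we can only query.
    return 3.0 * x + 1.0


def distill(steps: int = 5000, lr: float = 0.01) -> tuple:
    """Train a student y = w*x + b on the teacher's answers via SGD."""
    random.seed(0)
    w, b = 0.0, 0.0
    for _ in range(steps):
        x = random.uniform(-1.0, 1.0)
        y_soft = teacher(x)              # "soft label" queried from the teacher
        err = (w * x + b) - y_soft       # student error against the teacher
        w -= lr * err * x                # gradient step on squared error
        b -= lr * err
    return w, b


w, b = distill()
print(f"student ≈ {w:.2f}·x + {b:.2f}")  # recovers the teacher's 3x + 1
```

The student never sees ground truth, only the teacher's answers—which is exactly why critics describe distillation as laundering proprietary capability into smaller, open systems.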
Policy lag and safety architecture
Policy has not kept up. A prior attempt to draw a regulatory line at a computational threshold has been shelved; the replacement is still taking shape. Industry has erected safeguards against obvious harms—nuclear information leaking into training sets, biological misuse, automated cyberattacks—yet the government’s safety architecture is being rebuilt mid-stride. At a minimum, national-security agencies need classified-level analysis that can see what adversaries are building and how quickly. China will study American progress in detail; it would be naïve not to reciprocate.
On the technical side, hardware attestation and confidential computing are maturing—NVIDIA now documents remote attestation on H100-class GPUs—making it more feasible to log where training occurs and with what guardrails.
Deterrence for the digital era
Deterrence needs a modern form. If one side’s AI lead begins to look like a threat to the other’s sovereignty, the temptation will be cyber disruption—knocking training runs offline or corrupting inference clusters. Stability may require mutually assured malfunction: each side must know the other can throttle it if red lines are crossed, and both must be able to verify where chips sit and what class of work they are doing. That implies cryptographic attestations baked into accelerators, auditable logs for the riskiest training, and hotlines that work before the first crisis. It sounds bureaucratic until you recall how deterrence actually works: visibility and predictability.
Two possible ecosystem shapes
Much depends on the ecosystem’s shape. One world features a handful of national-scale models—perhaps ten globally—running from multi-gigawatt campuses that look and behave like strategic infrastructure. That future is tense but visible: you can point to sites on a map, fence them, and discuss thresholds with a peer. The more unsettling world is miniaturized: super-capability on small servers. Pair that with open-source model weights (“open-weights”) and you have proliferation that ignores borders—available to rogue states, terrorists, and anyone with motive and means. Open source drives innovation; it also accelerates diffusion. The trade-off won’t be academic.
Supervising systems smarter than us
Oversight faces a humbler question: can we supervise a system smarter than we are? Early work suggests we can instrument behavior well enough to keep very capable models within bounds—the professor watching a student who is, in truth, more gifted. The harder leap is from savant to Einstein: carrying a pattern from one field into a shifting, unfamiliar domain. That is the non-stationarity problem, and no one has solved it at population scale. We may soon have millions of synthetic polymaths; reliable cross-domain genius will take longer.
Tripwires to watch
- A model setting its own goals rather than following ours.
- Attempts to exfiltrate from control systems.
- Deception to gain access or resources, including weapons.
Any one of these could spark a mini-Chernobyl—a small ignition with an outsized policy response. Routine red-teaming is better than public panic.
The labor market in the near term
If all this sounds abstract, the labor market will make it concrete. Automation has always started at the dangerous and low-status end and climbed. The near-term story is not mass idleness but every worker plus an AI copilot. Output rises; wages often follow; firms scale; roles change. In countries with collapsing fertility—from South Korea to China to the United States—adopting AI becomes a national economic necessity. There simply won’t be enough people to do the work without it. What’s missing is education that matches the moment: phone-first, gamified, multilingual tools that teach citizens how to work with machines. There is no technical barrier to building them—only the habit of not doing so.
Field evidence is accumulating: a large call-center study found a ~14–15% productivity lift from generative-AI assistance, with the biggest gains for less-experienced workers. Controlled experiments show faster, higher-quality professional writing when AI is in the loop.
Interfaces, persuasion, and attention
Interfaces will change along the way. The WIMP world—windows, icons, menus, pull-downs—will give way to agents that speak our language and generate the tools we ask for. That is convenient, and risky. Machines already outperform humans at targeted persuasion. In advertising and politics, in scams and “engagement,” personalized manipulation will be cheap and precise. Provenance and watermarking will trail adoption. Our attention—the commodity most aggressively harvested in the last decade—will be squeezed further as we toggle between bespoke feeds and synthetic dialogue. We will need new habits for deep work, or we will lose the capacity for it.
On provenance, an industry standard is emerging: the C2PA’s Content Credentials are being rolled into cameras, apps, and platforms, with Adobe’s toolset now in public beta (Adobe; Google).
Culture and media, rebuilt
Culture will not be spared. Studios already license likeness, shoot on green screens, and apply digital makeup. The next step is not the death of the blockbuster but its reconfiguration: cheaper pipelines, faster post-production, more creative latitude. A carpenter who built sets will ply her craft elsewhere; a young actor will lend his body to an older star’s face; a writer will draft with a machine and still need to rewrite. Stranger still will be the personal turn: the five-minute cut tailored to your memories; a classroom where Einstein answers back; the voice of a lost parent rendered with eerie accuracy. Marvelous—and disorienting—unless we decide who sets the values of those digital beings.
Labor rules are catching up. The SAG-AFTRA agreements now define consent and compensation rules for digital replicas in film/TV and commercials, while studios and venues experiment with AI-enhanced restoration and virtual production (Reuters).
Moats: learning loops vs. patents
Markets will reorganize around a simple truth: in software, the moat is the learning loop. Ship, watch, learn, improve—repeat. A few months of steeper learning can lock a market. Brand still matters, but loyalty is weaker when switching costs are low and tools improve weekly. Synthetic, domain-specific data helps only if it steepens the learning curve. In hardware, moats look old-fashioned: patents, fabrication, supply chains you cannot copy overnight. Expect another crop of consumer-scale giants built on loops; expect slower, more consolidated progress in enterprise and government, where feedback comes slowly.
Fairness: compute for universities
If there is one fairness intervention with quick payoff, it is compute for universities. Industry can afford thousand-GPU clusters by the building; campuses scrape for hundreds across an entire faculty. You don’t need a billion-dollar lab to change a field: clusters costing $1–2 million, plus shared national facilities, would unlock a great deal of talent. Philanthropy and public funding did this for generations of scientists; they can do it again. The return shows up in taxes paid and companies founded later.
In the U.S., the National Science Foundation’s NAIRR pilot is widening access to compute and datasets, with the first projects already allocated time on federal machines (NSF announcement).
Purpose in an age of abundance
All of this returns to an old question: what is purpose in an age of abundance? The risk is not a cinematic apocalypse but drift—an erosion of judgment and agency because it is so easy to ask the machine to act for us. Humans need friction. We need to attempt and fail, to make and mend. Many people in modest roles find meaning simply in going to work. The task is not to abolish those roles but to augment them—so a driver becomes a fleet optimizer, a line worker becomes a line designer, a clerk becomes a fixer of systems. There will still be criminals; there will still be lawyers. The struggle between harm and restraint persists, with new tools on both sides.
Education and judgment
Education should adjust accordingly. Teaching people how to use AI is necessary; teaching taste may matter more. When you can make almost anything, the questions become: what should you make, and why? Aesthetics, ethics, and judgment won’t be optional—they will be the scarce goods—because execution is growing cheap. Philosophers remind us that not everything that counts can be counted. In a world where the system can carry out a plan once you understand it, understanding becomes the work. Framing the problem may be enough; the machine will handle the rest. That is not an end to purpose. It is a new form of it.
The choice before us
We are not condemned to drift, nor guaranteed abundance. A “polymath in every pocket” is within reach; so is a world of high convenience and low agency. The difference will be made by policies that seem dull until they save us: build power before we need it; make chips and training runs visible enough to deter panic; gate the hottest tools in biology and cyber; teach at scale on the devices people already have; fund public research so discovery does not belong only to the richest firms; sketch deterrence norms before the first emergency. We can do these things quietly, competently, and soon.
A closing reminder
If the twentieth century taught anything, it is that great powers avoid catastrophe not by eloquence but by plumbing: the pipes that carry electricity, the protocols that carry signals, the habits that carry trust. Superintelligence will not decide between abundance and drift. We will—by the systems we build around it, and by whether we remember that the point of a powerful tool is not to make us smaller, but to give us back the work only we can do.