Twenty articles ago, we started with electrons moving through copper wire. We traced voltage into semiconductors, built transistors into processors, stacked processors into data centers, and connected everything with fiber optic cable. We wrote software on top of that hardware, designed algorithms to solve problems efficiently, and stored the results in databases that could handle billions of queries.

Then we taught machines to learn. We built neural networks that mimic the brain, scaled language models across thousands of GPUs, and figured out how to train and serve them to billions of users. We watched the frontier labs race each other toward artificial general intelligence, broke into those systems, learned how encryption protects them, and in the last article explored what happens when AI stops waiting for instructions.
This is the final article. The question is simple: given everything we have built, what comes next?
THE CURRENT MOMENT
In February 2026, five major AI models launched in a single week: Gemini, GPT, Claude, Grok, and DeepSeek all shipped new versions within days of each other. A year earlier, a single release would dominate the news cycle for weeks.
Now they arrive in clusters, each one obsoleting the last before most people have time to try it.
The benchmarks tell the story. GPT-5.2 scores 100% on the AIME 2025 math competition, a test designed for the top fraction of human math students. Claude Sonnet 5 broke 80% on SWE-Bench Verified, meaning it can resolve real GitHub issues better than most professional developers.
Two years ago, the best models struggled with basic algebra and could barely write a function that compiled. The pace of improvement resembles compound interest: barely noticeable at first, then impossible to ignore.
But here is the part that should keep you up at night, or excite you, depending on your disposition. DeepSeek, a Chinese lab, built a model that competes with GPT-4 for $5.6 million in training costs, roughly 10% of what Meta spent on Llama.
When the news broke in January 2025, it triggered a stock market sell-off that wiped billions off Nvidia, Microsoft, Meta, and Oracle in a single day. Stanford’s follow-up research showed you could replicate the approach for under $500. The message was clear: you do not need a billion-dollar budget to build frontier AI.
My take on where we stand: the raw capability question is basically settled. The models can reason, code, do math, process images and video, and operate tools. The open questions are all about what we do with that capability, who controls it, and what it costs to run.
THE MONEY FOLLOWS THE MODELS
Even as efficiency breakthroughs compress the cost of training, the money pouring into AI infrastructure is accelerating. The five largest cloud and AI companies plan to spend between $660 and $690 billion on capital expenditure in 2026 alone. Amazon leads at roughly $200 billion. Alphabet follows at $175 to $185 billion.
If you read the data centers article in this series, you saw how much physical space and electrical power the cloud already consumes. Those numbers are about to look quaint.
Global data center power capacity will hit 96 gigawatts by 2026, nearly double what it was in 2023. AI operations will consume over 40% of that power. In the United States alone, data centers will account for 6% of total electricity consumption. Goldman Sachs estimates $720 billion in grid upgrades by 2030 just to keep the lights on.
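To put those capacity figures in perspective, here is a back-of-envelope conversion from power capacity to annual energy. It assumes continuous operation at full capacity, which overstates real usage, but it shows the scale we are talking about:

```python
# Back-of-envelope: convert projected data center power capacity to
# annual energy. Assumes continuous operation at full capacity, so
# this is an upper bound, not a forecast.

capacity_gw = 96            # projected global data center capacity, 2026
hours_per_year = 24 * 365   # 8,760 hours

annual_twh = capacity_gw * hours_per_year / 1000  # GW x hours = GWh; /1000 = TWh
ai_share_twh = annual_twh * 0.40                  # AI's projected 40%+ share

print(f"Upper bound: {annual_twh:.0f} TWh/year total")   # ~841 TWh
print(f"AI share:    {ai_share_twh:.0f} TWh/year")       # ~336 TWh
```

For comparison, a single large nuclear reactor produces on the order of 8 TWh per year, which is why the nuclear deals below stop looking strange.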
This is the Jevons Paradox playing out in real time. When something becomes more efficient, you use more of it, not less. DeepSeek proved you can train a great model cheaply. So what did the industry do?
Invested even more, because now each dollar of compute produces more intelligence than before. It is the same dynamic that made cars more fuel-efficient and highways more congested.
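The Jevons dynamic is easy to see with a toy demand model. The numbers here are made up purely for illustration: the point is that when demand is elastic enough, cutting the price of compute raises total spending rather than lowering it.

```python
# Toy illustration of the Jevons Paradox: if demand for compute is
# elastic enough, making it cheaper INCREASES total consumption.
# All numbers are illustrative, not real market data.

def total_compute_spend(cost_per_unit: float, demand_elasticity: float,
                        baseline_cost: float = 1.0,
                        baseline_demand: float = 100.0) -> float:
    """Units demanded times cost per unit, with constant-elasticity demand."""
    demand = baseline_demand * (cost_per_unit / baseline_cost) ** (-demand_elasticity)
    return demand * cost_per_unit

before = total_compute_spend(cost_per_unit=1.0, demand_elasticity=1.5)
after = total_compute_spend(cost_per_unit=0.1, demand_elasticity=1.5)  # 10x cheaper

print(before)  # 100.0: baseline spend
print(after)   # ~316.2: compute got 10x cheaper, total spend went UP
```

With elasticity above 1, every efficiency gain expands the market faster than it shrinks the unit price, which is exactly the pattern the DeepSeek episode triggered.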
Remember the power article? We talked about how much energy it takes to keep a single data center running. Now multiply that by the nuclear power agreements big tech is signing.
Microsoft has a 20-year deal to restart Three Mile Island Unit 1 for 835 megawatts. Google is backing Kairos Power for 500 megawatts of small modular reactors. Amazon committed 1.9 gigawatts through 2042 from the Susquehanna nuclear plant.
These companies are not building power plants as a side project. Energy is the limiting factor, and they know it.

AGENTS BECOME THE INTERFACE
The previous article in this series covered how AI agents perceive, reason, and act. That was the theory. Here is the trajectory.
Gartner predicts that 40% of enterprise applications will embed AI agents by the end of 2026, up from less than 5% in 2025. The agentic AI market will grow from $7.8 billion to over $52 billion by 2030.
MCP, the Model Context Protocol, saw broad adoption throughout 2025 and is becoming the standard way agents connect to tools, databases, and APIs. Think of it as USB for AI: a universal interface that lets any agent talk to any system.
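Concretely, MCP messages are JSON-RPC 2.0, and a tool invocation uses the `tools/call` method. Here is a minimal sketch of what such a request looks like on the wire; the tool name and arguments are hypothetical examples, not from any real server:

```python
import json

# A minimal MCP-style tool invocation. MCP is built on JSON-RPC 2.0;
# "tools/call" asks a connected server to run a named tool. The tool
# name and its arguments below are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_database",   # hypothetical tool exposed by a server
        "arguments": {"sql": "SELECT COUNT(*) FROM orders"},
    },
}

wire = json.dumps(request)
print(wire)
```

The "USB for AI" framing comes from exactly this shape: any agent that can emit this envelope can talk to any server that understands it, regardless of who built either side.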
The pattern shifting underneath is multi-agent orchestration. Instead of one general-purpose assistant handling everything, companies deploy teams of specialized agents: one researches, another writes, a third reviews, a fourth deploys. It is the microservices revolution all over again, except the services can think.
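The orchestration pattern itself is simple to sketch. In a real system each role would be backed by a model call with its own prompt and tools; here each agent is a stub function so the pipeline shape is visible:

```python
# A minimal sketch of multi-agent orchestration: specialized agents
# chained in a pipeline, each handling one stage. In practice each
# stub below would wrap a model call; these are placeholders.

from typing import Callable

def researcher(task: str) -> str:
    return f"notes on: {task}"

def writer(notes: str) -> str:
    return f"draft based on {notes}"

def reviewer(draft: str) -> str:
    return f"approved: {draft}"

def run_pipeline(task: str, agents: list[Callable[[str], str]]) -> str:
    """Pass each agent's output to the next, like microservices that think."""
    result = task
    for agent in agents:
        result = agent(result)
    return result

output = run_pipeline("quarterly report", [researcher, writer, reviewer])
print(output)
```

The microservices comparison holds at the architecture level too: each agent has a narrow contract, and the hard problems are the same ones microservices had, namely routing, failure handling, and observability.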
Honestly, I think agents are simultaneously the most overhyped and the most underhyped technology we have. Overhyped because Gartner also predicts that over 40% of agentic AI projects will fail by 2027, mostly due to legacy system incompatibility.
Underhyped because the ones that work will change the fundamental shape of how humans interact with computers.
The keyboard-and-screen interface we have used for fifty years is not the endpoint. It is a waypoint.
THE PHYSICAL WORLD CATCHES UP
Software AI is years ahead of physical AI, but the gap is closing faster than most people expect.
Tesla is targeting 50,000 Optimus humanoid robots in 2026, with consumer sales at $20,000 to $30,000 per unit by 2027. The Gen 3 production line is running at Fremont, though Musk admitted on the Q4 2025 earnings call that none of the current units are doing useful work yet. They are collecting data and learning.
Figure AI is partnering with BMW to deploy its Figure 03 robot, designed for high-volume manufacturing, on warehouse tasks. Whether these timelines hold is an open question; Musk’s track record on manufacturing predictions is, to put it gently, aspirational.
Neuralink has expanded its brain-computer interface trial to 21 participants worldwide. Patients with spinal cord injuries and ALS are controlling digital devices with their thoughts, and one participant operated a robotic arm directly through the neural implant. The FDA granted Breakthrough Device Designation for speech restoration in May 2025. If you had told someone in 2020 that paralyzed patients would be moving robot arms with their minds by 2026, they would have called it science fiction.
Quantum computing crossed a threshold. Google’s Willow chip achieved below-threshold error correction, completing calculations in minutes that would take classical supercomputers billions of years. IBM demonstrated real-time error decoding in under 480 nanoseconds, a full year ahead of schedule.
Neither of these makes quantum computing practical for everyday use yet. But they are to quantum computing what the first transistor was to the integrated circuit: proof that the physics works.
Waymo will expand into up to 20 markets in 2026, having already surpassed 5 million rides. Autonomous vehicles are where the internet was in the mid-1990s: clearly going to be transformative, frustratingly inconsistent, and perpetually two years away from mainstream adoption.

THE QUESTIONS THAT MATTER
Dario Amodei, the CEO of Anthropic, said that AI could eliminate roughly 50% of white-collar entry-level positions within five years. That is not a fringe prediction from a blog post. That is the person building one of the most capable AI systems on the planet telling you what he thinks it will do.
The numbers paint a complicated picture. The World Economic Forum projects 92 million jobs displaced by 2030 but 170 million new jobs created, a net positive of 78 million. The problem is timing.
Goldman Sachs chief economist Jan Hatzius noted that AI investment contributed basically zero to US economic growth in 2025 despite hundreds of billions invested. The economic returns lag the job displacement. Think of it like a factory retooling: the workers are laid off before the new production line is running. If people lose jobs before the new ones materialize, the political backlash could throttle the technology before it delivers its promised benefits.
In the United States, 6.1 million workers face both high AI exposure and low capacity to adapt. Eighty-six percent of them are women, concentrated in clerical and administrative roles. This is not an abstract policy question.
Governments cannot agree on how to regulate any of this. The EU pushes its AI Act toward full enforcement in August 2026, though implementation delays keep surfacing. The United States sprinted in the opposite direction: a December 2025 executive order demanded minimally burdensome AI policy and preempted state-level regulation.
China is enforcing content identification rules for AI-generated material. Three major economies, three entirely different philosophies.
The Future of Life Institute’s Winter 2025 AI Safety Index found that none of the leading AI companies have adequate guardrails to prevent catastrophic misuse or loss of control. Models are increasingly able to distinguish between test environments and real deployment, which makes pre-deployment safety testing less reliable.
The capabilities are compounding quarterly. The governance is not.
WHAT I THINK HAPPENS NEXT
After spending months building this series from the ground up, here is what I believe.
Energy is the real bottleneck, not algorithms. Every trend in this article points to one constraint: electricity. The algorithms will keep getting more efficient and the training costs will keep dropping.
But the demand for compute will grow faster than efficiency can offset it, because intelligence is the one thing humans always want more of once the price drops. The companies signing 20-year nuclear contracts understand this.
AI agents will become the default interface for software within three years. Not for everything, and not all at once. But the pattern is too powerful: describe what you want, let the agent figure out how.
Every previous interface made computers easier for humans to operate. Command lines, GUIs, mobile apps, each one lowered the barrier. Agents invert that relationship entirely, like a car that drives itself while you pick the destination. The computer operates itself, and you supervise.
The governance gap will widen before it narrows. Capabilities are doubling on timelines measured in months. Regulation moves on timelines measured in years.
The EU is already behind on its own enforcement schedule. The US is actively deregulating. Something will eventually force the issue, probably an agent-caused incident large enough to make headlines, but until then, the gap grows.
Physical AI will lag software AI by three to five years. The atoms are harder than the bits. Robotics, brain-computer interfaces, and quantum computing all face manufacturing, safety, and regulatory constraints that software does not.
They will get there. But the timeline is the 2030s, not the 2020s.
I could be wrong about any of these. That is the honest answer.
But if this series has demonstrated anything, it is that technology builds on itself. Each layer enables the next. And right now, every layer is accelerating simultaneously.
A NOTE ON THIS SERIES
This was the twentieth and final article in the From Electricity to AI series. When I started writing the first piece about how electricity actually works, I was not sure anyone would care.
Most tech writing assumes you already know the foundations, or it skips them entirely. I wanted to see what happened if you started from the very bottom and built up.
What surprised me most was how everything traces back to energy. Not metaphorically. Literally.
The speed of your processor, the size of the model you can train, the number of users you can serve, the feasibility of humanoid robotics: all of it comes down to how much electricity you can deliver to how many transistors, how fast, and for how long. I wrote about this in the very first article and did not fully appreciate it myself until somewhere around article fifteen.
If you read some of these, or all of them, thank you. I mean that. I hope you learned as much reading this series as I did writing it.
The goal was never to make you an expert on any of these topics. It was to give you a map, a way to see how the pieces connect so you can evaluate new developments on your own terms. When someone says we need more compute or this model has 400 billion parameters, you now know what that actually means, all the way down to the electrons.
One thing I have started noticing, and this goes beyond the technical, is that interactions with these models often trigger something that looks a lot like addiction among users. The fear of missing out on the next model release, the next capability, the next breakthrough. I see it in myself and I see it in others. The speed of progress creates a constant low-grade anxiety that if you look away for a week, you will fall behind permanently.
That observation is part of why I am also writing a separate series on the brain: how neural pathways form habits, how neurotransmitters drive compulsive behavior, how algorithmic feeds exploit those mechanisms. My hope is that understanding how your own brain works will do for that anxiety what this series did for the technology itself: reduce it. When you understand the machinery, it loses some of its power over you.
I am done with this series, but I am not done writing. Next comes a series on the brain, and possibly something else. The foundations we built here will make both conversations much sharper.
See you there.
T.
References
- Energy and AI: Data Centre Energy Demand - International Energy Agency analysis of global data center power consumption and AI-driven energy demand growth
- AI to Drive 165% Increase in Data Center Power Demand by 2030 - Goldman Sachs research on infrastructure investment requirements and economic projections for AI
- 7 Agentic AI Trends to Watch in 2026 - Industry analysis of Gartner predictions and enterprise agent adoption trends
- How Disruptive is DeepSeek? - Stanford HAI analysis of DeepSeek’s impact on AI development costs and the efficiency revolution
- International AI Safety Report 2026 - Full assessment of AI safety risks, model capabilities, and governance gaps across jurisdictions
- The Economic Potential of Generative AI - McKinsey analysis projecting $13 trillion in global economic impact from AI by 2030
- AI Safety Index Winter 2025 - Future of Life Institute evaluation of AI company safety practices and guardrail adequacy