Welcome to The Closer, where we reveal how power really works—the hidden dynamics, the unwritten rules, the patterns that repeat across industries and eras.
On January 12, 2026, Mark Zuckerberg announced Meta Compute, a new infrastructure initiative designed to build "tens of gigawatts this decade, and hundreds of gigawatts or more over time." The entire US AI industry currently consumes about 5 gigawatts. Zuckerberg is building for an AI future many times larger than today's reality.
Two weeks earlier, while most of Silicon Valley was nursing holiday hangovers, he had pulled the trigger on a $2 billion bet that nobody saw coming. The target? A Singapore-based AI startup called Manus that most people had never heard of. Founded by a 33-year-old Chinese entrepreneur named Xiao Hong, Manus had been racing to raise money at a $2 billion valuation when Meta swooped in and bought the entire company outright.
Zuckerberg's competitors are debating transformer architectures and worrying about burning through venture capital. He's embraced a different philosophy: brute force. Spend whatever it takes. Build whatever infrastructure is needed. Buy whatever talent and technology you can't develop in-house. The cost of missing AI is existential. The cost of overspending is just money.
Call it the Zuckerberg Way: Overspend, never underspend. Buy what you can't build. Pivot without apology. Outcompute everyone.
Every Friday, we go deep on one story that reveals how power really works. Subscribe to never miss an edition of the Back Channel.

The $2 Billion Monday
The Manus acquisition tells you everything about Zuckerberg's approach to artificial intelligence. Here was a company that had launched just ten months earlier, built by a team that had recently laid off most of its Beijing staff and relocated to Singapore to distance itself from China. Yet somehow, this scrappy startup had processed 147 trillion tokens of text and data, supported 80 million virtual machines, and generated over $100 million in annualized revenue.
Manus had built something Meta desperately needed: AI agents that could actually execute complex tasks. While ChatGPT could write you a poem or summarize a document, Manus agents could conduct market research, write and debug code, and analyze vast datasets. The company claimed its agents outperformed OpenAI's Deep Research feature—a bold assertion that Zuckerberg's team apparently believed.
The deal closed on December 29. No lengthy due diligence. No elaborate courtship. Manus was seeking $2 billion in venture funding; Meta offered to buy the whole company for more than that.
"The goal of the acquisition is to give Meta's existing platforms 'a bit of a brain transplant,'" independent technology analyst Carmi Levy told reporters. Brain transplant. That's the urgency driving Zuckerberg. He's not trying to incrementally improve Meta's AI capabilities. He's trying to rewire the company's neural pathways entirely.
The $70 Billion Mistake
Zuckerberg built his career on pattern recognition—spotting what works, copying it ruthlessly, using Meta's distribution advantages to win. When Snapchat threatened Instagram with disappearing stories, he didn't try to out-innovate Evan Spiegel. He copied the format and used Facebook's user base to drive adoption. When TikTok emerged, Meta responded with Reels.
"Move fast and break things" was more than a company motto. It was a philosophy of power: identify what's working elsewhere, replicate it quickly, leverage your distribution. Until late 2021, it worked.
That's when Zuckerberg made the boldest bet of his career: rebranding Facebook as Meta and going all-in on the metaverse. The vision was intoxicating—a virtual world where people could work, play, and socialize in immersive 3D environments. He wasn't copying anymore. He was trying to create the next computing platform from scratch.
The results were brutal. Reality Labs has logged over $70 billion in cumulative losses since late 2020. In Q3 2025 alone, the division lost $4.4 billion on just $470 million in sales. That loss rate would have killed almost any other company.
But Zuckerberg did something most CEOs can't: he admitted the mistake and pivoted. While others would have doubled down—too much ego, too much sunk cost—he began quietly shifting resources to AI as early as 2024.
The pivot became official in January 2026, when Meta laid off 1,500 employees from Reality Labs—about 10% of the division's workforce. Three VR game studios were shuttered: Twisted Pixel, Sanzaru Games, Armature Studio. The company deprioritized VR headsets to focus on AI-powered wearables like the Ray-Ban smart glasses, which saw sales triple in the first half of 2025.
Most telling: Vishal Shah, who had led Meta's metaverse efforts for four years, was reassigned to VP of AI products in October 2025. The metaverse champion was now building artificial intelligence.
The Brute Force Philosophy
The metaverse failure taught Zuckerberg something about technological transitions: the biggest risk isn't overspending on the future. It's underspending and getting left behind.
In a September 2025 podcast interview, Zuckerberg laid it out: "If we end up misspending a couple of hundred billion dollars, I think that that is going to be very unfortunate, obviously. But what I'd say is I actually think the risk is higher on the other side. If you build too slowly, and superintelligence is possible in three years but you built it out assuming it would be there in five years, you'll be out of position on what I think is going to be the most important technology that enables the most new products and innovation and value creation in history."
OpenAI burns through venture funding and worries about its next round. Anthropic carefully manages its research budget. Zuckerberg has a massively profitable advertising business that can subsidize unlimited experimentation.
The numbers are staggering. Meta has committed $72 billion to AI infrastructure in 2025 alone, with plans to spend even more in 2026. The company has pledged $600 billion for US data centers through 2028. And with Meta Compute, Zuckerberg is building enough electricity capacity to power the AI economy of the future.
Not everyone is convinced. "Big Short" investor Michael Burry warned on X the day Meta Compute was announced: "Meta gives in, throwing away its one saving grace. Watch ROIC crash." Burry fears Meta is abandoning its ability to generate enormous profits without sinking vast sums into physical infrastructure—shifting toward a capital-intensive model that could drag down returns on invested capital and make the company look more like a utility.
But since Meta has already poured tens of billions into AI data centers and signaled hundreds of billions more in long-term infrastructure commitments, that train left the station long ago.

The Talent Machine
While infrastructure spending grabbed headlines, Zuckerberg's talent acquisition strategy told the real story. When you can't build fast enough, you buy what you need.
The Scale AI deal in June 2025 set the tone. Meta invested $14.3 billion for a 49% non-voting stake in the data labeling company, while recruiting Scale's founder and CEO, 28-year-old Alexandr Wang, to join Meta as Chief AI Officer leading the new Superintelligence Labs division. Wang had built one of the most successful AI infrastructure companies in Silicon Valley. Zuckerberg wanted that expertise inside Meta.
The Manus deal followed the same pattern. Beyond the AI agent technology, Meta was acquiring Xiao Hong's team and their expertise in building general-purpose AI systems. Manus had reached a revenue run rate of more than $125 million just months after launching. The team understood something about commercializing AI that Meta needed to learn.
The most intriguing hire was Daniel Gross, co-founder of Safe Superintelligence alongside Ilya Sutskever—the OpenAI co-founder and former chief scientist who briefly led the effort to oust Sam Altman. Gross left SSI in late June 2025 to join Meta, where he now leads long-term capacity strategy and supplier partnerships for Meta Compute. Zuckerberg recruited one of the world's leading experts on AI safety and superintelligence to help plan his infrastructure buildout.
In mid-January, Gross posted on X that he's hiring people with backgrounds in "deep learning, supply chains, commodities, semiconductors, sovereigns, energy, Excel, prediction markets, monitoring situations." The list reveals Meta's ambition to build an entire economic infrastructure to support AI computation.
Meta wasn't competing in today's AI market. It was positioning for AGI.
The China Angle
The Manus acquisition showed Zuckerberg's pragmatic approach to geopolitics. Manus was founded in China by Chinese entrepreneurs and backed by Tencent and HongShan Capital (formerly Sequoia China). Normally, an acquisition like that would trigger immediate national security concerns.
Zuckerberg's team had done their homework. Manus had already distanced itself from China throughout 2025, laying off approximately 80 Beijing employees in July with generous severance and moving headquarters to Singapore in June. Meta committed to winding down Manus's remaining China operations and ensuring "no continuing Chinese ownership interests" after the deal closed.
China's Ministry of Commerce announced an export control probe on January 8, 2026. On January 23, Beijing deepened its investigation, now examining potential violations of rules governing cross-border currency flows, tax accounting, and overseas investments. The central question: whether Chinese technology or user data could have been compromised or shared with an American company.
The investigation is already having consequences. Some Manus customers have begun fleeing the platform over data privacy concerns, worried about what Meta's ownership means for their information. Whether the deal ultimately survives Beijing's scrutiny remains an open question.
Zuckerberg had structured the deal to minimize geopolitical risk while maximizing technological gain. But in the new era of US-China tech competition, even the most careful planning may not be enough.
The Infrastructure Play
While competitors focus on building better models, Zuckerberg is focusing on the infrastructure layer. Meta Compute isn't just about data centers; it's about creating a vertically integrated AI stack that no competitor can match.

The team he assembled showed how serious he was. Santosh Janardhan, with Meta since 2009, took charge of technical architecture, software stack, silicon programs, and the entire data center fleet. Daniel Gross handled long-term capacity strategy. And Dina Powell McCormick—Meta's newly appointed president and vice chairman, a former Deputy National Security Advisor under Trump and Assistant Secretary of State under Bush—would work with governments on AI infrastructure deployment.
The nuclear deals announced in January 2026 showed the scale of his thinking. Meta signed agreements with Vistra, TerraPower, and Oklo for up to 6.6 gigawatts of nuclear power—enough to power roughly 5 million homes. This wasn't about meeting today's energy needs. It was about locking in reliable electricity for the massive computing infrastructure he planned to build over the next decade.
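For readers who like to check the math, here's a minimal back-of-the-envelope sketch of how 6.6 gigawatts translates into roughly 5 million homes. The household figure is an assumption on my part (an average US home using about 10,700 kWh a year, roughly 1.2 kW of continuous draw), not a number from Meta:

```python
# Back-of-the-envelope check: how many average US homes could 6.6 GW serve?
# Assumption: a typical US household uses ~10,700 kWh per year (~1.2 kW average draw).
capacity_gw = 6.6
avg_household_kw = 10_700 / (365 * 24)  # ~1.22 kW of continuous demand per home

homes = capacity_gw * 1_000_000 / avg_household_kw  # convert GW to kW, divide by per-home draw
print(f"~{homes / 1e6:.1f} million homes")  # prints ~5.4 million
```

That lands in the same ballpark as the "roughly 5 million homes" figure, though real-world capacity factors and data-center load profiles would shift the number.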
AI models require enormous computation. Computation requires enormous electricity. By controlling the entire energy-to-inference pipeline, Meta could achieve cost advantages impossible for competitors to match.
The Superintelligence Lab
Zuckerberg structured Meta's advanced research differently than you'd expect from a public company. Rather than a traditional R&D division with quarterly milestones and budget constraints, he established Meta Superintelligence Labs with a flat hierarchy and no top-down deadlines.
The philosophy: make "compute per researcher" a competitive advantage. While OpenAI and Anthropic carefully allocate GPU resources across projects, Meta's researchers can access virtually unlimited computing power. A promising experiment needs 10,000 GPUs for a month? Meta can provide them.
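To put that hypothetical experiment in dollar terms, here's a rough sketch. The GPU count and duration come from the example above; the hourly rate is purely an assumed placeholder (on-demand pricing for H100-class hardware varies widely), not a Meta figure:

```python
# Rough cost of the hypothetical experiment above: 10,000 GPUs for one month.
# Assumption: $2.50 per GPU-hour, a placeholder for H100-class on-demand rental rates.
gpus = 10_000
hours = 30 * 24                 # one month of wall-clock time
cost_per_gpu_hour = 2.50        # assumed rate, not a quoted Meta number

gpu_hours = gpus * hours        # 7.2 million GPU-hours
total_cost = gpu_hours * cost_per_gpu_hour
print(f"{gpu_hours:,} GPU-hours, roughly ${total_cost / 1e6:.0f} million")
# -> 7,200,000 GPU-hours, roughly $18 million
```

A single exploratory run at that scale would strain most labs' budgets. Against $72 billion in annual infrastructure spend, it's a rounding error.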
At Davos on January 21, CTO Andrew Bosworth revealed the first results: the Superintelligence Labs team has delivered its first key AI models internally, just six months into its work. Bosworth called the early results "very good"—a sign that Zuckerberg's brute force approach may be gaining traction.
Zuckerberg's July 2025 manifesto on "Personal Superintelligence" made his vision clear: "Meta's vision is to bring personal superintelligence to everyone… distinct from others in the industry who believe superintelligence should be directed centrally towards automating all valuable work."
And his approach to talent: "If you're going to be spending hundreds of billions of dollars on compute, it really makes sense to compete super hard and do whatever it takes to get the 50 or 70, or whatever it is, top researchers."
THE ZUCKERBERG WAY
"The risk, at least for a company like Meta, is probably in not being aggressive enough rather than being somewhat too aggressive."
1. Overspend, don't underspend. The cost of missing AI is existential. The cost of overspending is just money.
2. Buy what you can't build. When you're behind, acquire your way to relevance.
3. Pivot without apology. $70 billion in losses? Cut and move on. No sentiment.
4. Outcompute everyone. Make compute your moat. Build gigawatts while others build models.
The Reckoning
Zuckerberg has assembled the resources—financial, technical, human—to compete at the highest levels of AI development. He's building infrastructure for AI systems far more advanced than anything available today. He's recruited some of the world's leading AI researchers.
The question: Can brute force overcome the uncertainties of frontier research?
This isn't social media, where network effects and distribution advantages could overcome technical limitations. In AI, model quality matters. Research insights are irreplaceable. Algorithm efficiency determines whether a breakthrough is commercially viable or prohibitively expensive.
Zuckerberg is betting that AI success correlates with resources—that enough computing power, enough talented people, enough capital to iterate rapidly will eventually produce the best systems. It's a reasonable hypothesis. Larger, more expensive models do tend to perform better.
OpenAI and Anthropic are betting otherwise: that breakthrough research requires focus, careful resource allocation, deep technical insights that can't be purchased. Throwing money and compute at the problem isn't guaranteed to produce results.
My read: Zuckerberg won't win on research breakthroughs. His best researchers will always have one eye on the stock price. But he doesn't need to win on research. He needs to win on deployment, distribution, and durability. If superintelligence arrives gradually—through incremental improvements rather than sudden breakthroughs—then the company with the most infrastructure, the most users, and the deepest pockets will capture most of the value. And as a huge fan of Manus myself, I read the acquisition as a signal: give these teams free rein and the right incentives, and Meta will sweep in with some extraordinary offerings.
If you’ve read this far, we want to hear from you! We hope The Back Channel is becoming an essential part of your week – an insider guide to the deals that are changing our world. Ideas, feedback, or just want to chat? Please get in touch. Until next time, keep making moves and thinking outside the box. Deals change the world – and so can you.