Mira Murati’s AI startup, Thinking Machines Lab, has forged a multi-year alliance with chipmaker Nvidia that could shift how leading AI projects secure raw compute. The agreement, announced Tuesday, pairs a long-term hardware commitment with a direct strategic investment by Nvidia, with deployments scheduled to begin in 2027.
The essentials of the partnership
Terms were not disclosed, but the announcement specifies that Thinking Machines Lab will deploy a minimum of one gigawatt of Nvidia’s latest infrastructure, the Vera Rubin systems, starting in 2027. Nvidia also said it will make a strategic equity investment in the two-year-old lab.
The startup, led by former OpenAI executive Mira Murati, has raised more than $2 billion since its February 2025 launch from backers including Andreessen Horowitz, Accel, Nvidia and the venture arm of rival chipmaker AMD. Public filings and investor statements put the company’s valuation above $12 billion.
- Hardware commitment: Minimum deployment of Nvidia Vera Rubin systems beginning 2027.
- Financial move: Nvidia will take a strategic stake in the company.
- Business focus: Thinking Machines is building models that emphasize reproducibility and released its first product, Tinker, last fall.
- Leadership shifts: Several early co‑founders have left for roles at larger firms in the past year.
Why this matters now
Access to large-scale GPU or accelerator fleets has become a gating factor for AI development. Securing agreements that guarantee future hardware — and a close integration with a vendor’s architecture — can accelerate model training timelines and reduce the lead time for production deployments.
For Thinking Machines, the deal supplies committed access to top-tier compute capacity and a deeper technical collaboration around training and serving systems. For Nvidia, it reinforces the company’s role as the dominant supplier of AI infrastructure and ties a fast-growing research lab directly to its ecosystem.
Market context and risks
AI firms are competing fiercely for capacity. Nvidia CEO Jensen Huang has estimated that businesses could spend trillions on AI infrastructure by the end of the decade, underscoring why early claims on hardware matter. Large, multi-year arrangements can concentrate demand and influence pricing and availability for other organizations.
At the same time, fast-moving personnel changes at Thinking Machines — including departures of several co-founders to Meta and OpenAI — highlight the churn inside the AI talent market. That turnover can complicate the path from research to reliable products even with ample hardware.
What to watch next
Key open questions remain: the exact size and structure of Nvidia’s investment, the detailed rollout schedule for the Vera Rubin systems, and how tightly Thinking Machines will bind its software stack to Nvidia’s architecture. The company declined to provide details beyond its public announcement.
Potential implications for the broader industry include tighter vendor lock-in for some models, faster timelines for labs with precommitted compute, and pressure on smaller teams that lack long-term hardware guarantees.
Comparisons with other deals are inevitable: in recent years, other major AI players have pursued large compute partnerships, some with very large reported headline figures, illustrating how critical access to accelerators has become in determining who can build and scale advanced models.
As the partnership moves toward the 2027 deployment window, the balance between hardware supply, software portability and organizational continuity will determine whether deals like this widen the lead of well‑funded labs or simply redistribute compute power within the next wave of AI development.