Nvidia’s investment footprint has grown alongside its dominance in AI chips, and the company is now a crucial investor shaping where cutting-edge models, tools and compute get built. According to a PitchBook tally, Nvidia has participated in about 67 venture capital deals over the past year, up from 54 in the prior period, a figure that does not include deals by its formal venture capital arm, NVentures. The thesis is simple: back the “game changers and market makers” who are “expanding the AI pie,” while reinforcing demand for Nvidia’s platforms.
What is unusual about this investment push is not only its scale, but how tightly capital, compute and customers are intertwined. Many of the largest deals pair equity with off-balance-sheet purchase agreements for Nvidia-powered systems, converting equity checks into long-term demand for GPUs, networking and software. It is a vertical strategy, played out in public through stakes in model labs, data infrastructure, chips, robotics and even energy.

A plan to take ownership of the AI stack through strategic bets
Nvidia’s product portfolio maps neatly onto the layers where AI is creating value. At the frontier-model layer, the company has participated in public-facing mega-rounds for OpenAI, Anthropic and xAI, and in rounds for Mistral, Reflection AI, Thinking Machines Lab, Imbue and Reka AI. These bets are also about scaling compute-intensive workloads, the kind featured in the company’s most recent platform roadmaps.
At the developer and application layer, Nvidia has backed Cursor and Poolside in AI coding, Perplexity in AI search, and generative media players such as Runway and Germany’s Black Forest Labs. Its enterprise LLM and tooling bets include Cohere, Together AI, Weka, Scale AI and Kore.ai, companies built around data pipelines, model customization and GPUs at scale.
Infrastructure is another pillar. Nvidia has funded GPU cloud operators and data center builders such as CoreWeave, Lambda, Crusoe, Nscale, and Firmus Technologies. Investments in Ayar Labs and Enfabrica back breakthroughs in interconnect bandwidth and networking. The objective is clear: ensure the capacity exists and that it is tuned to the Nvidia ecosystem.
Tracking the money and math behind AI deals
The headline numbers tell a story of scale. Nvidia invested in OpenAI for the first time as part of a $6.6 billion round and also signed a framework agreement to coordinate future infrastructure investments, according to company disclosures and coverage by major outlets. As part of a broader deal, it agreed to invest up to $10 billion in Anthropic, which has outlined plans to spend billions of dollars on cloud computing, including on Nvidia-based systems. Comparable arrangements have been reported around xAI, in which equity would effectively be exchanged for more Nvidia equipment.
Mistral AI raised a $2 billion round with Nvidia’s backing, supporting open models in Europe. Cursor landed a multibillion-dollar Series D at a sky-high valuation, with Nvidia moving from customer to shareholder as AI coding assistants became the norm for software teams rather than an add-on. Cohere’s Series D brought its valuation to nearly $7 billion, validating Nvidia’s enterprise LLM bet.

On the infrastructure side, Crusoe has raised around $1.4 billion at a $10 billion valuation to build AI data centers; Nscale and Firmus are strengthening large-scale model deployment capacity; Lambda’s funding fuels additional GPU cloud capacity; and earlier backing of CoreWeave confirms Nvidia’s early role in the rise of specialized GPU clouds. In applied AI, Figure AI raised at a reported valuation of $39 billion, and Waabi and Nuro have advanced autonomy with Nvidia as a repeat backer, even though Nuro’s valuation has dropped about 30% from its peak, a reminder that value accrues unevenly across the AI space.
The playbook behind the checks and partnerships
It is less spray-and-pray and more systems design. Investments tend to come with technical alignment: CUDA, networking topologies, and now whole platforms like Grace Blackwell. Startups gain faster access, technical support, and credibility with enterprise buyers. Nvidia, in turn, gains visibility into future demand, co-designs workloads that showcase its latest silicon, and builds software moats around its SDKs and inference stacks.
Reports from PitchBook and Bloomberg suggest that a growing share of rounds involve “circular” elements, equity that goes on to finance the very infrastructure the startup will eventually consume. In practical terms, Nvidia is funding both sides of the AI boom: it not only provides the picks and shovels, it now owns stakes in many of the miners.
Where the risks lie in Nvidia’s AI investment web
Coupling supplier power with ecosystem ownership raises questions. Competitors may chafe at what they see as preferential allocation, and regulators may take interest in the fairness, supply, and exclusivity questions that arise when the GPU market is constrained. There is also classic portfolio risk: not every lab or application will sustain its valuation, and management changes, acquihire-style deals, or market pivots can rewrite a portfolio company’s trajectory. The Inflection saga, a sprint to scale followed by a talent-and-IP-licensing pivot that sent its roadmap back to the drawing board, is a cautionary tale.
Another risk lies in energy and networking. Demand for power, cooling and bandwidth is pushing AI deeper into fusion, optics and networking partnerships. Nvidia’s investments in Commonwealth Fusion and Ayar Labs suggest that advances in computing may be gated less by cores and more by electrons and photons.
What to watch next as Nvidia deepens its AI bets
Look for more capital flowing into data center buildouts in energy-rich geographies, a greater emphasis on small-footprint models running efficiently on next-generation accelerators, and more enterprise platforms moving from copilots to fully agentic workflows. As NVentures and Nvidia’s corporate investments advance in tandem, the company’s real advantage may be the same one that made it the go-to AI chip supplier: its ability to co-architect the future alongside the startups trying to invent it.
