Silicon Valley runs on networking—and not the kind you find on LinkedIn.
As the tech industry pours billions of dollars into AI data centers, chipmakers large and small are ramping up innovation in the technology that connects chips to other chips and server racks to other server racks.
Networking technology has been around since the dawn of computing, connecting mainframes so they could share data. In the world of semiconductors, networking plays a role at almost every level of the stack, from the interconnects between transistors on a chip to the external connections between boxes or racks of chips.
Chip giants like Nvidia, Broadcom, and Marvell already have well-established networking businesses. But in the AI boom, some companies are looking for new networking approaches that can move vast amounts of digital information through data centers. That’s where deep-tech startups like Lightmatter, Celestial AI, and PsiQuantum come in; they use optical technology to accelerate high-speed computing.
Optical technology, or photonics, is entering a new era. According to Pete Shadbolt, co-founder and chief scientific officer of PsiQuantum, the technology was considered “weak, expensive and marginally useful” for 25 years, until the AI boom sparked interest in it. (Shadbolt appeared on a panel last week that WIRED co-hosted.)
Some venture capitalists and institutional investors, hoping to catch the next wave of chip innovation or at least find a suitable acquisition target, are investing billions in startups that have found new ways to speed up data throughput. Shadbolt believes that traditional interconnect technology, which relies on electrons, cannot keep pace with the growing bandwidth demands of AI workloads.
“If you look back historically, covering networking was really boring, because it was switching packets of bits,” says Ben Bajarin, a longtime technology analyst who serves as CEO of research firm Creative Strategies. “Now, because of AI, it’s having to move much stronger workloads, and that’s why you’re seeing innovation around speed.”
Big Chip Energy
Bajarin and others credit Nvidia with having the foresight to recognize the importance of networking when it made two major acquisitions in the technology years ago. In 2020, Nvidia spent about $7 billion to acquire the Israeli firm Mellanox Technologies, which makes high-speed networking solutions for servers and data centers. Shortly afterward, Nvidia purchased Cumulus Networks, a maker of Linux-based networking software. This was a turning point for Nvidia, which correctly bet that the GPU and its parallel-computing capabilities would become more powerful when clustered with other GPUs and deployed in data centers.
While Nvidia dominates in vertically integrated GPU stacks, Broadcom has become a major player in custom chip accelerators and high-speed networking technology. The $1.7 trillion company works closely with Google, Meta, and most recently OpenAI on chips for data centers. It is also at the forefront of silicon photonics. And last month, Reuters reported that Broadcom is preparing a new networking chip called Thor Ultra, designed to provide a “critical link between AI systems and the rest of the data center.”
On its earnings call last week, semiconductor design giant Arm announced plans to acquire the networking company DreamBig for $265 million. DreamBig, in partnership with Samsung, creates AI chiplets—small, modular circuits that are designed to be packaged together into larger chip systems. The startup has “interesting intellectual property … [around] scale-up and scale-out, [which] is very important for networking,” Arm CEO Rene Haas said on the earnings call. (Scaling up means connecting components and sending data up and down a chip cluster; scaling out means connecting racks of chips with other racks.)
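For readers who want that distinction made concrete, here is a minimal, purely illustrative sketch in Python. The class names, rack sizes, and bandwidth figures are hypothetical, chosen only to show the difference between scaling up and scaling out; they are not drawn from DreamBig’s or Arm’s actual designs.

    # Purely illustrative: a toy model of "scale-up" (links between chips inside a
    # rack) versus "scale-out" (links between racks). All names and numbers are
    # hypothetical examples.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Rack:
        name: str
        num_chips: int       # accelerators connected to one another inside the rack
        scale_up_gbps: int   # per-link bandwidth between chips in the same rack

    @dataclass
    class Cluster:
        racks: List[Rack] = field(default_factory=list)
        scale_out_gbps: int = 0   # per-link bandwidth between racks

        def scale_up(self, rack: Rack, extra_chips: int) -> None:
            # Scaling up: pack more chips into an existing rack.
            rack.num_chips += extra_chips

        def scale_out(self, rack: Rack) -> None:
            # Scaling out: add another rack and connect it to the rest.
            self.racks.append(rack)

        def total_chips(self) -> int:
            return sum(r.num_chips for r in self.racks)

    cluster = Cluster(scale_out_gbps=400)
    cluster.scale_out(Rack("rack-0", num_chips=72, scale_up_gbps=900))
    cluster.scale_out(Rack("rack-1", num_chips=72, scale_up_gbps=900))
    cluster.scale_up(cluster.racks[0], extra_chips=8)
    print(cluster.total_chips())  # 152 chips spread across two connected racks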
Light On
Nick Harris, CEO of Lightmatter, has pointed out that the amount of computing power required for AI now doubles every three months – much faster than Moore’s Law. Computer chips are getting bigger and bigger. “Whenever you’re at the cutting edge of the biggest chips you can make, all the subsequent performance comes from connecting the chips together,” Harris says.
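As a rough back-of-the-envelope comparison (assuming Harris’s three-month doubling figure and the roughly two-year doubling usually attributed to Moore’s law), the gap compounds quickly over a single year:

    \[
      \underbrace{2^{12/3} = 2^{4} = 16\times}_{\text{AI compute demand, doubling every 3 months}}
      \qquad \text{vs.} \qquad
      \underbrace{2^{12/24} \approx 1.4\times}_{\text{Moore's law, doubling every \(\sim\)2 years}}
    \]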
His company’s approach breaks with traditional networking technology: rather than relying on electrons, Lightmatter makes silicon photonics that link chips together with light. It claims to have built the world’s fastest photonic engine for AI chips, essentially a 3D stack of silicon connected by light-based interconnects. The startup has raised more than $500 million over the past two years from investors like GV and T. Rowe Price, and its valuation reached $4.4 billion last year.
