JP Morgan says Nvidia is gearing up to sell entire AI servers instead of just AI GPUs and components — Jensen’s master plan of vertical integration will boost Nvidia profits, purportedly starting with Vera Rubin

The launch of Nvidia’s Vera Rubin platform for AI and HPC next year could mark a significant shift in the AI hardware supply chain, as Nvidia reportedly plans to supply its partners with fully assembled Level-10 (L10) VR200 compute trays with all compute hardware, cooling systems, and interfaces pre-installed, according to JPMorgan (via @Jukanlosreve). The move would leave major ODMs with far less design and integration work, making their lives easier, but it would also shift margin from them to Nvidia. The information remains unofficial at this stage.

Starting with the VR200 platform, Nvidia is reportedly preparing to take over production of fully built L10 compute trays with Vera CPUs, Rubin GPUs, and a cooling system pre-installed, rather than letting hyperscalers and ODM partners build their own motherboards and cooling solutions. This wouldn’t be the first time the company has supplied partially integrated server sub-assemblies to its partners: it did so with its GB200 platform, supplying the entire Bianca board with key components already installed. At the time, however, that could be considered an L7 – L8 integration, whereas the company is now reportedly considering moving up to L10, with the entire tray assembly – accelerator, CPU, memory, NIC, power-delivery hardware, midplane interface, and liquid-cooling cold plates – sold as a pre-built, tested module.


