
Big Tech’s AI-fueled memory shortage is going to be the defining story of the PC industry for 2026 and beyond. Standalone, direct-to-consumer RAM kits were some of the first products to see their prices increase by 300 or 400 percent by the end of 2025; SSD prices have also increased significantly, albeit more modestly.
The remainder of 2026 will be about where, how and to what extent those price spikes flow into computers, phones and other components that use RAM and NAND chips – areas where existing product inventories and long-term supply contracts negotiated by the big companies have so far helped keep prices from rising too much.
This week, we’re seeing signs that the RAM shortage is starting to impact the GPU market – Asus caused a bit of a stir when it apparently announced it was discontinuing its GeForce RTX 5070 Ti.
Although the company has since tried to walk back this announcement, if you’re a GPU maker, there’s a strong argument for discontinuing this model or deprioritizing it in favor of other GPUs. The 5070 Ti uses 16GB of GDDR7 alongside a partially disabled version of Nvidia’s GB203 GPU silicon. That’s the same chip and the same amount of RAM as the higher-end RTX 5080 – so the thinking goes, why keep making a graphics card with a $749 MSRP when the same core parts can go into a card with a $999 MSRP?
Whether or not Asus or another company is canceling production, you can see why GPU makers would be attracted to this argument: street prices for RTX 5070 Ti models right now start in the $1,050 to $1,100 range on Newegg, while RTX 5080 cards start in the $1,500 to $1,600 range. Although the 5080 may require a more robust board, heatsink, and other components than the 5070 Ti, if you’re trying to maximize the money made per GPU die and per gigabyte of RAM, it makes sense to shift that allocation toward the more expensive cards.
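If it helps to see that arithmetic spelled out, here’s a rough back-of-the-envelope sketch in Python. The MSRPs and lower-bound street prices are the ones quoted above; the per-gigabyte framing is just one illustrative way to slice them, not a claim about any board partner’s actual costs.

```python
# Rough comparison of revenue per gigabyte of GDDR7 for the RTX 5070 Ti
# vs. the RTX 5080. Both cards pair a GB203 die with 16GB of GDDR7, so
# the same memory allocation brings in more money on the pricier card.

VRAM_GB = 16  # both cards ship with 16GB of GDDR7

cards = {
    "RTX 5070 Ti": {"msrp": 749, "street": 1050},  # low end of Newegg range
    "RTX 5080":    {"msrp": 999, "street": 1500},
}

for name, prices in cards.items():
    print(f"{name}: ${prices['msrp'] / VRAM_GB:.0f}/GB at MSRP, "
          f"${prices['street'] / VRAM_GB:.0f}/GB at street price")

# How much more the 5080 brings in for the same die and memory allocation
msrp_ratio = cards["RTX 5080"]["msrp"] / cards["RTX 5070 Ti"]["msrp"]
street_ratio = cards["RTX 5080"]["street"] / cards["RTX 5070 Ti"]["street"]
print(f"The 5080 brings in roughly {msrp_ratio:.2f}x the revenue at MSRP "
      f"and {street_ratio:.2f}x at current street prices")
```

Run that and the 5080 works out to roughly 1.3x the revenue at MSRP and about 1.4x at today’s street prices for the same GB203 die and 16GB of GDDR7, which is the whole incentive in a sentence.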