Nvidia is using misleading practices and abusing its market dominance to quash competitors, according to Cerebras Systems CEO Andrew Feldman, after the firm unexpectedly announced its latest GPU product roadmap in October 2023.
Nvidia outlined new graphics cards set for annual release between 2024 and 2026, adding to the industry-leading A100 and H100 GPUs currently in such high demand, with organizations across the business world snapping them up for generative AI workloads.
But Feldman, speaking to HPCwire, labelled this news a “predatory pre-announcement”, highlighting that the firm is under no obligation to follow through on releasing any of the components it has teased. By doing this, he speculated, Nvidia has only confused the market, especially given that it was roughly a year late with the H100 GPU. And he doubts Nvidia can follow through on this strategy, or would even want to.
Nvidia is just ‘throwing sand up in the air’
Nvidia teased yearly leaps on a single architecture in its announcement, with Hopper Next following the Hopper GPU in 2024, and the Ada Lovelace-Next GPU, a successor to the Ada Lovelace graphics card, set for release in 2025.
“Companies have been making chips for a long time, and nobody has ever been able to succeed on a one-year cadence because the fabs don’t change at a one-year pace,” Feldman countered to HPCwire.
“In many ways, it has been a terrible stretch of time for Nvidia. Stability AI said they were going to go on Intel. Amazon said Anthropic was going to run on them. We announced a monstrous deal that would produce enough compute so it would be clear that you could build… large clusters with us.
“[Nvidia’s] response, not surprising to me, in the strategy realm, is not a better product. It’s… throw sand up in the air and move your hands a lot. And you know, Nvidia was a year late with the H100.”
Feldman’s company has designed the world’s largest AI chip, the Cerebras Wafer-Scale Engine 2 – which measures 46,225 mm² and contains 2.6 trillion transistors across 850,000 cores.
He told the New Yorker that big chips are better than smaller ones because cores communicate faster when they are on the same chip rather than scattered across a server room.
