Supermicro last Wednesday introduced one of the industry's broadest portfolios of new GPU systems based on the NVIDIA reference architecture, featuring the latest NVIDIA GH200 Grace Hopper Superchip and NVIDIA Grace CPU Superchip. The new modular architecture is designed to standardize AI infrastructure and accelerated computing in compact 1U and 2U form factors while providing maximum flexibility and expandability for current and future GPUs, DPUs, and CPUs. Supermicro's advanced liquid-cooling technology enables very high-density configurations, such as a 1U 2-node configuration with two NVIDIA GH200 Grace Hopper Superchips integrated with a high-speed interconnect. Supermicro can deliver thousands of rack-scale AI servers per month from facilities worldwide and ensures plug-and-play compatibility.
Charles Liang, president and CEO of Supermicro:
Supermicro is a recognized leader in driving today's AI revolution, transforming data centers to deliver the promise of AI to many workloads. It is critical for us to bring systems that are highly modular, scalable, and universal for rapidly evolving AI technologies. Supermicro's NVIDIA MGX-based solutions show that our building-block strategy enables us to bring the latest systems to market quickly, and that they are the most workload-optimized in the industry. By collaborating with NVIDIA, we are helping accelerate time to market for enterprises developing new AI-enabled applications, simplifying deployment and reducing environmental impact.

The range of new servers incorporates the latest industry technology optimized for AI, including NVIDIA GH200 Grace Hopper Superchips, BlueField, and PCIe 5.0 EDSFF slots.
