Graphcore announces Supermicro partnership
Graphcore customers can now specify Supermicro servers as part of their IPU-POD configuration from Graphcore elite partners, following the successful qualification of the first Supermicro Ultra system.
The AS-1124US-TNRP, the first of Supermicro’s high-performance enterprise-class Ultra servers to be approved for use in IPU-POD systems, features the latest third-generation AMD EPYC™ processors.
As part of a Graphcore IPU-POD, Supermicro Ultra servers will help machine intelligence practitioners develop and deploy state-of-the-art models, as well as accelerate widely used AI applications.
“Supermicro is one of the most trusted names in the business and its high-performance Ultra servers are the perfect complement to Graphcore’s made-for-AI Intelligence Processing Unit and scale-out systems,” explained Graphcore’s Tom Wilson.
IPU-M2000s and host servers within an IPU-POD system can be configured in different ratios, helping to optimise TCO around the varying server requirements of specific machine intelligence workloads. Server-intense applications such as computer vision can benefit from a higher server-to-IPU ratio than natural language processing workloads, for example.
“Graphcore has really thought through the architecture of its IPU-POD data centre systems, and how to get the best out of different servers for different AI workloads,” said Raju Penumatcha, SVP and Chief Product Officer at Supermicro. “We expect that Supermicro’s Ultra and other servers, used in conjunction with features like variable IPU-to-server ratio will deliver incredible results.”
IPU-POD is Graphcore’s scale-out machine intelligence solution, based around multiple instances of the IPU-M2000 – the 1 PetaFlop, 1U data center AI blade – plus a range of approved host servers and switches.
IPU-Fabric provides high-bandwidth connectivity, with compiled communications and compute managed by the Poplar software platform. IPU-PODs are currently available as POD4, POD16 and POD64.
The IPU-POD16 Direct Attach (DA), which features four IPU-M2000s directly attached to a host server, is ideal for AI engineers getting started with IPU evaluation, proof-of-concept development and pilots. The IPU-POD64 features 16 IPU-M2000s, one to four host servers and two switches, and is ideal for scaling out larger models and for production workloads.
As additional Supermicro servers are qualified for IPU-POD systems, they will be added to Graphcore’s approved server list, available on the Graphcore website.