Nvidia has unveiled the world’s 22nd fastest supercomputer – DGX SuperPOD – which it says provides AI infrastructure that meets the massive demands of the company’s autonomous-vehicle deployment programme.

The system was built in just three weeks with 96 Nvidia DGX-2H supercomputers and Mellanox interconnect technology. Delivering 9.4 petaflops of processing capability, it has the muscle for training the vast number of deep neural networks required for safe self-driving vehicles.

Nvidia says customers can buy the system in whole or in part from any DGX-2 partner, based on its DGX SuperPOD design.

AI training of self-driving cars is the ultimate compute-intensive challenge.

A single data-collection vehicle generates 1TB of data per hour. Multiply that by years of driving over an entire fleet, and you quickly get to petabytes of data. That data is used to train algorithms on the rules of the road – and to find potential failures in the deep neural networks operating in the vehicle, which are then re-trained in a continuous loop.
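The scale of that accumulation can be sketched with a back-of-envelope calculation. Only the 1TB-per-hour figure comes from the article; the fleet size and driving schedule below are purely illustrative assumptions:

```python
# Illustrative estimate of fleet data accumulation.
# Only the 1 TB/hour rate is from the article; the fleet size
# and driving schedule are hypothetical assumptions.
TB_PER_HOUR = 1            # per data-collection vehicle (from the article)
fleet_size = 100           # assumption
hours_per_day = 8          # assumption
days = 365 * 2             # two years of driving (assumption)

total_tb = TB_PER_HOUR * fleet_size * hours_per_day * days
total_pb = total_tb / 1024  # 1 PB = 1024 TB

print(f"{total_pb:.0f} PB")  # ~570 PB even for this modest fleet
```

Even these conservative assumptions put the total in the hundreds of petabytes, which is why training infrastructure at this scale becomes the bottleneck.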

“AI leadership demands leadership in compute infrastructure,” said Clement Farabet, vice president of AI infrastructure at Nvidia. “Few AI challenges are as demanding as training autonomous vehicles, which requires retraining neural networks tens of thousands of times to meet extreme accuracy needs. There’s no substitute for massive processing capability like that of the DGX SuperPOD.”


Powered by 1,536 Nvidia V100 Tensor Core GPUs interconnected with Nvidia NVSwitch and Mellanox network fabric, the DGX SuperPOD delivers performance unmatched by any supercomputer of its size.

The system is hard at work around the clock, optimising autonomous driving software and retraining neural networks at a much faster turnaround time than previously possible.
For example, the DGX SuperPOD hardware and software platform takes less than two minutes to train ResNet-50. When this AI model came out in 2015, it took 25 days to train on the then state-of-the-art system, a single Nvidia K80 GPU. DGX SuperPOD delivers results that are 18,000x faster.
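The quoted speedup is consistent with the article's own figures, taking "less than two minutes" as two minutes:

```python
# Check the 18,000x speedup claim against the article's numbers.
k80_minutes = 25 * 24 * 60   # 25 days on a single K80, in minutes
superpod_minutes = 2         # "less than two minutes" on DGX SuperPOD

speedup = k80_minutes / superpod_minutes
print(f"{speedup:,.0f}x")    # prints "18,000x"
```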

While other TOP500 systems with similar performance levels are built from thousands of servers, the DGX SuperPOD occupies roughly 1/400th of the space of its ranked neighbours.

Nvidia DGX systems have already been adopted by other organisations with massive computational needs of their own, ranging from automotive companies such as BMW, Continental, Ford and Zenuity to enterprises including Facebook, Microsoft and Fujifilm, as well as research leaders like Riken and US Department of Energy national labs.