MLPerf HPC Benchmarks Show the Power of HPC+AI 

NVIDIA-powered systems won four of five tests in MLPerf HPC 1.0, an industry benchmark for AI performance on scientific applications in high performance computing.

Recent advances in molecular dynamics, astronomy and climate simulation all used HPC+AI to make scientific breakthroughs. It's a trend driving the adoption of exascale AI for users in both science and industry.

They're the latest results from MLPerf, a set of industry benchmarks for deep learning first released in May 2018. MLPerf HPC addresses a style of computing that speeds and augments simulations on supercomputers with AI.

What the Benchmarks Measure

MLPerf HPC 1.0 measured training of AI models in three typical workloads for HPC.

CosmoFlow estimates details of objects in images from telescopes.
DeepCAM tests detection of hurricanes and atmospheric rivers in climate data.
OpenCatalyst tracks how well systems predict forces among atoms in molecules.

Each test has two parts. A measure of how fast a system trains a model is called strong scaling. Its counterpart, weak scaling, is a measure of maximum system throughput, that is, how many models a system can train in a given time.
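The distinction is easy to make concrete. Here's a minimal sketch in Python with made-up numbers (hypothetical timings, not measured MLPerf results):

```python
# Strong scaling: train one model faster by adding GPUs.
time_512_gpus_min = 40.0    # hypothetical minutes to train on 512 GPUs
time_1024_gpus_min = 22.0   # hypothetical minutes on 1,024 GPUs
speedup = time_512_gpus_min / time_1024_gpus_min
print(f"strong-scaling speedup from doubling GPUs: {speedup:.2f}x")

# Weak scaling: train many copies of the model at once, measure throughput.
simultaneous_jobs = 256      # e.g., 256 concurrent training jobs
minutes_per_model = 35.0     # hypothetical time for each job to finish
models_per_hour = simultaneous_jobs * 60.0 / minutes_per_model
print(f"weak-scaling throughput: {models_per_hour:.0f} models/hour")
```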

NVIDIA delivered leadership results in both the speed of training a model and per-chip performance.

Compared to the best results in strong scaling from last year's MLPerf 0.7 round, NVIDIA delivered 5x better results for CosmoFlow. In DeepCAM, we delivered nearly 7x more performance.

In the weak-scaling category, we led DeepCAM using 16 nodes per job and 256 simultaneous jobs. All our tests ran on NVIDIA Selene (pictured above), our in-house system and the world's largest industrial supercomputer.

The Perlmutter Phase 1 system at Lawrence Berkeley National Lab led in strong scaling in the OpenCatalyst benchmark using 512 of its 6,144 NVIDIA A100 Tensor Core GPUs.

The latest results demonstrate another dimension of the NVIDIA AI platform and its performance leadership. They mark the eighth straight time NVIDIA has delivered top scores in MLPerf benchmarks, which span AI training and inference in the data center, the cloud and the network's edge.

A Broad Ecosystem

Seven of the eight participants in this round submitted results using NVIDIA GPUs.

They include the Jülich Supercomputing Centre in Germany, the Swiss National Supercomputing Centre and, in the U.S., the Argonne and Lawrence Berkeley National Laboratories, the National Center for Supercomputing Applications and the Texas Advanced Computing Center.

"With the benchmark test, we have shown that our machine can unfold its potential in practice and contribute to keeping Europe on the ball when it comes to AI," said Thomas Lippert, director of the Jülich Supercomputing Centre, in a blog.

The MLPerf benchmarks are backed by MLCommons, an industry group led by Alibaba, Google, Intel, Meta, NVIDIA and others.

How We Did It

We applied NVIDIA SHARP, a key component of NVIDIA Magnum IO. It provides in-network computing to accelerate communications and offload data operations to the NVIDIA Quantum InfiniBand switch.
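For practitioners, SHARP offload is typically requested through NCCL rather than invoked directly. Here's a minimal sketch, assuming a cluster with an InfiniBand fabric and the NCCL-SHARP plugin installed; NCCL_COLLNET_ENABLE is a real NCCL variable, while the launch setup (torchrun-style rendezvous) and tensor sizes are illustrative:

```python
import os

import torch
import torch.distributed as dist

# Ask NCCL to use CollNet/SHARP for collectives; must be set before
# the first communicator is created.
os.environ.setdefault("NCCL_COLLNET_ENABLE", "1")

# Assumes RANK, WORLD_SIZE and MASTER_ADDR come from the job launcher.
dist.init_process_group(backend="nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

grads = torch.ones(1 << 20, device="cuda")
dist.all_reduce(grads)  # with SHARP, the reduction runs in the switch
```

Because the reduction happens in the network rather than on the GPUs, the per-GPU communication cost stays roughly flat as more nodes join the job.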

In this round, we also tuned our code with tools available to everyone, such as NVIDIA DALI to accelerate data processing and CUDA Graphs to minimize small-batch latency while scaling efficiently to 1,024 or more GPUs.
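To give a flavor of both tools: first, a minimal DALI sketch; fn.readers.file, fn.decoders.image and fn.resize are real DALI operators, while the dataset path and image sizes are placeholders:

```python
import nvidia.dali.fn as fn
from nvidia.dali import pipeline_def

@pipeline_def(batch_size=64, num_threads=4, device_id=0)
def train_pipeline(data_dir):
    files, labels = fn.readers.file(file_root=data_dir, random_shuffle=True)
    images = fn.decoders.image(files, device="mixed")  # decode on the GPU
    images = fn.resize(images, resize_x=256, resize_y=256)
    return images, labels

pipe = train_pipeline("/data/train")  # hypothetical dataset location
pipe.build()
images, labels = pipe.run()  # batches arrive already in GPU memory
```

And second, the CUDA Graphs capture-and-replay pattern as exposed in PyTorch; torch.cuda.CUDAGraph and torch.cuda.graph are the real API, while the model and batch shapes are placeholders:

```python
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024), torch.nn.ReLU(), torch.nn.Linear(1024, 10)
).cuda()
static_input = torch.randn(8, 1024, device="cuda")  # small batch

# Warm up on a side stream before capture, as the PyTorch docs advise.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        model(static_input)
torch.cuda.current_stream().wait_stream(s)

# Capture one forward pass into a graph.
g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    static_output = model(static_input)

# Replay: one launch per batch replaces dozens of per-kernel launches,
# cutting the CPU overhead that dominates at small batch sizes.
for batch in torch.randn(100, 8, 1024, device="cuda"):
    static_input.copy_(batch)
    g.replay()  # static_output now holds this batch's results
```

The benefit of graphs grows with scale: as the per-GPU batch shrinks at 1,024+ GPUs, kernel launch overhead becomes the bottleneck that replay avoids.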

For a deeper dive into how we used these tools, see our developer blog.

All the software we used for our submissions is available from the MLPerf repository. We regularly add such code to the NGC catalog, our software hub for pretrained AI models, industry application frameworks, GPU applications and other software resources.

The strong showing is the result of a mature NVIDIA AI platform that includes a full stack of software.
