Mega Factor: Three Forces Fuse for Computing’s Million-X Leap

Scientists are discovering the jet fuel for the million-x advances their work now demands.

They face rising mountains of data and soaring computational demands they can't overcome by relying solely on Moore's law, the sputtering combustion engine of yesterday's systems.

So they're strapping together a trio of thrusters for the rapid acceleration they need.

Accelerated computing at scale with AI promises those speedups.

Speeding Up and Scaling Out

Accelerated computing is one of tech's three modern engines. Over the past decade, it has delivered 1,000x performance gains, thanks to advances across five generations of GPUs and the full stack of software we've built on top of them.

We created many key innovations to enable this scaling, such as our Megatron software, Magnum IO for multi-GPU and multi-node processing, and SHARP for in-network computing.

Scaling is a second engine, delivering gains of nearly 100,000x. That's because the data center is the new unit of computing.

For instance, in 2015 it took a single Kepler GPU almost a month to train ResNet-50, a popular computer vision model. Today, we train that same model in less than half a minute on Selene, the world's most powerful commercial supercomputer, which packs thousands of NVIDIA Ampere architecture GPUs.
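
As a rough back-of-the-envelope check, treating "almost a month" as about 25 days and "less than half a minute" as about 30 seconds (both assumptions, not figures from the post), the gap works out to tens of thousands of times, consistent with the near-100,000x scaling gain cited above:

```python
# Rough speedup estimate for the ResNet-50 example above.
# Assumptions: "almost a month" ~ 25 days, "less than half a minute" ~ 30 seconds.
kepler_seconds = 25 * 24 * 3600   # 2015: training time on a single Kepler GPU
selene_seconds = 30               # today: training time on Selene
print(f"~{kepler_seconds / selene_seconds:,.0f}x faster")  # ~72,000x
```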

The Dawn of Deep Learning

The third and most transformational force of our time is AI.

Last year, deep learning powered a simulation of 305 million atoms over a 1-millisecond timescale, revealing the inner workings of the SARS-CoV-2 virus. That work marked a more than 10-millionfold increase over a then state-of-the-art simulation of 1 million atoms for 20 nanoseconds 15 years earlier.
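
Comparing the two runs by total simulated atom-time (atom count multiplied by simulated duration, used here only as a rough yardstick) shows where that figure comes from:

```python
# Atom-time comparison of the two molecular dynamics runs described above.
recent  = 305e6 * 1e-3   # 305 million atoms for 1 millisecond   (atom-seconds)
earlier = 1e6 * 20e-9    # 1 million atoms for 20 nanoseconds    (atom-seconds)
print(f"~{recent / earlier:,.0f}x")   # ~15,250,000x -- over 10-millionfold
```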

That's why the combination of AI and high performance computing is sweeping through the scientific community. Researchers published nearly 5,000 papers on AI+HPC work on arXiv last year, up from fewer than 100 five years ago.

One of the most recent papers came from NVIDIA researchers. It showed a way to combine neural networks with classical physics equations to deliver 1,000x speedups in traditional simulations.
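
The paper's exact method isn't described here, but the general idea of blending a neural network with a classical physics equation can be sketched with a physics-informed loss. The snippet below is a minimal, generic PyTorch illustration (not NVIDIA's code or the Modulus API): a small network learns to satisfy the 1D Poisson problem u''(x) = -π² sin(πx), u(0) = u(1) = 0, purely by penalizing the equation's residual at random sample points.

```python
# Minimal physics-informed neural network sketch (generic illustration only).
# Exact solution of the target problem is u(x) = sin(pi * x).
import math
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(256, 1, requires_grad=True)              # collocation points in (0, 1)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = d2u + math.pi ** 2 * torch.sin(math.pi * x)  # PDE residual u'' + pi^2 sin(pi x)
    boundary = net(torch.tensor([[0.0], [1.0]]))            # enforce u(0) = u(1) = 0
    loss = (residual ** 2).mean() + (boundary ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Real workloads involve far richer equations and geometries; the point is simply that a trained surrogate like this can answer queries much faster than re-running a classical solver, which is how such methods deliver the large speedups described above.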

Accelerating Drug Discovery

Today, the combination of accelerated computing, massive scaling and AI is advancing science and industrial computing.

No effort could be more crucial than speeding drug discovery to treat diseases. It's painstaking work that requires decoding protein structures in 3D to see how they function, then identifying the chemical compounds that can keep them from infecting healthy cells.

Conventional approaches using X-rays and electron microscopes have decoded only about 17 percent of the roughly 25,000 human proteins. DeepMind used an ensemble of AI models in its AlphaFold system to make a giant leap, predicting the 3D structures of more than 20,000 human proteins.

Scientists at NVIDIA, Caltech and startup Entos blended machine learning and physics to create OrbNet, speeding up molecular simulations by several orders of magnitude. Building on that work, Entos can accelerate its simulations of interactions between proteins and drug candidates 1,000x, finishing in three hours work that would have taken more than three months.

Understanding a Changing Climate

It's a similar story in other fields. Scientists hope to quickly run global climate simulations at kilometer-scale resolution to help us adapt to changing weather patterns and better prepare for disasters.

To track clouds and storm patterns accurately, they need to work at a resolution of one meter. That requires a massive 100 billion times more computing power.

At the pace of Moore's law, we wouldn't get there until 2060. So scientists seeking millionfold leaps are building digital twins of our planet with accelerated computing and AI at scale.

Industries Spawn Digital Twins

Siemens Energy used the NVIDIA Modulus AI framework running on many GPUs in the cloud to simulate an entire power plant. The digital twin can predict mechanical failures caused by the corrosive effects of steam, reducing downtime, saving money and keeping the lights on.

Researchers are already using these techniques to build digital twins of cities and factories.

It's a kind of simulation technology that promises more efficient farms, hospitals and advances in every industry. That's why we built Modulus: it eases the job of creating AI-powered, physically accurate simulations.

It's one more tool fueled by today's new engines of computing to enable the next millionfold leap.

Accelerated computing with AI at data center scale will deliver millionfold boosts in performance to tackle challenges such as understanding climate change, discovering drugs, fueling industrial transformations and much more.

To get the big picture, watch NVIDIA CEO Jensen Huang's GTC keynote address streaming on Nov. 9 and available in replay.
