Nvidia Unveils New Volta Architecture for Supercomputer GPUs


The Tesla V100 is the first Volta-based GPU, and it will soon find its way to the artificial intelligence and machine learning cloud platforms of Microsoft and Amazon.


Nvidia Tesla V100

SAN JOSE, Calif.—Nvidia refreshed its lineup of GPUs for deep learning and artificial intelligence applications on Wednesday with the new 5,120-core, 7.5-teraflop Tesla V100 Volta.

The new processor is part of Nvidia's quest to come up with a new way to continually improve computing performance in the aftermath of Moore's Law, which many industry leaders agree is all but dead. Instead of boosting processor clock speeds or cramming more transistors onto already-crowded silicon, Nvidia is championing GPU-accelerated computing, which the company's CEO Jensen Huang (pictured above) said can deliver a 150 percent performance boost every year.

The Tesla V100, built on a brand-new architecture called "Volta," represents the latest such boost. It is the "next big leap into the new world" of AI and high-performance computing, Huang said at Nvidia's developer conference here. The V100 will begin shipping by the end of the year to data centers owned by Amazon, Microsoft, and other cloud computing providers in several different configurations.

Nvidia Tesla V100

The most common off-the-shelf version of the V100 is a $149,000 supercomputer called the DGX-1, which incorporates eight V100 processors. There is also a $69,000, liquid-cooled mini supercomputer called the DGX Station, aimed at researchers who aren't using the cloud, which is powered by four V100s.

The DGX-1 made its debut last year with the V100's predecessor, the Tesla P100. Thanks to the Volta architecture, the V100-powered supercomputers offer a five-fold improvement in peak teraflops over Pascal chips, the current-generation Nvidia GPU architecture, and 15 times more than the Maxwell architecture delivers.

The V100 will power servers in the Microsoft Azure cloud and Amazon Web Services, the two titans of cloud offerings for deep learning. Once installed, they will support software like Microsoft's Cognitive Toolkit and MXNet, the open-source deep learning framework that is Amazon's preferred AI framework. Nvidia's V100 will also likely power Facebook's Caffe2 framework.
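For readers curious what that support looks like in practice, here is a minimal sketch, not taken from Nvidia's announcement, of how a framework such as MXNet places work on an Nvidia GPU. It assumes an MXNet build with CUDA support and at least one CUDA-capable device, such as a Tesla V100, visible to the process.

    # Illustrative only: run a matrix multiply on the first Nvidia GPU with MXNet.
    # Assumes MXNet was installed with CUDA support (e.g. an mxnet-cu* package).
    import mxnet as mx

    ctx = mx.gpu(0)  # first CUDA device; a DGX-1 would offer eight to choose from
    a = mx.nd.random.uniform(shape=(1024, 1024), ctx=ctx)
    b = mx.nd.random.uniform(shape=(1024, 1024), ctx=ctx)
    c = mx.nd.dot(a, b)          # the matrix multiply executes on the GPU
    print(c.context, c.shape)    # confirms the result lives on gpu(0)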

"Our strategy is to create the most efficient platform for deep learning," Huang said. He offered several demonstrations of the P100's potential applications, including an artwork algorithm that can merge the color palette of one photo (an orange sunset, for instance) with the subject of another (a beach and clouds, for instance) into an entirely new image.

Of course, the chief advantage of hosting AI processing power in the cloud is that engineers can do just about whatever they want with it. To that end, Nvidia also plans to launch its own GPU Cloud service, currently in an invite-only beta, which allows anyone to upload their machine learning algorithms to Volta servers.

"One of the best ways to enjoy deep learning is for someone else to build this incredibly complicated supercomputer on your behalf," Huang said.
