Jensen Huang answers questions about the company’s broad AI vision on display at this week’s virtual GTC developer event

Against the backdrop of this week’s GTC event, Nvidia founder and CEO Jensen Huang delivered a keynote address on Tuesday that framed artificial intelligence (AI) as a primary industrial force for future economic growth. 

AI is driving fundamental changes in the way data centers are being built, Huang asserted. 

“Raw data comes in, is refined, and intelligence goes out — companies are manufacturing intelligence and operating giant AI factories,” he said.

In the course of a nearly two-hour keynote address on Tuesday, Huang presented dozens of new Nvidia products and services designed to fill this space, starting with new CPUs, GPUs, networking gear, and data-center-scale systems for AI and high-performance computing (HPC).

Chief among them is the new Hopper chip architecture, which replaces the company’s popular Ampere line, the basis of its DGX A100 AI clusters. The Hopper-based H100 GPU is the “new engine of the world’s AI infrastructures,” said Huang.

“Hopper is not just a faster Ampere. Hopper adds new capabilities that Ampere could not solve,” Huang told reporters in a Q&A on Wednesday.

Hopper scales much more efficiently than Ampere, he explained. The architecture is especially optimized for transformers, the neural network design behind today’s large language models, offering what Huang described as vastly more efficient processing of those workloads.
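
To ground the jargon, the sketch below shows the kind of workload those claims refer to: a stack of transformer layers, written as a generic PyTorch example. It is illustrative only, not Nvidia code; the model dimensions are arbitrary, and Hopper-specific features such as its transformer-focused numeric formats are not represented.

```python
# Minimal, illustrative sketch of a transformer workload (not Nvidia's code).
import torch
import torch.nn as nn

# A small encoder stack; real large language models use far more layers and width.
layer = nn.TransformerEncoderLayer(d_model=1024, nhead=16, batch_first=True)
model = nn.TransformerEncoder(layer, num_layers=12)

# Run one mixed-precision forward pass on a GPU if available, else on the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
tokens = torch.randn(8, 512, 1024, device=device)  # batch x sequence x embedding

autocast_dtype = torch.float16 if device == "cuda" else torch.bfloat16
with torch.autocast(device_type=device, dtype=autocast_dtype):
    output = model(tokens)

print(output.shape)  # torch.Size([8, 512, 1024])
```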

The new H100 chip is manufactured by TSMC on a 4-nanometer (nm) process that packs 80 billion transistors onto each chip. Huang said the H100 delivers 9x the at-scale training performance of the A100 and 30x the large-language-model inference throughput, which he called the company’s biggest generational leap ever. Hopper is the basis of a suite of new GPU-based AI supercomputers from Nvidia: the DGX H100, H100 DGX POD and DGX SuperPOD.

Nvidia showed concepts of how the new technology can work, including a remarkable demonstration of natural-language translation with real-time rendering that mimicked accurate facial movements. While enough of the uncanny valley remained to assure viewers they were watching a simulation, the technology raises concerns in an era when misinformation abounds. A reporter questioned Huang on Wednesday about the ethical problems presented if AI is used in the service of “deepfakes” and related efforts.

Huang observed that while AI can be abused in the wrong hands, it also makes detecting such fakes possible. Ultimately, though, Huang seems comfortable letting the rest of the world work out that particular double-edged sword while Nvidia hones the blade.

Bringing Omniverse to the cloud

Omniverse, Nvidia’s 3D simulation platform, takes a central role in the company’s vision as the place where AI will learn about the real world.

Huang said Omniverse is where AI will be able to train for real-world deployment in autonomous vehicles and other machines. Meticulous environmental mapping and millions of repeated simulations yield AI that is ready to handle real-world complexities more elegantly.
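
A toy sketch of that idea, assuming a generic simulator rather than Omniverse itself: each training run randomizes the environment so the AI sees a broad slice of real-world variation before it is ever deployed. The simulator, parameters, and outcome logic below are hypothetical placeholders.

```python
# Illustrative only: domain-randomized simulation episodes in a toy "simulator".
import random

def simulate_episode(friction: float, sensor_noise: float) -> bool:
    """Stand-in for one full physics simulation; returns whether the run succeeded."""
    # A real pipeline would roll out the vehicle's driving policy in a rendered scene.
    difficulty = 0.5 * friction + 2.0 * sensor_noise
    return random.random() > difficulty

episodes = 100_000  # real training campaigns run into the millions
successes = 0
for _ in range(episodes):
    # Randomize conditions every run so no single scenario is overfit.
    friction = random.uniform(0.2, 1.0)
    sensor_noise = random.uniform(0.0, 0.2)
    successes += simulate_episode(friction, sensor_noise)

print(f"simulated success rate: {successes / episodes:.2%}")
```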

Omniverse serves as the virtual proving ground and central backbone for Nvidia’s full-stack robotics and autonomous vehicle platforms, as well as for other data center technology, including Omniverse-focused data center clusters and supercomputers.

Huang explained Omniverse’s expanded role in his vision for Nvidia and AI to reporters on Wednesday. The platform was born out of practical necessity during the pandemic, he said; Nvidia needed Omniverse to stay efficient and productive during lockdowns.

The new Omniverse Cloud scales the concept to a full-blown service platform built around collaborative 3D environments. The service is currently under development but Nvidia is accepting applications for early access on its website.
