“More than two decades ago we reinvented computer graphics with the GPU, and five years ago we combined graphics computing with AI, launching RTX and the ‘Star Wars’ demo.”
On the morning of August 8, local time in the United States, Nvidia founder and CEO Jensen Huang opened his keynote at SIGGRAPH 2023, the annual computer-graphics conference, by revisiting Nvidia’s landmark moments in graphics computing and showing a new real-time, 4K-resolution 3D car-chase demo.
Just a few minutes later, however, the leather-jacketed CEO’s talk pivoted entirely from 3D graphics to AI. As one of the central figures of the artificial-intelligence wave, Huang came bearing more powerful hardware and smarter applications and platforms for “democratizing generative AI”.
That day, Huang unveiled the GH200 Grace Hopper Superchip for generative AI, NVIDIA AI Workbench, and an NVIDIA Omniverse upgraded with generative AI and OpenUSD.
He also announced that Nvidia is partnering with the AI open-source community Hugging Face to deliver generative-AI supercomputing to millions of developers, supporting them in building large language models (LLMs) and AI applications.
“It’s still the same line: the more you buy, the more you save.” Even now, Huang has not abandoned his signature sales pitch.
More “nuclear bombs”, coming soon
“The arrival of generative AI is this era’s iPhone moment,” Huang reflected: Nvidia’s accelerated-computing journey converged with that of deep-learning researchers, and the big bang of modern artificial intelligence followed.
Five years ago, Nvidia redefined graphics technology by introducing AI and real-time ray tracing on the GPU. But “as we redefine computer graphics with AI, we’re also redefining GPUs for AI.”
As a result, increasingly powerful systems such as the NVIDIA HGX H100, which links eight GPUs totaling roughly a trillion transistors, deliver far more accelerated-computing throughput than CPU-based systems.
Five years on, to keep pushing AI forward, NVIDIA built the Grace Hopper Superchip, the NVIDIA GH200, which pairs a 72-core Grace CPU with a Hopper GPU; linked together in the DGX GH200 system, these chips deliver 1 exaflop of AI performance and 144 TB of shared high-speed memory. The GH200 entered full production in May.
On the GH200, Huang delivered another quotable line: “If I can ask you to remember one thing from my talk today, it is this: the future belongs to accelerated computing. The more you buy, the more you save.”
The most striking thing about the NVIDIA GH200 is not its raw performance but its almost absurd scalability.
At SIGGRAPH, Nvidia announced a next-generation GH200 Grace Hopper Superchip platform. The platform connects multiple GPUs for complex generative workloads, including large language models, recommender systems, and vector databases. A dual-chip configuration is said to offer 3.5 times the memory capacity and 3 times the bandwidth of the previous generation, packing 144 Arm Neoverse cores, 8 petaflops of AI performance, and 282 GB of the latest HBM3e memory. Customers are expected to ship systems built on the platform in the second quarter of 2024.
According to Huang, at the same cost of US$100 million, a computing center built from 2,500 GH200s delivers 20 times the AI-computing efficiency of a traditional CPU-based computing center.
If the pricey GH200 is aimed at cutting-edge large language models, Nvidia also offers lower-cost products, within reach of ordinary users and companies, for models that have gone mainstream.
For professional graphics applications such as computer-aided design and digital content creation, Nvidia also released the RTX 4000 (20 GB), RTX 4500 (24 GB), and RTX 5000 (32 GB) GPUs based on the Ada Lovelace architecture, delivering 26.7, 39.6, and 65.3 FP32 TFLOPS respectively.
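As a sanity check on those throughput figures, a GPU's peak FP32 rating is conventionally computed as CUDA cores × boost clock × 2, since one fused multiply-add counts as two floating-point operations per cycle. The core counts and boost clocks below are assumptions taken from commonly published Ada Lovelace specifications, not from the article:

```python
# Peak FP32 throughput: cores * boost clock (GHz) * 2 FLOPs (one FMA) per cycle.
# Core counts and clocks are assumed from published Ada Lovelace specs.
def peak_fp32_tflops(cuda_cores: int, boost_ghz: float) -> float:
    # cores * GHz gives billions of cycles-worth of lanes; *2 for FMA; /1000 -> TFLOPS
    return cuda_cores * boost_ghz * 2 / 1000.0

cards = {
    "RTX 4000 Ada": (6144, 2.175),
    "RTX 4500 Ada": (7680, 2.580),
    "RTX 5000 Ada": (12800, 2.550),
}
for name, (cores, ghz) in cards.items():
    print(f"{name}: {peak_fp32_tflops(cores, ghz):.1f} FP32 TFLOPS")
```

Rounded to one decimal place, the three results reproduce the 26.7, 39.6, and 65.3 TFLOPS figures quoted above.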
In addition, Nvidia launched OVX servers equipped with L40S GPUs, up to eight per server, each with 48 GB of memory. For complex AI workloads spanning billions of parameters and multiple data modalities, the L40S delivers 1.2x the generative-AI inference performance and 1.7x the training performance of the A100 Tensor Core GPU.
“AI supercomputing built for the era of generative AI,” read the slide introducing the GH200.
Joining hands with open source to “democratize generative AI”
To help enterprises of all kinds customize generative AI faster, Huang announced that Nvidia is launching AI Workbench.
It is said to give developers a unified, easy-to-use toolkit to quickly create, test, and fine-tune generative-AI models on a PC or workstation, then scale them to virtually any data center, public cloud, or NVIDIA DGX Cloud.
AI Workbench chiefly lowers the barrier for enterprises starting AI projects. Through a simplified interface running on local systems, it lets developers fine-tune models from popular repositories such as Hugging Face, GitHub, and NGC using custom data; the resulting models can then be shared across multiple platforms.
Businesses around the world are racing to find the right infrastructure to build generative-AI models and applications; while thousands of pre-trained models are now available, customizing them with the many open-source tools on offer can be challenging and time-consuming.
“To make this capability universal, we have to make it run almost everywhere,” Jensen Huang said, pledging to “make generative AI accessible to everyone.”
With AI Workbench, developers can customize and run generative AI with just a few clicks. It lets them pull all the necessary enterprise-grade models, frameworks, SDKs, and libraries into a unified developer workspace.
Dell, HP, Lambda, Lenovo, and Supermicro are said to be adopting AI Workbench because it brings enterprise generative AI capabilities to wherever developers want to work, including local devices.
In his presentation, Jensen Huang showed how AI Workbench and ChatUSD bring all of these capabilities together: allowing users to start projects from a GeForce RTX 4090 laptop and scale seamlessly to a workstation or data center as projects become more complex.
According to Huang, a user can prompt a model for a picture of a toy Jensen Huang in space, but the initial model’s output misses the mark because it has never seen the toy. The user can then fine-tune the model with eight photos of the toy and rerun the prompt to get the right result.
Then, using AI Workbench, the new model can be deployed into enterprise applications.
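The demo’s few-shot adaptation can be illustrated with a deliberately tiny, framework-free sketch: “pretrain” a linear model on plenty of generic data, then fine-tune it with a handful of new examples. This mirrors the pattern the Workbench workflow relies on (pretrained weights plus a few custom samples); everything here is a toy stand-in, not Nvidia’s actual pipeline:

```python
# Toy illustration of "pretrain, then fine-tune on a few examples".
# A stand-in for the real image-model workflow, not NVIDIA's code.

def train(w, b, data, lr=0.01, epochs=500):
    """Fit y = w*x + b to (x, y) pairs with per-sample gradient descent."""
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

# "Pretraining": lots of generic data drawn from y = 2x + 1.
generic = [(x / 10, 2 * (x / 10) + 1) for x in range(-20, 21)]
w, b = train(0.0, 0.0, generic)

# "Fine-tuning": eight custom examples from a shifted task, y = 2x + 3,
# playing the role of the eight toy-Jensen photos.
custom = [(x, 2 * x + 3) for x in (-2, -1.5, -1, -0.5, 0.5, 1, 1.5, 2)]
w, b = train(w, b, custom, lr=0.05, epochs=200)

print(f"fine-tuned model: y = {w:.2f}*x + {b:.2f}")  # close to y = 2x + 3
```

The pretrained weights give the fine-tuning run a head start, so eight examples are enough to shift the model to the new task, which is the whole point of the demo.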
Huang also announced a partnership between Nvidia and Hugging Face, which has two million users, that is expected to put Nvidia’s generative-AI computing power in the hands of millions of developers building large language models and AI applications.
As part of the partnership, Hugging Face will offer a new service, “Training Cluster as a Service,” powered by NVIDIA DGX Cloud and available in the coming months.
Developers will be able to access NVIDIA DGX Cloud AI supercomputing within the Hugging Face platform to train and fine-tune advanced AI models. It is reported that the Hugging Face community has shared more than 250,000 models and 50,000 data sets.
“This will be a brand-new service connecting the world’s largest AI community with the world’s best AI training infrastructure,” Huang said.
Conversational “3D Generation”
Just this week, Nvidia, Apple, Adobe, Autodesk, and others jointly founded the Alliance for OpenUSD, to push the 3D standard born at Pixar out into the wider world.
By combining OpenUSD, AI, and Omniverse, designers and developers will be able to create and modify 3D environments and objects directly in natural language through conversational interfaces such as ChatUSD, greatly simplifying 3D production.
The alliance will standardize and extend OpenUSD, the open-source Universal Scene Description framework that underpins interoperable 3D applications and projects spanning everything from visual effects to industrial digital twins: connecting film and animation pipelines, building accurate real-time digital factories, warehouses, and cities, and even creating digital replicas of the Earth.
Nvidia and Adobe also plan to make Adobe Firefly, Adobe’s family of creative generative AI models, available as an API in Omniverse.
It is reported that AI tools such as Cesium, Convai, Move AI, SideFX Houdini, and Wonder Dynamics are now connected to Omniverse through OpenUSD.
For example, with new OpenUSD export support, Wonder Dynamics can automatically bring computer-generated character animation, lighting, and compositing into real-world scenes, while Move AI’s Move One app performs single-camera motion capture, generating 3D character animations that export to OpenUSD for use in Omniverse.
Now, Omniverse users can build content, experiences, and applications that are compatible with other OpenUSD-based spatial computing platforms such as ARKit and RealityKit.
In addition, Huang announced four new NVIDIA-built Omniverse Cloud APIs, ChatUSD, RunUSD, DeepSearch, and USD-GDN Publisher, which let developers implement and deploy OpenUSD pipelines and applications more seamlessly.
Among them, ChatUSD answers USD questions and generates Python-USD code scripts; RunUSD converts USD files into rendered images; DeepSearch enables semantic 3D search; and USD-GDN Publisher publishes high-fidelity OpenUSD-based experiences, streaming them in real time to web browsers and mobile devices.
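To make “Python-USD code scripts” concrete, here is a minimal sketch that writes a tiny OpenUSD scene (a cube under an Xform) in the text-based `.usda` format. A real ChatUSD-generated script would use the official `pxr` Python API (`Usd.Stage.CreateNew`, `UsdGeom.Cube.Define`, and so on); plain-text output is used here only to keep the example dependency-free:

```python
# Hand-rolled minimal OpenUSD scene in the human-readable .usda format.
# A real Python-USD script would use the pxr API; this sketch only
# illustrates the structure of the file such a script produces.

def make_cube_scene(prim_name: str = "Box", size: float = 2.0) -> str:
    """Return a .usda document containing one Cube prim under an Xform."""
    return f'''#usda 1.0
(
    defaultPrim = "World"
)

def Xform "World"
{{
    def Cube "{prim_name}"
    {{
        double size = {size}
    }}
}}
'''

scene = make_cube_scene()
with open("hello_world.usda", "w") as f:
    f.write(scene)
print(scene)
```

The resulting file can be opened by any OpenUSD-aware tool, which is exactly the interoperability the alliance is standardizing around.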
“Industrial enterprises are racing to digitize their workflows, driving demand for an OpenUSD-enabled, connected, interoperable 3D software ecosystem,” said Nvidia’s vice president of Omniverse and simulation technology. “The latest Omniverse update lets developers tap generative AI through OpenUSD to enhance their tools, and lets enterprises build larger, more complex world-scale simulations as digital testing grounds for their industrial applications.”
Who is using Omniverse today? According to Huang, technology companies use it to test and simulate collaborative robots; Amazon digitizes its warehouses by simulating robot fleets; automakers such as Mercedes-Benz simulate self-driving cars; BMW digitizes its global factory network by simulating new electric-vehicle production lines; Deutsche Bahn creates a digital twin of its rail network; and some companies even build digital twins of the Earth and its climate system.
It is reported that Nvidia is also developing a new SimReady 3D model structure. These models will include realistic material and physical properties, which are critical for accurate training of autonomous robots and vehicles. For example, an autonomous robot tasked with sorting packages needs to be trained in 3D simulations where the packages move and react when physically touched, just as they would in the real world.
Fueled by AI, the era of collaborative 3D and industrial digitalization is upon us. Huang believes the factory of the future will itself be a robot: “robots orchestrating swarms of robots to build cars that are themselves robots,” with the hope that “AI can program itself.”
“In the future, the entire factory will be defined by software,” Huang said.
As the most important “infrastructure” provider of the generative-AI wave, Nvidia has seen its stock soar roughly 200% this year, at one point lifting its market capitalization past the trillion-dollar mark.
Beyond selling “nuclear bomb” hardware, Nvidia spares no effort on software, cloud computing, platforms, and ecosystem, because only when generative AI genuinely enters industrial production and everyday office work can “AI for everyone” be said to have arrived. From this perspective, Nvidia, today’s AI startups, and the traditional companies pivoting to generative AI are all in the same boat.
“Buy more, save more”: only when there is “more AI” everywhere can Nvidia be sure to “earn more.”