Nvidia Announces a New ARM CPU and More GPUs for Deep Learning

This is a quick article to discuss what was announced at Nvidia's GTC keynote. There is a lot to cover, but I am only going to focus on a few key topics.

Gaming GPUs

First, let’s talk about GPUs. There were no new gaming GPUs announced and there is no successor yet to the Ampere architecture.

But Nvidia did release new professional GPUs for Deep Learning. For the desktop, there is the Nvidia RTX A4000 with 16 GB of GPU memory and the Nvidia RTX A5000 with 24 GB of GPU memory. Nvidia also released new professional GPUs for laptops. Quite a few, actually.

Nvidia's Roadmap for Their CPUs, GPUs and DPUs

Finally, Nvidia gave us some insight into their roadmap for the next few years. It seems that Nvidia is now working on two-year iterations for each of their product lines (GPU, CPU, and DPU). So we should expect a successor to the Ampere architecture sometime in 2022.

Nvidia DGX Station A100

For the next bit of news, if you are still trying to buy a GPU, please look away. You might find this news a bit annoying.

Back in November, Nvidia announced a new iteration of their DGX Station A100, a workstation specialized in running Deep Learning workloads.

It has an AMD Epyc CPU with 64 cores, 128 threads, and 512 GB of memory.
And how about 4 x A100 GPUs with 80 GB of GPU memory each, for a grand total of 320 GB of VRAM?

And forget about liquid cooling. How about refrigerated liquid cooling? There should hopefully be enough space to keep our drinks cool.

Better Performance

Compared to its predecessor, it is 3x faster at training and over 4x faster at inference.

What is amazing is that, despite all this power in one workstation, you can still plug this station into a home outlet. Probably with nothing else plugged in at the same time, though.

Price

Just in case you want to get your hands on this workstation but can't afford the $149,000 starting price, be glad: you can also rent it for the humble sum of $9,000 USD/month.

Nvidia Releasing their own CPU – Codename Grace

Now the really exciting news, the last piece of the puzzle, in Nvidia's words. Well, maybe not so exciting for Intel… or for AMD.

Nvidia is developing its own ARM-based CPU, codenamed Grace in honour of computing pioneer Grace Hopper. It is a CPU specialized for Deep Learning.

When it launches, sometime in 2023, it is expected to be over 10x faster than today's state-of-the-art Nvidia DGX workstations, which currently run on x86.

Nvidia Launches Nvidia Omniverse Enterprise

Let’s now talk about Nvidia Omniverse, which was a major theme in the keynote.

Imagine the power of creating a virtual world that obeys the laws of physics, where people are just like real people, going about their business, and where cars behave like real cars. A virtual world in which you can simulate real weather like rain and snow, and vary the conditions as you wish. Where everything is simulated to a degree where it is pretty much real, except that it isn't!

What is Nvidia Omniverse?

So what is Nvidia Omniverse? Think of it as Minecraft for enterprises. Or a game engine for the real world. Or the Matrix. In Minecraft, you can create, interact with and share a virtual world with friends. Nvidia Omniverse's goal is the same, except it is not only for fun.

What Omniverse does is visualize and simulate a virtual world, with realistic physics and realistic materials, using 3D assets, textures and point clouds created in a number of 3D modeling applications. And, by the way, Blender is included!

What can you do with Nvidia Omniverse?

Nvidia also showed some interesting use cases, and the one I am going to talk about is the BMW digital car factory. With Nvidia Omniverse, BMW has created a digital twin of their latest car factory, which looks quite impressive, and in which BMW can easily make virtual reconfiguration changes in preparation for new vehicle launches, which normally require layout changes.

It is possible to interact with the virtual factory using a motion capture suit, and in this way, BMW can test ergonomics, efficiency, and safety hands-on.

But you might say, that’s nice but a real factory has hundreds of people working at the same time. That doesn’t really scale.

Digital Humans in Nvidia Omniverse

For scale, the digital factory doesn’t need real people. It has digital factory workers, obviously. Or should we call them bots? 

Nvidia has developed AI that can teach digital humans, using reinforcement learning, to perform tasks in this virtual factory, based on data collected from real factory workers.

At this stage, I am not sure how they capture this data, but it is likely to be using the same motion tracking suits that are also used to interact with the virtual omniverse world.

Nvidia Omniverse Uses Universal Scene Description – Developed by Pixar

The glue that makes Nvidia Omniverse possible is actually not something invented by Nvidia at all. Of all places, it comes from Hollywood.

Example of a USD file

Some years ago, Pixar created the Universal Scene Description (USD) file format, which is software agnostic and, to me, looks very similar to a Java class. It is a way to describe an imaginary world in which all kinds of 3D assets and lighting are glued together into one universal scene, such as an animation scene in a movie.
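To make that concrete, here is a minimal sketch of what a human-readable USD file (the .usda text format) can look like. The scene and prim names below are made up for illustration and are not taken from the keynote:

```usda
#usda 1.0
(
    doc = "A minimal illustrative scene; names are hypothetical"
)

def Xform "World"
{
    def Sphere "Ball" (
        doc = "A simple sphere with a radius and a display colour"
    )
    {
        double radius = 2.0
        color3f[] primvars:displayColor = [(0.8, 0.1, 0.1)]
    }
}
```

Because the format is plain text and hierarchical, different 3D applications can each contribute assets to the same shared scene, and you can see why the nested `def` blocks read a bit like a Java class.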

And that is it for now.

Happy Coding!

