During a two-hour keynote, NVIDIA laid out its bet on AI and, in particular, on data centers. What carried the most weight in this presentation were not only the AI products, but also the integration of the Mellanox DPUs into the company's portfolio and a large amount of AI-based software for a wide range of applications such as transportation, medicine, and manufacturing. Overall, this is a presentation that places NVIDIA's future not in PC gaming, but in the wider world of AI and Big Data, where the PC is only a small part of the whole ecosystem.
NVIDIA presents Omniverse at GTC 2021
A few years ago, reality simulators like Second Life appeared, which were extremely rudimentary compared to what can be created today. We can also point to games like Minecraft, which despite their simplicity have a great ability to simulate virtual worlds.
Imagine for a moment such a simulation running on dozens if not hundreds of GPUs, with the ability to act as a digital twin of the real world and to simulate the interaction between objects, from the simplest systems to the most complex ones. Imagine, for example, simulating a car not as a single part, but as a system made up of many parts that interact with each other to form the complex system that is the car itself.
The usefulness of Omniverse? The idea is to use simulation to create virtual environments in which computer-vision-based AIs can be trained, which is ideal, for example, for creating artificial intelligence models for autonomous driving.
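To give a feel for the principle, here is a minimal, purely hypothetical sketch: a toy "simulator" renders noisy synthetic frames for two classes, and a simple classifier is trained on them without ever seeing a real image. None of this is Omniverse code; it only illustrates the idea of training vision models on simulated data.

```python
import numpy as np

rng = np.random.default_rng(0)

def render(label, n=200):
    # Crude stand-in for a simulator frame: 8x8 "images" (64 pixels)
    # whose mean brightness depends on the class, plus sensor noise.
    base = 0.3 if label == 0 else 0.7
    return base + 0.1 * rng.standard_normal((n, 64))

# Build a labeled training set entirely from the "simulator".
X = np.vstack([render(0), render(1)])
y = np.array([0] * 200 + [1] * 200)

# Train a logistic-regression classifier on the synthetic frames.
w, b = np.zeros(64), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))  # predicted probabilities
    g = p - y                            # gradient of the loss
    w -= 0.1 * X.T @ g / len(y)
    b -= 0.1 * g.mean()

acc = ((X @ w + b > 0) == (y == 1)).mean()
print(f"accuracy on synthetic data: {acc:.2f}")
```

In a real pipeline, the simulator would be a physically accurate renderer and the model a deep network, but the workflow is the same: generate labeled scenes virtually, then train on them.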
The enormous combined power of dozens if not hundreds of GPUs enables the creation of ultra-realistic virtual environments in Omniverse, which can be used to produce audiovisual content such as commercials without having to shoot on site, and even to create variations thanks to advanced simulation.
BlueField, the DPUs from Mellanox
The purchase of Mellanox brought DPUs, data processing units, into the NVIDIA portfolio. They are essentially SmartNICs, that is, advanced network controllers. Their job? To free the CPU and GPUs from complex data-transport and memory-access tasks by performing them on this type of specialized processor.
Where NVIDIA first deployed the BlueField processors is in its cloud service, GeForce Now; this allows it to offload certain tasks from the CPU and GPU to the BlueField DPU in order to reduce the latency of its cloud gaming service. But NVIDIA has quite ambitious plans for its line of DPUs, as it intends to integrate them with an ARM processor based on Cortex-A78 cores (curiously the same as those used by the upcoming Tegra Orin) for 2023, and later to integrate the whole package with a GPU.
At this GTC 2021, NVIDIA showed nothing about BlueField 4, but at the end it spoke of a new NVIDIA Tegra called Atlan. Given the description NVIDIA gave of the Tegra Atlan, expected around 2024, you don't have to be very clever to see how the BlueField and Tegra ranges will eventually merge into one.
NVIDIA renews its DGX and A100 at GTC 2021
NVIDIA has not presented any new graphics architecture, but it has renewed its DGX computers, which carry several NVIDIA A100 GPUs based on the Ampere architecture for high-performance computing. This architecture should not be confused with that of the RTX 3000 series, as they differ on several points.
The novelty? Instead of 40 GB of HBM2 memory per A100, these now carry 80 GB, so their VRAM capacity has doubled.
On the other hand, NVIDIA has created a new product line called Aerial A100, which combines a BlueField processor with an A100 to create a kind of advanced standard for 5G communications. This is a new line of products born out of the purchase of Mellanox, and we have yet to see how it will develop, but it shows NVIDIA's interest in penetrating new markets, especially Big Data, where large amounts of information are processed and tied to artificial intelligence, and where much of that data passes through 5G networks.
NVIDIA Grace, NVIDIA’s ARM processor for HPC
This is not the first time that NVIDIA has developed a CPU based on the ARM ISA; we have already seen several of its attempts in the various NVIDIA Tegra SoCs. The first attempt was in the Tegra K1 with Denver, the Tegra X2 received Denver 2, and Tegra Xavier, the most recent of the Tegra SoCs, received the Carmel architecture as its CPU. The big difference is that this time Grace will not be part of an SoC, but rather a high-caliber stand-alone CPU that NVIDIA will use to build its servers; Grace is expected to reach the market in 2023.
But what specs does Grace promise? So far, NVIDIA hasn't said how many cores it will have, but we can be sure it will be dozens and could even exceed a hundred. How do we know? Because its interfaces with memory, the GPU, and other processors are truly massive.
It is striking to see an LPDDR5X interface of more than 500 GB/s, which indicates a big processor: to reach that level of bandwidth, a very wide memory interface is required, and therefore a huge chip perimeter. To this must be added 900 GB/s of NVLink 4 bandwidth to the GPU and 600 GB/s to another Grace CPU, which also point to a large chip.
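As a rough illustration of what these bandwidth figures mean, the hypothetical back-of-envelope calculation below converts them into transfer times for an 80 GB payload (the A100's VRAM size mentioned earlier; the pairing is illustrative, not an NVIDIA benchmark):

```python
# Bandwidth figures from NVIDIA's Grace announcement, in GB/s.
links = {
    "LPDDR5X memory interface": 500,
    "NVLink 4 to the GPU": 900,
    "NVLink 4 to another Grace": 600,
}
payload_gb = 80  # e.g. filling an 80 GB A100's VRAM once

for name, bw in links.items():
    ms = payload_gb / bw * 1000
    print(f"{name}: {ms:.0f} ms to move {payload_gb} GB")
```

Even a full 80 GB of VRAM could be refilled over NVLink 4 in well under a tenth of a second, which is the kind of throughput that justifies such a wide (and therefore large) chip.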
The power of the cores? All NVIDIA would say is that each Grace CPU exceeds 300 points in the SPECrate2017_int_base benchmark. For the moment it is still early, and NVIDIA has not yet given full details of its server CPU.