NVIDIA CEO Jensen Huang introduced the NVIDIA A100 data center GPU, which dramatically boosts the performance of AI workloads, whether in data analytics, training, or inference. The A100 provides 1.5 terabytes per second of memory bandwidth and combines third-generation Tensor Cores, Multi-Instance GPU (MIG) partitioning, and third-generation NVLink with NVSwitch for high-speed interconnect.
Based on the new NVIDIA Ampere architecture, the A100 offers the largest generational performance leap in the company's history.
The A100 is offered in the NVIDIA DGX A100, an integrated system that delivers 5 petaflops of AI performance in a single node together with a fully optimized software stack. It is also available in HGX A100 hyperscale data center servers from the world's leading manufacturers.
"The powerful trends of cloud computing and AI are driving a tectonic shift in data center design, so that what was once a sea of CPU-only servers is now GPU-accelerated computing," said Jensen Huang, founder and CEO of NVIDIA. "The NVIDIA A100 GPU delivers a 20x leap in AI performance and accelerates the entire machine learning pipeline, from data analytics to training to inference. The NVIDIA A100 will simultaneously boost throughput and drive down the cost of data centers."
The world's leading cloud service providers and system builders expected to incorporate A100 GPUs into their offerings include: Alibaba Cloud, Amazon Web Services (AWS), Atos, Baidu Cloud, Cisco, Dell Technologies, Fujitsu, GIGABYTE, Google Cloud, H3C, Hewlett Packard Enterprise (HPE), Inspur, Lenovo, Microsoft Azure, Oracle, Quanta/QCT, Supermicro and Tencent Cloud.