Nvidia, Google Cloud to open AI-on-5G test lab

Nvidia today announced at Mobile World Congress in Barcelona, Spain, that it will work with Google Cloud to establish an AI-on-5G Innovation Lab to help accelerate the development and deployment of AI-based solutions for enterprises, smart cities and smart factories, among others.

The move comes two months after Nvidia announced it would work with multiple partners, including Google Cloud, to drive AI edge applications via 5G networks. As part of the partnership, Google is extending its service-centric Anthos application platform, which uses Nvidia GPU-accelerated server technology for enterprise-grade, containerized application development with managed Kubernetes, to the network edge. Anthos also will help in the development and management of outcome-based application policies, which the companies described as critical to the future of AI-on-5G.
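In practice, that containerized, managed-Kubernetes model is the piece developers touch directly: an edge AI workload is packaged as a container that requests GPU resources from the cluster scheduler. The sketch below is purely illustrative and not drawn from the announcement; it assumes a managed Kubernetes cluster with the NVIDIA device plugin installed and uses the official Kubernetes Python client, with the pod name and container image being hypothetical placeholders.

```python
# Illustrative sketch only: deploying a GPU-accelerated container to a managed
# Kubernetes cluster (such as an Anthos-managed edge cluster) with the official
# Python client. The pod name and image below are hypothetical placeholders.
from kubernetes import client, config


def deploy_gpu_inference_pod(namespace: str = "default") -> None:
    # Load credentials from the local kubeconfig; cluster access is assumed.
    config.load_kube_config()

    container = client.V1Container(
        name="edge-inference",
        image="us-docker.pkg.dev/example/edge/inference:latest",  # hypothetical image
        resources=client.V1ResourceRequirements(
            # The NVIDIA device plugin exposes GPUs as the "nvidia.com/gpu" resource.
            limits={"nvidia.com/gpu": "1"}
        ),
    )

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="edge-inference", labels={"app": "edge-ai"}),
        spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
    )

    # Submit the pod to the cluster; the scheduler places it on a GPU node.
    client.CoreV1Api().create_namespaced_pod(namespace=namespace, body=pod)


if __name__ == "__main__":
    deploy_gpu_inference_pod()
```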

With the new lab, the partners will take the next step toward driving AI application deployment by giving 5G network operators and their infrastructure and AI software partners an environment in which to develop, test and adopt new solutions for different customer environments.

Ronnie Vasishta, senior vice president of telecom for Nvidia, said, “We’re going to go through a process where enterprise edge and AI platforms will be implemented on cloud-native infrastructure, and they will be orchestrated and managed in exactly the same way as the cloud today. That means AI edge infrastructure is going to start to look a lot more like the data center.”

Vasishta added that the AI-on-5G solution "brings new ecosystem partners into the already vibrant AI ecosystem," while Nvidia's 5G RAN software stack enables control of AI-driven elements in an enterprise, such as robots or smart retail systems.

Alan Weckel, technology analyst at 650 Group, said that the new lab will help Nvidia and Google in the service provider market while furthering developers' efforts and improving consumer and enterprise applications. "Nvidia is trying to expand into multiple SP use cases as is Google," he stated in an email. "So this should end up benefiting consumer applications and broader enterprise/smart city deployments. Areas like manufacturing in Europe, etc. The lab will help developers build applications faster and consistently. By developing on Google Cloud, a developer benefits from access to most Telco SPs instead of building an application unique to each Telco SP. Customers, both enterprise and consumer, will benefit the most with a consistent application experience across SPs. By having a broad lab and inviting many application developers to it, you also get the benefit of developers learning from each other which will help provide better applications."

Vasishta did not disclose where the new lab will be located or when it will open, though the companies will be working on the project in the coming months. In addition to testing for infrastructure players and their partners, the new lab will provide enterprises with access to the Anthos platform and Nvidia accelerated computing hardware and software platforms.

Nvidia's AI-on-5G platform provides a foundation for both 5G and edge AI computing and applications, combining the Nvidia Aerial software development kit with the Nvidia BlueField-2 A100, a converged card that pairs GPUs and DPUs, along with Nvidia's "5T for 5G" solution. The company also announced this week that the next generation of its Aerial A100 AI-on-5G computing platform will incorporate 16 Arm-based CPU cores into the BlueField-3 A100, creating a more self-contained converged card that delivers enterprise edge AI applications over cloud-native 5G vRAN with improved performance per watt and faster time to deployment.

Vasishta said Nvidia has a roadmap for AI-on-5G that takes the Aerial hardware from a server-based solution to a card to a chip over the next few years.

Boosting AI on HPC

In a separate announcement tied to another event this week, ISC High Performance 2021, Nvidia said it is beefing up its HGX AI high-performance computing (HPC) platform with new technologies intended to broaden the appeal of HPC beyond its typical academic users to more vertical industries.

Those new components include the Nvidia A100 80GB PCIe GPU, Nvidia NDR 400G InfiniBand networking, and Nvidia Magnum IO GPUDirect Storage software.