AI

Red Hat integrates Nvidia NIMs into OpenShift AI

Red Hat is deepening its AI partnership with Nvidia, announcing at the Red Hat Summit 2024 this week in Denver that it is working on integrating Nvidia’s NIM AI inference microservices into Red Hat OpenShift AI to help speed up generative AI application development for OpenShift users.

The announcement comes after Nvidia introduced its NIM set of inference microservices, designed for faster access to different AI models, at the Nvidia GTC event back in March. At that time, companies such as ServiceNow and SAP were announced as partners, and Red Hat also indicated it would support access to Nvidia NIMs for organizations using OpenShift to deliver their generative AI applications.

Now, as part of this latest collaboration, Nvidia is enabling NIM interoperability with KServe, an open source, Kubernetes-based project for highly scalable AI use cases and a core upstream component of Red Hat OpenShift AI. According to Red Hat, this will help fuel continuous interoperability for NIM microservices within future iterations of OpenShift AI.
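In practice, KServe exposes models through its InferenceService custom resource, so a NIM deployed via this integration would likely look something like the following sketch. The manifest below is illustrative only: the service name, container image path, and model are hypothetical placeholders, not confirmed details of the Red Hat/Nvidia integration.

```yaml
# Hypothetical sketch of serving a NIM container through KServe.
# The image reference and name below are placeholders for illustration.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: example-nim-llm            # hypothetical service name
spec:
  predictor:
    containers:
      - name: kserve-container
        image: nvcr.io/nim/example/example-model:latest  # placeholder NIM image
        resources:
          limits:
            nvidia.com/gpu: "1"    # NIMs run GPU-accelerated inference
```

Once applied, KServe handles routing, autoscaling, and monitoring for the endpoint alongside any other model deployments in the cluster, which is the consistency benefit the companies are describing.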

The companies said benefits of the integration include:

  • A streamlined integration path for deploying Nvidia NIM in a common workflow alongside other AI deployments, for greater consistency and easier management.

  • Integrated scaling and monitoring for Nvidia NIM deployments in coordination with other AI model deployments across hybrid cloud environments.

  • Enterprise-grade security, support, and stability to ensure a smooth transition from prototype to production for enterprises that run their business on AI.

Without this kind of integration, it would be challenging and time-consuming for organizations to incorporate different AI models into their application development, and that challenge will only grow as even more models are introduced, according to Justin Boitano, vice president of Enterprise Products at Nvidia.

“Every enterprise development team wants to get their generative AI applications into production as quickly and securely as possible. Integrating Nvidia NIM in Red Hat OpenShift AI marks a new milestone in our collaboration as it will help developers rapidly build and scale modern enterprise applications using performance-optimized foundation and embedding models across any cloud or data center.”

Steven Huels, vice president and general manager, AI Business Unit, Red Hat, added, “Even in the last nine months the variety and the volume of models that are being released into the market, not even including all the variants of different sizes with different performance criteria, has become staggering and… almost untenable for customers to be able to keep pace with.”

Huels continued, “The beauty of what we’re doing is that we are going to be offering Nvidia NIMs from within OpenShift AI, so customers will have the ability from within OpenShift AI to easily deploy them. So, click a couple of buttons, and you can select from any of the Nvidia NIMs and deploy them into your OpenShift AI footprint, and from that interface scale and manage them alongside any other intelligent applications or models you’re running.”