Enterprises trying to understand what they can gain from generative AI, and how to implement the technology, can now draw on a new partnership between two companies with expertise in both enterprise workflows and generative AI.
Nvidia has teamed up with enterprise cloud computing firm ServiceNow with the aim of developing “domain-specific generative AI models for various functions of enterprises using Nvidia's foundational model as a starting point running on Nvidia GPUs,” said Rama Akkiraju, vice president of AI for IT at Nvidia, during a media briefing. The live announcement took place at ServiceNow’s Knowledge23 event in Las Vegas. “So this is really building customized, fine-tuned generative AI models for enterprises.”
These enterprise-grade generative AI capabilities can be applied to accelerating and transforming workflows across business processes. The partnership will combine Nvidia NeMo large language models, other AI software tools, and accelerated computing infrastructure to expand ServiceNow’s existing AI functionality into the realm of generative AI, the companies said.
ServiceNow is also helping Nvidia streamline its IT operations with these generative AI tools, using Nvidia data to customize the NeMo foundation models running on hybrid-cloud infrastructure consisting of Nvidia DGX Cloud and on-premises Nvidia DGX SuperPOD AI supercomputers, the companies said.
Akkiraju added, “Generative AI is here to play a very critical role in enterprises. It's pretty much on every business leader's mind these days.”
Referencing an Accenture report that suggested 98% of enterprise leaders see foundational AI models playing an important role in their organizations in the next three to five years, she further stated, “Every business leader is actively building proofs of concept and trying to find out what these technologies are capable of, applying them to see what they bring to their enterprise use cases.”
These use cases include human resources activities and other internal applications, as well as customer-facing ones, such as intelligent virtual assistants and agents that answer user questions and support requests through purpose-built AI chatbots built on large language models and focused on defined IT tasks.
While generative AI models have come a long way in a very short time, Akkiraju pointed out that most of what they have learned is based on public domain knowledge. For the technology to be successfully leveraged in the enterprise realm, it needs to learn from enterprise data and content, and be customized to address specific enterprise needs.
“For example, if I ask a generative AI model how to connect to the VPN at a company as a new employee, it won't be able to answer that question accurately,” she said. “To bring generative AI to enterprises, we must customize these foundational models to teach them the language of the enterprise and enterprise-specific skills, so that they can provide more in-domain responses and with the proper guardrails.” (A reference to Nvidia’s recently announced NeMo Guardrails for keeping generative AI on track and away from topics and data governed by privacy rules and other restrictions. ServiceNow is also leveraging the NeMo Guardrails solution as part of the partnership.)
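Neither company shared implementation details, but for readers curious what such a guardrail looks like in practice, the sketch below uses Nvidia's open-source NeMo Guardrails Python library to block a privacy-sensitive question. The Colang flow, the example HR question, and the OpenAI model configuration are illustrative assumptions for this sketch, not details of the ServiceNow deployment.

```python
# Minimal NeMo Guardrails sketch: steer a chatbot away from privacy-governed topics.
# Assumes `pip install nemoguardrails` and an OpenAI API key; the engine/model choice
# here is only an example and is not drawn from the Nvidia/ServiceNow announcement.
from nemoguardrails import LLMRails, RailsConfig

# Colang defines the conversational rail: recognize the risky intent, refuse politely.
colang_content = """
define user ask about private hr data
  "What is my colleague's salary?"
  "Show me another employee's performance review."

define bot refuse private hr data
  "I can't share information that is governed by privacy rules. Please contact HR directly."

define flow hr privacy rail
  user ask about private hr data
  bot refuse private hr data
"""

# YAML configures which LLM sits behind the rails (hypothetical choice for this sketch).
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

config = RailsConfig.from_content(colang_content=colang_content, yaml_content=yaml_content)
rails = LLMRails(config)

# A question that matches the rail is answered with the canned refusal instead of the LLM's guess.
response = rails.generate(messages=[{"role": "user", "content": "What is my colleague's salary?"}])
print(response["content"])
```

In a production setup the same pattern would be pointed at an enterprise-tuned model and expanded with additional flows for the topics each organization needs to fence off.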