Nvidia's latest microservices keep AI agents safe and on task

As the agentic AI era appears poised for take-off, Nvidia has unveiled new control and safety tools for organizations that want to be sure their AI agents stay on topic and on task as they engage with users.

These tools take the form of three new NIM microservices for NeMo Guardrails, building on the NeMo Guardrails software that Nvidia rolled out in early 2023. Since that launch, organizations that had been focused on AI training have moved closer to taking the next step: developing their own AI agents. But concerns about potential AI safety, security, and privacy breaches have persisted. Meanwhile, Nvidia launched its first NIM microservices last year to help organizations accelerate AI adoption and application development; the new tools are the latest additions to that growing family.

The new microservices include:

● A content safety NIM microservice that safeguards AI against generating biased or harmful outputs, ensuring responses align with ethical standards.

● A topic control NIM microservice that keeps conversations focused on approved topics, avoiding digression or inappropriate content.

● A jailbreak detection NIM microservice that adds protection against jailbreak attempts, helping maintain AI integrity in adversarial scenarios.
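In practice, guardrails like these are typically wired into an agent through a NeMo Guardrails configuration file that routes a user's input and the model's output through safety checks. The sketch below is illustrative only: the model identifiers and flow names are assumptions based on Nvidia's published NemoGuard naming and are not confirmed by this article, so the exact syntax may differ from the current documentation.

```yaml
# Illustrative NeMo Guardrails config.yml sketch.
# Model names and flow names are assumptions -- consult Nvidia's
# NeMo Guardrails documentation for the exact, current syntax.
models:
  - type: main
    engine: nim
    model: meta/llama-3.1-8b-instruct          # the agent's primary LLM
  - type: content_safety
    engine: nim
    model: nvidia/llama-3.1-nemoguard-8b-content-safety
  - type: topic_control
    engine: nim
    model: nvidia/llama-3.1-nemoguard-8b-topic-control

rails:
  input:
    flows:
      - content safety check input $model=content_safety  # block harmful prompts
      - topic safety check input $model=topic_control     # stay on approved topics
  output:
    flows:
      - content safety check output $model=content_safety # screen model responses
```

Under a setup along these lines, each rail calls out to the corresponding NIM microservice before the main model's input or output is accepted, so unsafe or off-topic turns can be blocked without changing the agent itself.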

During a briefing on the launch, Kari Briski, vice president of AI models, software and services at Nvidia, referred to a Capgemini report that suggested that more than 80% of organizations plan to adopt AI agents within the next three years. “You don't just build an agent for a task,” she said. “You must also evaluate AI agent security, data privacy and regulation requirements, and that can be a major barrier to deployment. Also… agents must also be performant. They need to respond quickly and utilize infrastructure efficiently. So, the key is that we need to keep AI agents on track while also making sure that they're fast and responsive enough to interact with other AI agents and also end users.”

But as countries around the world weigh different forms of regulation for governing the fast emergence of AI technology, safety and data privacy appear to be the more pressing issues. Nvidia said that its new content safety NIM microservice was trained on the Aegis Content Safety Dataset, which includes more than 35,000 human-annotated data samples and is curated and owned by Nvidia but publicly available on Hugging Face. Nvidia, in a blog post, called it “one of the highest-quality, human-annotated data sources in its category.”

The NIM microservices for NeMo Guardrails are available now and are already in use at several organizations, including Amdocs, Cerence AI, Taskus, Tech Mahindra, and Wipro.

For example, software firm Amdocs is harnessing NeMo Guardrails to enhance AI-driven customer interactions by delivering safer, more accurate and contextually appropriate responses, according to the Nvidia blog post.

“Technologies like NeMo Guardrails are essential for safeguarding generative AI applications, helping make sure they operate securely and ethically,” said Anthony Goonetilleke, group president of technology and head of strategy at Amdocs. “By integrating Nvidia NeMo Guardrails into our amAIz platform, we are enhancing the platform’s ‘Trusted AI’ capabilities to deliver agentic experiences that are safe, reliable and scalable. This empowers service providers to deploy AI solutions safely and with confidence, setting new standards for AI innovation and operational excellence.”