Nvidia releases more tools and guardrails to nudge enterprises to adopt AI agents



Nvidia is releasing three new NIM microservices, or small independent services that are part of larger applications, to help enterprises bring additional control and safety measures to their AI agents.

One of these new NIM services targets content safety and works to prevent an AI agent from generating harmful or biased outputs. Another works to keep conversations focused on approved topics only, while the third new service helps prevent an AI agent from jailbreak attempts, or removing software restrictions.

These three new NIM microservices are part of Nvidia NeMo Guardrails, Nvidia’s existing open source collection of software tools and microservices meant to help companies improve their AI applications.

“By applying multiple lightweight, specialized models as guardrails, developers can cover gaps that may occur when only more general global policies and protections exist — as a one-size-fits-all approach doesn’t properly secure and control complex agentic AI workflows,” the press release said.

It seems that AI companies may be starting to catch on that getting enterprises to adopt their AI agent technology is not going to be as simple as they initially thought. While folks like Salesforce CEO Marc Benioff recently predicted there will be more than a billion agents running off of Salesforce alone in the next 12 months, reality will probably look a little different.

A recent study from Deloitte predicted that about 25% of enterprises are either already using AI agents or expect to in 2025. The report also predicted that by 2027 about half of enterprises will be using agents. This shows that while enterprises are clearly interested in AI agents, they are not adopting AI tech at the same cadence as innovation is happening in the AI space.

Nvidia likely hopes initiatives like this will make adopting AI agents seem more secure, and less experimental. Time will tell if that’s actually true.




Source