NeMo-Guardrails

An open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.

About NeMo-Guardrails

Large language models (LLMs) are incredibly powerful and capable of answering complex questions, performing feats of creative writing, developing and debugging source code, and much more. You can build sophisticated LLM applications by connecting them to external tools, for example to read data from a real-time source or to let the LLM decide what action to take given a user's request. Yet building these LLM applications in a safe and secure manner is challenging.

NeMo Guardrails is an open-source toolkit for easily developing safe and trustworthy LLM conversational systems. Because safety in generative AI is an industry-wide concern, NVIDIA designed NeMo Guardrails to work with all LLMs, including OpenAI’s ChatGPT.
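To make this concrete, below is a minimal sketch of how a guardrails configuration can be attached to a conversation using the `nemoguardrails` Python package's `RailsConfig` and `LLMRails` interface. The configuration path and the user message are placeholders for illustration, not part of any particular application.

```python
from nemoguardrails import LLMRails, RailsConfig

# Load a guardrails configuration (Colang flows plus config.yml) from a folder.
# "path/to/config" is a placeholder for your own configuration directory.
config = RailsConfig.from_path("path/to/config")

# Wrap the configured LLM with the programmable guardrails.
rails = LLMRails(config)

# Generate a response; the configured rails are applied to the exchange.
response = rails.generate(messages=[
    {"role": "user", "content": "Hello! What can you do for me?"}
])
print(response["content"])
```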

NeMo Guardrails builds on community toolkits such as LangChain, which gathered roughly 30,000 stars on GitHub within just a few months of its release. These toolkits provide composable, easy-to-use templates and patterns for building LLM-powered applications by gluing together LLMs, APIs, and other software packages.
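As an illustration of that "gluing together" pattern, here is a small sketch using the classic (pre-1.0) LangChain API; exact import paths differ between LangChain versions, and the prompt text and model settings are assumptions for the example.

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# A reusable prompt template: the "product" variable is filled in at call time.
prompt = PromptTemplate(
    input_variables=["product"],
    template="Suggest a short, catchy name for a company that makes {product}.",
)

# Compose the LLM and the prompt into a single callable chain.
llm = OpenAI(temperature=0.7)
chain = LLMChain(llm=llm, prompt=prompt)

print(chain.run(product="programmable guardrails for chatbots"))
```

A LangChain-compatible LLM like this can also be handed to NeMo Guardrails (for example via the `llm` argument of `LLMRails` shown earlier), so the rails sit between the application and whichever model and tools it composes.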

Source: https://developer.nvidia.com/blog/nvidia-enables-trustworthy-safe-and-secure-large-language-model-conversational-systems/
