Guardrails AI

Adding guardrails to large language models

About Guardrails AI

Guardrails is a Python package that lets you add structure, type, and quality guarantees to the outputs of large language models (LLMs). Guardrails:

  • performs pydantic-style validation of LLM outputs, including semantic validation such as checking for bias in generated text or for bugs in generated code (see the sketch below),
  • takes corrective actions (e.g. re-asking the LLM) when validation fails,
  • enforces structure and type guarantees (e.g. JSON).
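
The snippet below is a minimal sketch of how those pieces fit together: a pydantic model describes the expected output, and a Guard wraps the LLM call, validates the JSON response against the model, and can re-ask the model when validation fails. The Person model, prompt template, call signature, and return shape shown here are illustrative assumptions; the exact API differs across Guardrails versions and LLM providers, so consult the Guardrails documentation for your version.

    # A minimal sketch, not the canonical Guardrails API: the prompt template,
    # call signature, and return shape are assumptions and vary by version.
    from pydantic import BaseModel, Field
    import guardrails as gd
    import openai

    class Person(BaseModel):
        name: str = Field(description="The person's full name")
        age: int = Field(description="Age in years", ge=0)

    # Build a guard from the pydantic model; the guard renders a prompt that
    # asks the LLM for JSON matching the Person schema.
    guard = gd.Guard.from_pydantic(
        output_class=Person,
        prompt="Extract the person mentioned in: ${statement}",
    )

    # The guard calls the LLM, parses and validates the JSON output, and
    # re-asks the model if validation fails.
    result = guard(
        openai.chat.completions.create,
        prompt_params={"statement": "Ada Lovelace died at age 36."},
        model="gpt-3.5-turbo",
    )
    print(result.validated_output)  # e.g. {'name': 'Ada Lovelace', 'age': 36}

If validation fails, the guard can retry with a corrective "re-ask" prompt; the number of retries is configurable on the guard call.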
