Guardrails AI
Adding guardrails to large language models
About Guardrails AI
Guardrails is a Python package that lets users add structure, type, and quality guarantees to the outputs of large language models (LLMs). Guardrails:
- does pydantic-style validation of LLM outputs, including semantic validation such as checking for bias in generated text, checking for bugs in generated code, etc.,
- takes corrective actions (e.g. re-asking the LLM) when validation fails,
- enforces structure and type guarantees (e.g. JSON), as sketched in the example below.
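
Below is a minimal sketch of that workflow, modelled on the Guardrails quickstart style of `Guard.from_pydantic(...)` followed by calling the guard with an LLM API. The `Customer` model, the prompt text, and the OpenAI parameters are illustrative assumptions, and exact signatures, prompt-variable syntax, and return types differ between Guardrails versions.

```python
# A minimal sketch, not an official example: names such as Customer, the
# prompt text, and the OpenAI parameters are illustrative, and exact
# signatures/return types differ between Guardrails versions.
import openai
from pydantic import BaseModel, Field
from guardrails import Guard


class Customer(BaseModel):
    # The Pydantic model defines the structure and types the LLM output
    # must conform to (the "structure and type guarantees" above).
    name: str = Field(description="Customer's full name")
    age: int = Field(description="Customer's age in years")


prompt = """
Extract the customer's name and age from the statement below.

${statement}

${gr.complete_json_suffix_v2}
"""

# The Guard wraps the LLM call: it builds the prompt, parses the response
# as JSON, validates it against the Pydantic model, and can re-ask the LLM
# when validation fails (the corrective action described above).
guard = Guard.from_pydantic(output_class=Customer, prompt=prompt)

raw_llm_output, validated_output, *rest = guard(
    llm_api=openai.chat.completions.create,
    prompt_params={"statement": "Jane Doe, 34, opened an account last week."},
    model="gpt-3.5-turbo",
    max_tokens=256,
    temperature=0.0,
)

print(validated_output)  # e.g. {"name": "Jane Doe", "age": 34}
```

If the raw LLM response fails validation (wrong types, missing fields, or a failed semantic check), the guard can re-ask the model with the validation errors included, rather than returning malformed output to the caller.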