How it works

Learn how Guards protects your LLM applications.

Guards acts as a security layer between your application and your LLM. It analyzes messages against your configured policies before they reach the model, and can also screen responses before they're returned to users.

Request flow

1. Send: Your app sends chat messages to the Guards API.
2. Evaluate: Guards runs detectors against your policy rules.
3. Act: Guards returns an action (allow, monitor, or block).
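In code, the three steps above might look like the sketch below. The payload shape, the `action` field, the policy handle, and the handler names are all illustrative assumptions, not Guards' actual API; the evaluate step happens server-side.

```python
# Sketch of the send -> evaluate -> act flow. The payload shape, the
# "action" field, and the handler names are illustrative assumptions,
# not Guards' actual API.

def build_request(policy_handle, messages):
    # Step 1 (Send): package the chat messages with the policy handle.
    return {"policy": policy_handle, "messages": messages}

def log_for_review(result):
    # Placeholder for whatever review pipeline your app uses.
    print("flagged for review:", result)

def act_on(result):
    # Step 3 (Act): branch on the action Guards returned.
    action = result["action"]
    if action == "block":
        return "Sorry, I can't help with that."  # never reaches the LLM
    if action == "monitor":
        log_for_review(result)                   # forward, but record it
    return None                                  # "allow": proceed to the LLM

payload = build_request(
    "support-bot-v1",  # hypothetical policy handle
    [{"role": "user", "content": "What is my account balance?"}],
)
# Step 2 (Evaluate) runs on the Guards side; assume it returned:
decision = {"action": "allow"}
print(act_on(decision))  # None here means it is safe to call the LLM
```

The key design point is that your app only ever branches on the returned action, so adding or tuning rules never requires client-side changes.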

Key concepts

Policies

A collection of rules that define what content is allowed, monitored, or blocked. Each policy has a unique handle.

Rules

Each rule combines a detector with an action. When the detector triggers, the action determines the response.

Detectors

Analyze content for specific patterns like PII, keywords, or jailbreak attempts.
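The relationship between the three concepts can be sketched as plain data: a policy holds rules, and each rule pairs a detector with an action. The dict layout and the keyword detector below are illustrative assumptions, not Guards' real configuration format or detector set.

```python
# Toy model of policy -> rules -> detectors. The structure and the
# keyword detector are illustrative assumptions, not Guards' config.

def keyword_detector(terms):
    # Triggers when any of the given terms appears in the content.
    return lambda content: any(t in content.lower() for t in terms)

policy = {
    "handle": "support-bot-v1",  # each policy has a unique handle
    "rules": [
        # Each rule combines a detector with an action.
        {"detector": keyword_detector(["password", "ssn"]), "action": "block"},
        {"detector": keyword_detector(["refund"]), "action": "monitor"},
    ],
}

def evaluate(policy, content):
    # Return the action of the first rule whose detector triggers;
    # content that triggers no rule is allowed.
    for rule in policy["rules"]:
        if rule["detector"](content):
            return rule["action"]
    return "allow"

print(evaluate(policy, "My SSN is on file"))  # block
print(evaluate(policy, "I'd like a refund"))  # monitor
print(evaluate(policy, "Hello!"))             # allow
```

Rules are checked in order here; whether Guards short-circuits on the first match or aggregates all triggered rules is not specified by this page, so treat the first-match behavior as an assumption of the sketch.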

Performance

Typical latency: under 50ms

Most detectors run in under 10ms, and even policies with multiple rules typically complete in under 50ms end to end. Deep-analysis detectors, which run more thorough checks, add around 150ms. Guards is designed to add minimal overhead to your application.

Capabilities

Languages

Multilingual support including English, French, Spanish, German, Portuguese, Chinese, Japanese, Korean, Arabic, Hindi, Russian, and 20+ more languages.

Modalities

Text-based content analysis for prompts and responses. Support for images, audio, and documents is planned.