Llama Guard 4 12B
A multimodal safety model built on Llama 4 Scout that classifies text and images in LLM prompts and responses as safe or unsafe.
About model
Llama Guard 4 12B detects harmful content in both user prompts and model responses, helping keep applications safe for users. Its key strength is accurately identifying sensitive and unsafe material. It is designed for developers and organizations that need robust content moderation.
To use this moderation model, please follow the instructions from our blog post.
API usage
Endpoint:
How to use model
1. Use Llama Guard as a standalone classifier
Use this code snippet in your command line to run inference on Llama Guard 4 12B:
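The blog post linked above has the authoritative snippet. As a minimal sketch, a standalone classification request might look like the following; the endpoint URL, model slug, and `TOGETHER_API_KEY` environment variable are assumptions, not confirmed by this page:

```shell
# Hypothetical example: classify one user message with Llama Guard 4 12B.
# URL, model name, and API key variable are assumed placeholders.
curl -X POST "https://api.together.xyz/v1/chat/completions" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Llama-Guard-4-12B",
    "messages": [
      {"role": "user", "content": "How do I fold a paper airplane?"}
    ]
  }'
```

The model's reply is itself the classification: typically `safe`, or `unsafe` followed by the codes of the violated hazard categories.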
2. Use Llama Guard as a filter to safeguard responses from 200+ models
Use this code snippet in your command line to run inference with any of our 200+ models together with Llama Guard (the only change is adding the safety_model parameter):
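As a sketch of the filtered setup, the same chat-completions request gains a `safety_model` field so Llama Guard screens the generating model's output. The endpoint URL, both model slugs, and the `TOGETHER_API_KEY` variable are assumptions for illustration:

```shell
# Hypothetical example: generate with another hosted model while
# Llama Guard 4 12B filters the response via safety_model (names assumed).
curl -X POST "https://api.together.xyz/v1/chat/completions" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
    "safety_model": "meta-llama/Llama-Guard-4-12B",
    "messages": [
      {"role": "user", "content": "Tell me a short story about a robot."}
    ]
  }'
```

If the guard model flags the response, the API can withhold or annotate it instead of returning the unsafe text; otherwise the request behaves exactly like an unfiltered call.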
- Type: Moderation
- Main use cases: Moderation
- Deployment: Serverless, On-Demand Dedicated, Monthly Reserved
- Endpoint
- Parameters: 12B
- Context length: 1M
- Input price: $0.20 / 1M tokens
- Output price: $0.20 / 1M tokens
- Input modalities: Text, Image
- Output modalities: Text
- Released: April 23, 2025
- External link
- Category: Moderation