Llama Guard 2 8B
8B Llama 3-based safeguard model for classifying LLM inputs and outputs, detecting unsafe content and policy violations.
About the model
Llama Guard 2 8B detects and flags potentially harmful or sensitive content, serving as a robust tool for developers and content moderators seeking to ensure online safety and compliance. Its key strength lies in accurately identifying nuanced and context-dependent threats. It is designed for professionals managing online platforms and communities.
To use this moderation model, please follow the instructions from our blog post.
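As a rough illustration of how a safeguard model like this is typically driven, the sketch below builds a chat-completions-style request that asks the model to classify a single user message. The model id, message shape, and the expectation that the model answers with "safe"/"unsafe" plus category codes are assumptions for illustration, not details taken from this page; follow the blog post for the actual instructions.

```python
import json

# Hypothetical model id for an OpenAI-compatible endpoint (assumption).
DEFAULT_MODEL = "meta-llama/LlamaGuard-2-8b"

def build_moderation_request(user_message: str,
                             model: str = DEFAULT_MODEL) -> dict:
    """Build a chat-completions-style payload asking the safeguard
    model to classify one user message."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        # Llama Guard style models reply with a short verdict such as
        # "safe" or "unsafe" plus violated category codes, so a small
        # completion budget suffices.
        "max_tokens": 20,
        "temperature": 0.0,
    }

payload = build_moderation_request("How do I bake a cake?")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to whatever inference endpoint hosts the model; only the response's verdict text needs to be parsed.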
- Type: Moderation
- Main use cases: Small & Fast, Moderation
- Deployment: Monthly Reserved
- Parameters: 8B
- Context length: 8K
- Input price: $0.20 / 1M tokens
- Output price: $0.20 / 1M tokens
- Input modalities: Text
- Output modalities: Text
- Released: April 18, 2024
- Last updated: February 24, 2026
- External link
- Category: Moderation
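Since input and output tokens are billed at the same listed rate ($0.20 per 1M tokens each), estimating a moderation bill is simple arithmetic. A minimal sketch, using only the prices listed above (the example token counts are hypothetical):

```python
# Listed price: $0.20 per 1M tokens, same for input and output.
PRICE_PER_MILLION = 0.20

def moderation_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for a batch of moderation calls."""
    total_tokens = input_tokens + output_tokens
    return total_tokens / 1_000_000 * PRICE_PER_MILLION

# Example: 500k input tokens and 10k short verdict outputs.
print(round(moderation_cost(500_000, 10_000), 4))  # → 0.102
```

Because verdicts are short, output tokens are usually a small fraction of the total, so the input side dominates the estimate.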