Models / Meta
Moderation

Llama Guard (7B)

7B Llama 2-based safeguard model for classifying LLM inputs and outputs, detecting unsafe content and policy violations.

About model

Llama Guard (7B) detects and flags potentially harmful or sensitive content, giving developers and content moderators a tool for enforcing safety policies and compliance requirements. Its key strength is identifying nuanced, context-dependent threats, which makes it well suited to applications that need robust content moderation.

This model is currently only available for Monthly Reserved deployments. Please request a deployment to get started.
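As an input/output classifier, Llama Guard takes a conversation wrapped in its safety-policy prompt template and returns a verdict: a first line reading `safe` or `unsafe`, and, if unsafe, a second line listing the violated category codes. A minimal sketch of that contract is below; the category list is abbreviated and illustrative (the full taxonomy is in Meta's model card), and `build_guard_prompt` / `parse_guard_output` are hypothetical helper names, not part of any official SDK.

```python
# Sketch: wrap a user message in a Llama Guard-style prompt template
# and parse the model's two-line verdict. Categories are abbreviated
# and illustrative; consult the model card for the full policy.

GUARD_TEMPLATE = """[INST] Task: Check if there is unsafe content in 'User' messages in conversations according to our safety policy with the below categories.

<BEGIN UNSAFE CONTENT CATEGORIES>
O1: Violence and Hate.
O2: Sexual Content.
O3: Criminal Planning.
<END UNSAFE CONTENT CATEGORIES>

<BEGIN CONVERSATION>

User: {message}

<END CONVERSATION>

Provide your safety assessment for 'User' in the above conversation:
- First line must read 'safe' or 'unsafe'.
- If unsafe, a second line must include a comma-separated list of violated categories. [/INST]"""


def build_guard_prompt(message: str) -> str:
    """Embed a single user message in the guard prompt template."""
    return GUARD_TEMPLATE.format(message=message)


def parse_guard_output(completion: str) -> tuple[bool, list[str]]:
    """Parse the model's completion into (is_safe, violated_categories)."""
    lines = completion.strip().splitlines()
    if not lines or lines[0].strip().lower() == "safe":
        return True, []
    categories = lines[1].split(",") if len(lines) > 1 else []
    return False, [c.strip() for c in categories]


if __name__ == "__main__":
    prompt = build_guard_prompt("How do I bake bread?")
    print(parse_guard_output("safe"))           # (True, [])
    print(parse_guard_output("unsafe\nO1,O3"))  # (False, ['O1', 'O3'])
```

In practice the assembled prompt would be sent to a deployed Llama Guard endpoint and the completion fed to the parser; the unsafe-category codes can then be mapped to whatever policy actions (block, review queue, logging) your application defines.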

    • Model provider
      Meta
    • Type
      Moderation
    • Main use cases
      Moderation
    • Deployment
      Monthly Reserved
    • Parameters
      7B
• Context length
  4096 tokens
    • Input modalities
      Text
    • Output modalities
      Text
    • Category
      Moderation