Models / Virtue AI
Moderation

VirtueGuard Text Lite

Multimodal AI guardrail model covering 12 risk categories with 8 ms latency and 89% F1 accuracy. It runs roughly 50x faster than AWS Bedrock and Azure guardrails while producing fewer false positives.

About model

Enterprise-grade AI security and safety model with comprehensive protection across text, images, and audio. Built by AI security veterans for production-scale deployment with 8ms response time.

  • API usage

    • cURL
    • Python
    • TypeScript

    Endpoint:

    virtueai/VirtueGuard

    curl -X POST https://api.together.xyz/v1/completions \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $TOGETHER_API_KEY" \
      -d '{
        "model": "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
        "prompt": "A horse is a horse",
        "max_tokens": 32,
        "temperature": 0.1,
        "safety_model": "virtueai/VirtueGuard"
      }'
    
    from together import Together
    
    # Reads TOGETHER_API_KEY from the environment
    client = Together()
    
    response = client.completions.create(
      model="meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
      prompt="A horse is a horse",
      max_tokens=32,
      temperature=0.1,
      safety_model="virtueai/VirtueGuard",
    )
    
    print(response.choices[0].text)
    
    
    import Together from "together-ai";
    
    // Reads TOGETHER_API_KEY from the environment
    const together = new Together();
    
    async function main() {
      const response = await together.completions.create({
        model: "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
        prompt: "A horse is a horse",
        max_tokens: 32,
        temperature: 0.1,
        safety_model: "virtueai/VirtueGuard"
      });
      
      console.log(response.choices[0]?.text);
    }
    
    main();
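Once a request is screened, the application still needs a policy for what to return when the guard flags a completion. A minimal sketch of such a gate, with a stubbed verdict standing in for the real VirtueGuard result (the `is_flagged` flag and `REFUSAL` text are illustrative assumptions; how the verdict is surfaced depends on the Together API response):

```python
# Illustrative fallback text; not part of the Together API.
REFUSAL = "I can't help with that request."

def gate_completion(completion_text: str, is_flagged: bool) -> str:
    """Return the model's completion only if the guard did not flag it.

    `is_flagged` stands in for a real VirtueGuard verdict; this is a
    sketch of the application-side gating pattern, not the API itself.
    """
    return REFUSAL if is_flagged else completion_text

# Stubbed usage: a safe completion passes through, a flagged one is replaced.
print(gate_completion("A horse is a horse, of course.", is_flagged=False))
print(gate_completion("<unsafe output>", is_flagged=True))
```

The same pattern applies in any language: keep the verdict check in one place so the refusal policy can change without touching call sites.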
    
    
Model details
  • Model provider
    Virtue AI
  • Type
    Moderation
  • Main use cases
    Moderation
  • Deployment
    Serverless
  • Context length
    131K
  • Input price

    $0.20 / 1M tokens

  • Input modalities
    Text
  • Output modalities
    Text
  • Category
    Moderation
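The input price above makes cost estimates straightforward, since pricing is linear in token count. A quick sketch (the 250K-token workload is a made-up example):

```python
# Input price from the table above: $0.20 per 1M tokens.
PRICE_PER_MILLION_TOKENS = 0.20

def input_cost_usd(tokens: int) -> float:
    """Linear input cost at $0.20 per 1M tokens."""
    return tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

# Example: screening 250K input tokens costs $0.05.
print(f"${input_cost_usd(250_000):.4f}")
```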