Interested in running Llama 4 Maverick in production?

Request access to Together Dedicated Endpoints—private and fast Llama 4 Maverick inference at scale.

  • Fastest inference: Industry-leading speeds for multimodal AI
  • Flexible scaling: Deploy via Together Serverless or dedicated endpoints
  • Native multimodality: Text and image understanding with 128K context
  • Secure & reliable: Private, compliant, and built for production

First Name*

Last Name*

Company Email*

Company Location*

What peak queries per second would you like to support?*

Are you interested in NVIDIA DGX Cloud?*


Llama 4 Maverick on Together AI

Unmatched performance. Cost-effective scaling. Secure infrastructure.

  • Fastest inference engine

    We run Llama 4 Maverick with industry-leading speeds on optimized MoE infrastructure, ensuring low-latency performance for multimodal production workloads.

  • Scalable infrastructure

    Whether you're just starting out or scaling to production workloads, choose from Together Serverless APIs for flexible, pay-per-token usage or dedicated endpoints for predictable, high-volume operations.

  • Security-first approach

    We host all models in our own data centers, with no data sharing back to Meta. Developers retain full control over their data with opt-out privacy settings.

Seamlessly scale your Maverick deployment

  • Together Serverless API

    Get started in minutes with pay-per-token access to Llama 4 Maverick, with no infrastructure to manage and no long-term commitments (see the usage sketch after this list).

    • Instant scalability and generous rate limits
    • Flexible, pay-per-token pricing with no long-term commitments
    • Full opt-out privacy controls
  • Together Dedicated Endpoints

    Run Llama 4 Maverick on private, dedicated GPU infrastructure tuned for its MoE architecture, built for predictable performance on high-volume production workloads.

    • Low latency from Together Inference stack
    • High-performance GPUs optimized for MoE architecture
    • Contract-based pricing for predictable, cost-effective scaling
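
As a concrete starting point, here is a minimal sketch of a pay-per-token Serverless call using the Together Python SDK. The model ID string is our assumption of how Llama 4 Maverick is listed and should be verified against the live model catalog.

```python
# pip install together
from together import Together

client = Together()  # reads TOGETHER_API_KEY from the environment

# Serverless, pay-per-token chat completion.
response = client.chat.completions.create(
    model="meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",  # assumed model ID
    messages=[
        {"role": "user", "content": "Summarize the trade-offs of MoE models in one paragraph."}
    ],
)
print(response.choices[0].message.content)
```

In our understanding, a dedicated endpoint is reached through the same chat completions API by substituting your endpoint's model name, so code written against Serverless carries over unchanged.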

Powering the next generation of AI applications

Use our API to deploy Llama 4 Maverick on the fastest inference stack available with optimal cost efficiency. Servers are available in North America with complete data privacy controls.
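
To illustrate the native multimodality mentioned above, here is a hedged sketch of a text-plus-image request. It assumes the OpenAI-style image_url content format that Together exposes for vision models; the image URL is a placeholder and the model ID is again an assumption.

```python
# pip install together
from together import Together

client = Together()  # reads TOGETHER_API_KEY from the environment

# One user message combining a text prompt and an image URL (placeholder URL).
response = client.chat.completions.create(
    model="meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",  # assumed model ID
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in two sentences."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```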