Gemma-4-31B-it-Pearl
Reasoning with thinking mode at 25% discounted pricing

About model
Gemma 4 31B-it-Pearl is Pearl Research Labs' instruction-tuned checkpoint of Google's Gemma 4 31B, optimized for the Pearl Network's Proof of Useful Work protocol. It delivers capabilities similar to the base Gemma 4 31B — text input, 256K context, function calling, JSON mode — at a 25% discount through Together AI's exclusive Pearl Network integration.
Learn more in our announcement blog.
25%
Powered by Pearl's Proof of Useful Work protocol
256K
With hybrid attention for long-context optimization
89.20%
AIME 2025 mathematical reasoning without tools
- 25% Discounted Pricing: Powered by Pearl's Proof of Useful Work — mining rewards subsidize inference costs with zero impact on model quality or throughput
- Configurable Thinking: Built-in reasoning mode for step-by-step problem solving, scoring 85.35% on GPQA Diamond
- Native Function Calling: Structured tool use with JSON mode for agentic workflows
| Model | AIME 2025 | GPQA Diamond | HLE | LiveCodeBench | MATH500 | SWE-bench Verified |
|---|---|---|---|---|---|---|
| Gemma-4-31B-it-Pearl | 89.20% | 85.35% | 19.50% | 80.00% | — | — |
API usage
Endpoint: pearl-ai/gemma-4-31b-it-pearl
Model card
Architecture Overview:
• 30.7B parameter dense transformer with hybrid attention (interleaved local sliding window + full global attention)
• 256K token context window with proportional RoPE for long-context optimization
• Configurable thinking mode for step-by-step reasoning before generating answers
• Native function calling and structured JSON output for agentic workflows
• INT8 quantization
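To make the hybrid attention layout concrete, here is an illustrative sketch of how interleaved local/global layers could be scheduled. The interleave ratio and window size below are hypothetical placeholders — the model card does not state the real values — so this is a conceptual sketch, not the actual implementation.

```python
# Hypothetical sketch of hybrid attention scheduling: most layers use a
# local sliding window, with a full global-attention layer interleaved
# periodically. `global_every` and `window` are assumed values.

def attention_type(layer_idx: int, global_every: int = 4) -> str:
    """Return 'global' for every `global_every`-th layer, else 'local'."""
    return "global" if (layer_idx + 1) % global_every == 0 else "local"

def attention_span(layer_idx: int, window: int = 4096, context: int = 262_144) -> int:
    """Local layers attend over a fixed window; global layers see the full context."""
    return context if attention_type(layer_idx) == "global" else window

# First 8 layers under this assumed schedule:
pattern = [attention_type(i) for i in range(8)]
# → ['local', 'local', 'local', 'global', 'local', 'local', 'local', 'global']
```

The design intuition is that local layers keep attention cost linear in context length, while the occasional global layer propagates information across the full 256K window.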
Pearl Proof of Useful Work:
• This endpoint runs on Pearl's Proof of Useful Work (PoUW) protocol, which extracts cryptographic mining proofs as a side effect of standard AI inference
• Model quality and throughput are preserved — the Pearl kernel operates at the matrix multiplication level without affecting model outputs
• Mining rewards flow back as a direct subsidy, enabling 25% discounted pricing
• Zero-knowledge proofs ensure no model weights or user data are exposed
Prompting
Together AI API Access:
• Access Pearl Gemma 4 31B via Together AI APIs using the endpoint pearl-ai/gemma-4-31b-it-pearl
• Authenticate using your Together AI API key in request headers
• Supports thinking mode, function calling, and JSON mode
• Available on Together AI serverless infrastructure
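A minimal sketch of calling this endpoint through Together AI's OpenAI-compatible chat completions API. The model name comes from this page; the request shape follows Together's `/v1/chat/completions` interface, and the example prompt is illustrative.

```python
# Sketch: chat completion request to the Pearl Gemma endpoint via Together AI.
import json
import os
import urllib.request

API_URL = "https://api.together.xyz/v1/chat/completions"
MODEL = "pearl-ai/gemma-4-31b-it-pearl"

payload = {
    "model": MODEL,
    "messages": [
        {"role": "system", "content": "You are a concise math tutor."},
        {"role": "user", "content": "What is the derivative of x^3?"},
    ],
    "max_tokens": 256,
}

api_key = os.environ.get("TOGETHER_API_KEY")
if api_key:  # only send the request when a key is configured
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

Authentication is the standard Together AI bearer token in the `Authorization` header, as noted above.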
Applications & use cases
Reasoning & Coding:
• Mathematical reasoning with configurable thinking mode
• Code generation, completion, and correction across multiple languages
Agentic Workflows:
• Native function calling with structured JSON output for tool orchestration
• System prompt support for structured multi-turn conversations
• 256K context for processing large codebases and documentation
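For the agentic workflows above, a tool is declared in the OpenAI-compatible `tools` format that Together AI accepts. The sketch below builds such a request body; the `get_weather` tool and its parameters are hypothetical, used only to show the schema shape.

```python
# Sketch: declaring a hypothetical tool for a function-calling request.
import json

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical example tool
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

request_body = {
    "model": "pearl-ai/gemma-4-31b-it-pearl",
    "messages": [{"role": "user", "content": "What's the weather in Lisbon?"}],
    "tools": tools,
    # JSON mode alternative: force the final answer to be valid JSON.
    # "response_format": {"type": "json_object"},
}

print(json.dumps(request_body, indent=2))
```

When the model decides to call the tool, the response carries a `tool_calls` entry with JSON arguments that your orchestration code executes and feeds back as a `tool` message.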
- Model provider: Pearl AI
- Type: Reasoning
- Main use cases: Reasoning
- Features: Function Calling, JSON Mode
- Deployment: Serverless
- Endpoint: pearl-ai/gemma-4-31b-it-pearl
- Parameters: 31B
- Context length: 256K
- Input price: $0.28 / 1M tokens
- Output price: $0.86 / 1M tokens
- Input modalities: Text
- Output modalities: Text
- Quantization level: INT8
- Category: Chat
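A quick cost estimate from the listed serverless rates ($0.28 per 1M input tokens, $0.86 per 1M output tokens):

```python
# Cost estimate at the listed per-million-token prices.
INPUT_PRICE_PER_M = 0.28   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.86  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed serverless rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# A 50K-token prompt with a 2K-token answer:
print(f"${estimate_cost(50_000, 2_000):.4f}")  # → $0.0157
```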