
OpenAI's New Open gpt-oss Models vs o4-mini: A Real-World Comparison

August 11, 2025

By Hassan El Mghari

OpenAI just made history by open-sourcing their first language models in over six years. The gpt-oss series includes two reasoning models: a 20B-parameter model comparable to o3-mini, and a 120B model that OpenAI says rivals o4-mini. As champions of open source AI, we were excited to see how these models perform in practice.

At Together AI, we believe the future of AI is open source. That's why we immediately added the gpt-oss models to our platform and decided to put the 120B model head-to-head against o4-mini across five practical tests. Instead of relying solely on benchmarks, we wanted to show our developer community how these models compare in real-world scenarios.

Why gpt-oss Models Align with Our Mission

These models represent exactly what we've been advocating for:

  • Complete model ownership - Download, fine-tune, and deploy however you want
  • No vendor lock-in - Apache 2.0 license means true freedom
  • Exceptional value - 100x cheaper than Claude Opus 4.1 with competitive performance
  • State-of-the-art capabilities - Strong reasoning, agentic abilities, and structured outputs
  • Open innovation - The AI community can build, improve, and customize freely

Our Testing Methodology

We used our own chat.together.ai interface running gpt-oss-120B against ChatGPT with o4-mini. While not comprehensive scientific benchmarks, these practical tests give developers a feel for real-world performance - the kind of tasks our customers tackle daily.

Test 1: Terminal Snake Game Development

The Challenge: Build a functional snake game that runs in the terminal.

Results:

  • o4-mini: Generated code that compiled but failed functionally - the snake only moved horizontally despite arrow-key inputs
  • gpt-oss-120B on Together AI: Created a fully working snake game with proper controls, collision detection, and game-over mechanics

Snake Game Developed using gpt-oss-120B on Together AI

This test highlights something we see regularly: open source models often excel at practical code generation tasks.
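To make concrete what both models had to get right, here is a minimal, hypothetical sketch of the core game state (not either model's actual output). Direction handling and collision detection are exactly the pieces o4-mini's version fumbled:

```python
# Minimal sketch of terminal-snake game logic (illustrative, not model output).
# The parts under test: honoring all four directions, ignoring 180° reversals,
# and detecting wall/self collisions for game over.
from collections import deque

MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

class Snake:
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.body = deque([(rows // 2, cols // 2)])  # head is at index 0
        self.direction = "right"

    def turn(self, new_direction):
        # Ignore direct reversals (e.g. "left" while moving "right").
        dr, dc = MOVES[new_direction]
        cr, cc = MOVES[self.direction]
        if (dr, dc) != (-cr, -cc):
            self.direction = new_direction

    def step(self, grow=False):
        """Advance one cell; return False on wall or self collision (game over)."""
        dr, dc = MOVES[self.direction]
        hr, hc = self.body[0]
        head = (hr + dr, hc + dc)
        if not (0 <= head[0] < self.rows and 0 <= head[1] < self.cols):
            return False
        if not grow:
            self.body.pop()  # tail cell vacates unless the snake just ate
        if head in self.body:
            return False
        self.body.appendleft(head)
        return True
```

A real implementation would wrap this in a curses loop reading arrow keys, but the logic above is where the functional bugs live.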

Winner: gpt-oss-120B ✓

Test 2: Creative SVG Generation

The Challenge: Generate an SVG of a pelican riding a bicycle.

Results:

  • o4-mini: Produced a clean, well-structured SVG with accurate spatial relationships
  • gpt-oss-120B: Created functional SVG but with physics issues - pelican appeared to float above the bicycle
o4-mini output

gpt-oss output

Creative spatial tasks remain challenging, and this test highlights where each model's strengths currently differ.
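One way to avoid the floating-rider failure mode is to derive every shape's coordinates from shared anchor points rather than positioning shapes independently. A minimal, hypothetical Python sketch (not either model's output, and a far simpler scene than the pelican prompt):

```python
# Illustrative sketch: build an SVG where the rider is anchored to the seat,
# and the seat to the wheels, so nothing can "float" relative to the rest.
def bicycle_with_rider_svg(wheel_r=40, wheel_gap=120):
    ground_y = 200
    axle_y = ground_y - wheel_r           # wheels touch the ground line
    x1, x2 = 100, 100 + wheel_gap         # front and rear axles
    seat_x, seat_y = (x1 + x2) // 2, axle_y - 60
    rider_y = seat_y - 25                 # rider's body sits on the seat
    parts = [
        f'<circle cx="{x1}" cy="{axle_y}" r="{wheel_r}" fill="none" stroke="black"/>',
        f'<circle cx="{x2}" cy="{axle_y}" r="{wheel_r}" fill="none" stroke="black"/>',
        f'<line x1="{x1}" y1="{axle_y}" x2="{seat_x}" y2="{seat_y}" stroke="black"/>',
        f'<line x1="{x2}" y1="{axle_y}" x2="{seat_x}" y2="{seat_y}" stroke="black"/>',
        f'<circle cx="{seat_x}" cy="{rider_y}" r="15" fill="gray"/>',  # rider
    ]
    body = "".join(parts)
    return f'<svg xmlns="http://www.w3.org/2000/svg" width="320" height="240">{body}</svg>'
```

Models that generate coordinates shape-by-shape, without this kind of shared frame, are the ones that produce physically implausible scenes.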

Winner: o4-mini ✓ (gpt-oss gets 0.5 points for partial success)

Test 3: Advanced Instruction Following

The Challenge: Rewrite the US Declaration of Independence's first two paragraphs in cyberpunk style while preserving historical references.

We wanted an objective evaluation, so we used two powerful reasoning models (including DeepSeek R1, also available on Together AI) to judge both outputs.

Results: Both reasoning models unanimously selected gpt-oss-120B for:

  • Superior balance of complex requirements
  • Better historical accuracy
  • Clearer, more engaging presentation
  • More effective style integration

This demonstrates the sophisticated instruction-following capabilities that make open source models viable for complex enterprise use cases.

Winner: gpt-oss-120B ✓

Test 4: Mathematical Reasoning

The Challenge: Classic algebra word problem with chickens and cows (196 legs, 68 heads total).

Results: Both models correctly solved the problem, arriving at 38 chickens and 30 cows with clear mathematical reasoning.
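The arithmetic is easy to verify by hand: with c chickens and k cows, c + k = 68 heads and 2c + 4k = 196 legs, so k = (196 - 2*68)/2 = 30 and c = 38. A quick check in Python:

```python
# Verifying the Test 4 word problem: chickens have 2 legs, cows have 4.
def solve_heads_legs(heads=68, legs=196):
    cows = (legs - 2 * heads) // 2   # substitute c = heads - k into the leg count
    chickens = heads - cows
    # Sanity-check the solution against both constraints.
    assert chickens + cows == heads and 2 * chickens + 4 * cows == legs
    return chickens, cows
```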


Winner: Tie ✓✓

Test 5: Web-Enhanced Information Synthesis

The Challenge: Research and summarize a current NSA program status in under 200 words.

Results: Both models demonstrated strong capabilities:

  • Effective web search integration
  • Accurate synthesis of multiple sources
  • Proper adherence to word limits and formatting requirements

Winner: Tie ✓✓

Final Results: Open Source Delivers

  • gpt-oss-120B on Together AI: 4.5/5
  • o4-mini: 3/5

The Open Source Advantage in Action

This comparison reinforces why we're bullish on open source AI. gpt-oss-120B delivers competitive performance while offering:

  • Full customization rights - Fine-tune for your specific use cases
  • Cost efficiency - Run inference at fraction of proprietary model costs
  • Deployment flexibility - Host on Together Cloud, your VPC, or on-premise
  • No usage restrictions - Build commercial applications without limitations

Experience gpt-oss on Together AI

Ready to try these groundbreaking open source models? We've optimized gpt-oss for peak performance on our platform:

  • Fastest inference speeds - Our inference engine delivers 4x faster performance than standard implementations
  • Competitive pricing - Up to 11x lower costs compared to proprietary alternatives
  • Easy integration - OpenAI-compatible APIs for seamless migration
  • Multiple deployment options - Serverless, dedicated, or private cloud
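As a sketch of what "OpenAI-compatible" means in practice, here is how a chat completion request to gpt-oss could be assembled using only the Python standard library. The endpoint and model ID shown are illustrative assumptions; check our API docs for the exact values:

```python
# Sketch of a chat request to an OpenAI-compatible endpoint. The base URL and
# model ID below are illustrative assumptions; consult the API docs for exact values.
import json
import urllib.request

def build_chat_request(prompt, model="openai/gpt-oss-120b", api_key="YOUR_API_KEY"):
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://api.together.xyz/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_chat_request("Build a snake game that runs in the terminal.")
# Send with urllib.request.urlopen(req) once a real API key is in place.
```

Because the request shape matches OpenAI's chat completions API, existing OpenAI client code typically needs only a new base URL and model name to migrate.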

Try gpt-oss models now →

What This Means for Developers

These results show that high-quality AI capabilities are becoming democratized. Whether you're a startup building your first AI feature or an enterprise scaling mission-critical applications, open source models like gpt-oss offer a compelling alternative to proprietary solutions.

At Together AI, we're committed to making these cutting-edge open source models accessible, fast, and cost-effective for every developer. As the ecosystem continues evolving, we'll keep bringing you the latest and greatest open source innovations.

Ready to build with open source AI? Create your Together AI account and start experimenting with gpt-oss models today.


Ready to dive in?

Follow our step-by-step Quickstart to install, authenticate, and run your first GPT-OSS inference in minutes.

Try gpt-oss-120B now


