OpenAI GPT API Pricing Calculator
Wondering how the OpenAI gpt-4o-mini API pricing works? Here's a pricing calculator.
Pricing calculation: OpenAI gpt-4o-mini (128k context)
| Provider | Model | Context | Input / 1k Tokens | Output / 1k Tokens | Per Call | Total |
|---|---|---|---|---|---|---|
| OpenAI | gpt-4o-mini | 128k | $0.0002 | $0.0006 | $0.001 | $0.076 |
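The table's per-call arithmetic can be sketched in a few lines of Python. The per-1k-token rates come from the table above; the token counts per call are hypothetical values chosen only to illustrate how a $0.001 per-call figure could arise.

```python
# Per-1k-token rates for gpt-4o-mini, taken from the pricing table above.
INPUT_RATE_PER_1K = 0.0002   # USD per 1,000 input tokens
OUTPUT_RATE_PER_1K = 0.0006  # USD per 1,000 output tokens

def call_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the cost in USD of a single API call."""
    return (input_tokens / 1000) * INPUT_RATE_PER_1K \
         + (output_tokens / 1000) * OUTPUT_RATE_PER_1K

# Hypothetical example: 2,000 input tokens and 1,000 output tokens
# gives 0.0004 + 0.0006 = 0.001 USD per call.
per_call = call_cost(2000, 1000)
```

Multiplying the per-call cost by the expected number of calls gives a monthly total; the exact token counts per call depend entirely on your prompts and responses.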
Announcing GPT-4o Mini: Cost-Efficient Intelligence
Release Date and Announcement
On July 18, 2024, OpenAI announced the release of GPT-4o Mini, a highly cost-efficient small model designed to make advanced AI capabilities more accessible and affordable.
Key Capabilities
- Superior Textual Intelligence: Scores 82% on MMLU, outperforming other small models like Gemini Flash and Claude Haiku, and supports multiple languages and function calling.
- Efficient Multimodal Reasoning: Handles text and vision tasks well, with support for image, video, and audio inputs and outputs planned for the future.
- Advanced Math and Coding: Excels in mathematical reasoning and coding, scoring high on benchmarks like MGSM and HumanEval.
- Long-Context Performance: Performs well with extensive context, enabling applications that require large volumes of data or conversation history.
Built-in Safety Measures
GPT-4o Mini incorporates robust safety measures, filtering out inappropriate content during pre-training and employing reinforcement learning with human feedback (RLHF) for post-training alignment. It applies new safety techniques to resist jailbreaks and prompt injections, ensuring reliable and safe use at scale.
Collaborations and Benchmarks
GPT-4o Mini has been evaluated on several key benchmarks, outperforming other small models in tasks involving text and vision. Trusted partners like Ramp and Superhuman have tested it for extracting structured data and generating high-quality email responses, noting its superior performance over previous models like GPT-3.5 Turbo.