Mistral Large 24.11

Mistral Large 24.11 via GitHub Models

Specifications

Context Window

128,000 tokens

Release Date

2024-11-01

Capabilities

Reasoning · Tool calling · Temperature

Availability

Proprietary API

Model Overview

GitHub Models provides direct access to AI models through GitHub's platform, enabling developers to experiment with and deploy models alongside their code repositories.

Mistral Large 24.11 is a model in Mistral AI's mistral-large family, available through GitHub Models, with a 128,000-token context window and up to 32,768 output tokens. It is priced at $0.00/1M input tokens and $0.00/1M output tokens.

Key capabilities include reasoning, tool calling, and temperature control. It supports advanced reasoning for complex multi-step tasks and can call external tools and functions for agentic workflows.
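As a minimal sketch, the request below sends a chat completion to the model using the OpenAI-compatible Python client. The endpoint URL and the use of a GitHub token as the API key are assumptions based on how GitHub Models is commonly accessed; verify both against the current GitHub Models documentation before use.

```python
import os

from openai import OpenAI

# Assumptions: GitHub Models exposes an OpenAI-compatible endpoint at this
# URL and accepts a GitHub personal access token (GITHUB_TOKEN) as the API
# key. Check the GitHub Models docs for the current endpoint and auth.
client = OpenAI(
    base_url="https://models.github.ai/inference",
    api_key=os.environ["GITHUB_TOKEN"],
)

response = client.chat.completions.create(
    model="mistral-ai/mistral-large-2411",  # model ID from the details table
    messages=[
        {"role": "user", "content": "Summarize the trade-offs of a 128k context window."}
    ],
    temperature=0.7,  # the model supports temperature sampling
    max_tokens=512,   # must stay within the 32,768-token output cap
)

print(response.choices[0].message.content)
```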

Details

Provider: GitHub Models
Model ID: mistral-ai/mistral-large-2411
Family: mistral-large
Release Date: 2024-11-01
Last Updated: 2024-11-01
Knowledge Cutoff: 2024-09
Context Window: 128,000 tokens
Max Output: 32,768 tokens
Input Cost / 1M: $0.00
Output Cost / 1M: $0.00


Frequently Asked Questions

How much does Mistral Large 24.11 cost to use?

Mistral Large 24.11 is priced at $0.00/1M input tokens and $0.00/1M output tokens. Use the cost estimator on this page to calculate your expected spend based on your usage pattern.
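The arithmetic behind the estimator is straightforward: tokens divided by one million, multiplied by the per-million rate for each direction. A minimal sketch, defaulting to this model's listed rates (both $0.00 here, so any usage comes out to zero):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float = 0.00,
                  output_price_per_m: float = 0.00) -> float:
    """Estimate the cost of one request in dollars.

    Defaults to Mistral Large 24.11's listed rates via GitHub Models
    ($0.00 per 1M tokens for both input and output).
    """
    return ((input_tokens / 1_000_000) * input_price_per_m
            + (output_tokens / 1_000_000) * output_price_per_m)

# Example: a 2,000-token prompt with a 500-token response.
print(f"${estimate_cost(2_000, 500):.4f}")  # $0.0000 at the listed rates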

What is a token and how does it relate to pricing?

A token is a chunk of text — roughly ¾ of a word in English. For example, "chatbot" is two tokens. LLM API pricing is based on the number of tokens you send (input) and receive (output). Input tokens include your prompts, uploaded documents, and images, while output tokens are the model's generated responses.
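To get a feel for token counts, you can run text through a tokenizer. The sketch below uses tiktoken's cl100k_base encoding purely as a rough proxy; Mistral models use their own tokenizer (distributed in the mistral-common package), so exact counts for this model will differ somewhat.

```python
import tiktoken

# Assumption: cl100k_base is only an approximation here. Mistral's own
# tokenizer will produce different (though broadly similar) counts.
enc = tiktoken.get_encoding("cl100k_base")

for text in ["chatbot", "A token is a chunk of text."]:
    tokens = enc.encode(text)
    print(f"{text!r} -> {len(tokens)} tokens")
```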

Why are input and output tokens priced differently?

LLM providers charge separately for input and output tokens. Output tokens are typically more expensive because generating each token requires more compute — the model must run a full forward pass for every token it produces, while input tokens are processed in parallel.

What is the context window of Mistral Large 24.11?

Mistral Large 24.11 supports a context window of 128,000 tokens. This is the maximum number of tokens (input + output combined) the model can process in a single request. Larger context windows let you send longer documents or maintain longer conversation histories.
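A request must respect two limits at once: the combined input-plus-output budget of 128,000 tokens and the 32,768-token output cap. A minimal sketch of that check:

```python
CONTEXT_WINDOW = 128_000  # input + output combined
MAX_OUTPUT = 32_768       # per-request output cap

def fits_in_context(input_tokens: int, requested_output: int) -> bool:
    """Check that a request fits Mistral Large 24.11's limits."""
    return (requested_output <= MAX_OUTPUT
            and input_tokens + requested_output <= CONTEXT_WINDOW)

print(fits_in_context(100_000, 20_000))  # True: 120,000 <= 128,000
print(fits_in_context(100_000, 40_000))  # False: exceeds the 32,768 output cap
```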

How accurate is this cost estimation?

This tool provides a ballpark estimate based on per-token pricing. Actual costs may differ due to prompt caching, batched API calls, volume discounts, reasoning token overhead, and provider-specific billing rules. Use it for budgeting and comparison, not as an invoice prediction.

How does Mistral Large 24.11 pricing compare to other models?

You can compare Mistral Large 24.11 with other models on our LLM API pricing calculator. Use the cost estimator to see side-by-side cost breakdowns across different providers and models to find the best fit for your budget and requirements.

What factors affect my total API cost?

Your total cost depends on several factors: the number of API calls you make, the length of your prompts (input tokens), the length of generated responses (output tokens), whether you use features like image or document uploads (which add input tokens), and any provider-specific charges for caching or batch processing.
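Those factors combine multiplicatively: calls per day times tokens per call times the per-million rate. A rough monthly estimator sketching that model (it deliberately ignores caching, batching, and volume discounts, which can all change the final bill, as noted in the accuracy answer above):

```python
def monthly_cost(calls_per_day: int, avg_input_tokens: int,
                 avg_output_tokens: int, days: int = 30,
                 input_price_per_m: float = 0.00,
                 output_price_per_m: float = 0.00) -> float:
    """Rough monthly spend: calls x tokens x per-million rates."""
    total_in = calls_per_day * days * avg_input_tokens
    total_out = calls_per_day * days * avg_output_tokens
    return ((total_in / 1_000_000) * input_price_per_m
            + (total_out / 1_000_000) * output_price_per_m)

# Example: 200 calls/day, 1,500 input and 400 output tokens per call.
print(f"${monthly_cost(200, 1_500, 400):.2f}")
```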

How can my team use Mistral Large 24.11 via API?

You can connect your own GitHub Models API key and give your entire team access to Mistral Large 24.11 through TypingMind Teams. It lets you build a unified AI workspace where team members can use Mistral Large 24.11 and other models — without needing their own API keys. You stay in control of usage limits, costs, and permissions, all from a single dashboard.
