
Devstral-2-123B-Instruct-2512 via Nvidia
Specifications
Context Window
262,144 tokens
Release Date
2025-12-08
Capabilities
Attachments, Reasoning, Tool calling, Structured output, Temperature
Availability
Open Weights
Model Overview
NVIDIA provides AI inference through their NIM (NVIDIA Inference Microservices) platform, offering optimized access to both NVIDIA-developed and popular open-source models on their GPU infrastructure.
Devstral-2-123B-Instruct-2512 is a Devstral-family model served via NVIDIA, with a 262,144-token context window and up to 262,144 output tokens. It is priced at $0.00 per 1M input tokens and $0.00 per 1M output tokens.
Key capabilities include attachments, reasoning, tool calling, structured output, and temperature control. The model supports advanced reasoning for complex multi-step tasks, and it can call external tools and functions for agentic workflows.
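As a sketch of how the tool-calling capability might be exercised: NVIDIA NIM exposes an OpenAI-compatible chat completions API, so a request body can declare functions the model may call instead of answering directly. The endpoint URL, the model identifier string, and the `run_tests` tool below are illustrative assumptions, not confirmed values.

```python
import json

# Assumed values: NIM's OpenAI-compatible endpoint and a hypothetical
# model identifier for Devstral-2-123B-Instruct-2512.
NIM_ENDPOINT = "https://integrate.api.nvidia.com/v1/chat/completions"
MODEL_ID = "mistralai/devstral-2-123b-instruct-2512"  # assumed identifier


def build_chat_request(prompt: str) -> dict:
    """Build an OpenAI-style request body that offers the model one tool."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,   # the model accepts a temperature parameter
        "max_tokens": 1024,
        # A single hypothetical tool; the model can respond with a
        # tool call requesting this function (agentic workflow).
        "tools": [{
            "type": "function",
            "function": {
                "name": "run_tests",
                "description": "Run the project's test suite and return results.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "path": {"type": "string", "description": "Test directory"},
                    },
                    "required": ["path"],
                },
            },
        }],
    }


# Serialize the body; in a real call this JSON would be POSTed to
# NIM_ENDPOINT with an Authorization: Bearer <API key> header.
body_json = json.dumps(build_chat_request("Fix the failing test in tests/test_parser.py"))
```

The same request shape works for structured output: replacing the `tools` entry with a `response_format` JSON schema (where the serving stack supports it) constrains the model's reply to that schema.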
