
Kimi K2.5 TEE via Chutes
Specifications
Context Window: 262,144 tokens
Release Date: 2026-01-27
Capabilities: Reasoning, Tool calling, Structured output, Temperature, Image input, Video input
Availability: Open Weights
Model Overview
Chutes is an AI inference platform hosting a variety of open-source and fine-tuned models, providing affordable access to community-driven AI capabilities.
Kimi K2.5 TEE is a Kimi-family model from Moonshot AI, served on Chutes, with a 262,144-token context window and up to 65,535 output tokens. It is priced at $0.60 per 1M input tokens and $3.00 per 1M output tokens.
Key capabilities include reasoning, tool calling, structured output, temperature control, image input, and video input. The model supports advanced reasoning for complex multi-step tasks and can call external tools and functions for agentic workflows.
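As an illustration of the tool-calling capability, here is a minimal sketch using the OpenAI Python SDK against an OpenAI-compatible endpoint. The base URL, the API shape, and the get_weather tool are assumptions for illustration, not confirmed by this page; consult the Chutes documentation for the actual endpoint.

```python
# Hypothetical sketch: tool calling via an OpenAI-compatible endpoint.
# The base URL and tool definition below are illustrative assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.chutes.ai/v1",  # assumed endpoint; verify with Chutes docs
    api_key="YOUR_CHUTES_API_KEY",
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, defined only for this example
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="moonshotai/Kimi-K2.5-TEE",
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools,
)

# If the model chose to invoke the tool, the request appears in tool_calls.
print(response.choices[0].message.tool_calls)
```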
Details
Provider: Chutes
Model ID: moonshotai/Kimi-K2.5-TEE
Family: kimi
Release Date: 2026-01-27
Last Updated: 2026-01-27
Knowledge Cutoff: 2024-10
Context Window: 262,144 tokens
Max Output: 65,535 tokens
Input Cost / 1M tokens: $0.60
Output Cost / 1M tokens: $3.00
Frequently Asked Questions
How much does Kimi K2.5 TEE cost to use?
Kimi K2.5 TEE is priced at $0.60 per 1M input tokens and $3.00 per 1M output tokens. Use the cost estimator on this page to calculate your expected spend based on your usage pattern.
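For a concrete sense of scale, here is a minimal sketch of the per-request arithmetic using the prices quoted above; the workload figures are illustrative placeholders.

```python
# Per-token prices for Kimi K2.5 TEE, from this page (USD per 1M tokens).
INPUT_PRICE_PER_M = 0.60
OUTPUT_PRICE_PER_M = 3.00

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return ((input_tokens / 1_000_000) * INPUT_PRICE_PER_M
            + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M)

# Illustrative workload: a 2,000-token prompt and a 500-token reply.
print(f"${estimate_cost(2_000, 500):.6f}")  # $0.002700
```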
What is a token and how does it relate to pricing?
A token is a chunk of text — roughly ¾ of a word in English. For example, "chatbot" is two tokens. LLM API pricing is based on the number of tokens you send (input) and receive (output). Input tokens include your prompts, uploaded documents, and images, while output tokens are the model's generated responses.
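The ¾-of-a-word ratio gives a quick back-of-the-envelope estimate from word count alone. A rough sketch, noting that 0.75 words per token is an English-language approximation and real counts vary by tokenizer:

```python
def estimate_tokens(text: str, tokens_per_word: float = 1 / 0.75) -> int:
    """Roughly estimate token count from word count.

    Assumes ~0.75 words per token (English text); actual counts
    depend on the model's tokenizer.
    """
    return round(len(text.split()) * tokens_per_word)

print(estimate_tokens("The quick brown fox jumps over the lazy dog"))  # ~12
```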
Why are input and output tokens priced differently?
LLM providers charge separately for input and output tokens. Output tokens are typically more expensive because generating each token requires more compute — the model must run a full forward pass for every token it produces, while input tokens are processed in parallel.
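The practical consequence: at Kimi K2.5 TEE's rates, output tokens cost 5× as much as input tokens, so an output-heavy workload costs far more than an input-heavy one at the same total token count. A quick illustration using the prices from this page:

```python
# Same 10,000 total tokens, split two ways (USD, prices from this page).
input_heavy  = (9_000 / 1e6) * 0.60 + (1_000 / 1e6) * 3.00  # $0.0084
output_heavy = (1_000 / 1e6) * 0.60 + (9_000 / 1e6) * 3.00  # $0.0276
print(input_heavy, output_heavy)
```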
What is the context window of Kimi K2.5 TEE?
Kimi K2.5 TEE supports a context window of 262,144 tokens. This is the maximum number of tokens (input + output combined) the model can process in a single request. Larger context windows let you send longer documents or maintain longer conversation histories.
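Because the 262,144-token limit covers input and output combined, you can sanity-check a request before sending it. A minimal sketch:

```python
CONTEXT_WINDOW = 262_144  # combined input + output limit
MAX_OUTPUT = 65_535       # per-request output cap

def fits_context(input_tokens: int, max_output_tokens: int) -> bool:
    """Check that a request stays within the model's limits."""
    return (max_output_tokens <= MAX_OUTPUT
            and input_tokens + max_output_tokens <= CONTEXT_WINDOW)

print(fits_context(250_000, 8_000))   # True:  258,000 <= 262,144
print(fits_context(250_000, 20_000))  # False: 270,000 >  262,144
```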
How accurate is this cost estimation?
This tool provides a ballpark estimate based on per-token pricing. Actual costs may differ due to prompt caching, batched API calls, volume discounts, reasoning token overhead, and provider-specific billing rules. Use it for budgeting and comparison, not as an invoice prediction.
How does Kimi K2.5 TEE pricing compare to other models?
You can compare Kimi K2.5 TEE with other models on our LLM API pricing calculator. Use the cost estimator to see side-by-side cost breakdowns across different providers and models to find the best fit for your budget and requirements.
What factors affect my total API cost?
Your total cost depends on several factors: the number of API calls you make, the length of your prompts (input tokens), the length of generated responses (output tokens), whether you use features like image or document uploads (which add input tokens), and any provider-specific charges for caching or batch processing.
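Putting those factors together, here is a monthly budget sketch. All workload numbers are placeholders to replace with your own usage, and the estimate deliberately ignores caching, batch discounts, and other provider-specific adjustments.

```python
def monthly_cost(calls_per_day: int,
                 avg_input_tokens: int,
                 avg_output_tokens: int,
                 days: int = 30) -> float:
    """Estimate monthly USD spend for Kimi K2.5 TEE.

    Ignores prompt caching, batch discounts, and other
    provider-specific billing adjustments.
    """
    total_in = calls_per_day * days * avg_input_tokens
    total_out = calls_per_day * days * avg_output_tokens
    return (total_in / 1e6) * 0.60 + (total_out / 1e6) * 3.00

# Example: 500 calls/day, 1,500-token prompts, 400-token replies.
print(f"${monthly_cost(500, 1_500, 400):,.2f}")  # $31.50
```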
How can my team use Kimi K2.5 TEE via API?
You can connect your own Chutes API key and give your entire team access to Kimi K2.5 TEE through TypingMind Teams. It lets you build a unified AI workspace where team members can use Kimi K2.5 TEE and other models — without needing their own API keys. You stay in control of usage limits, costs, and permissions, all from a single dashboard.