GLM 4.6

GLM 4.6 via Fireworks AI

Specifications

Context Window

198,000 tokens

Release Date

2025-10-01

Capabilities

Reasoning, Tool calling, Temperature

Availability

Open Weights

Model Overview

Fireworks AI is an inference platform optimized for speed and cost-efficiency. They host a wide range of open-source models with fast response times and developer-friendly APIs.

GLM 4.6 is a glm-4-family model served via Fireworks AI with a 198k-token context window and up to 198k output tokens. It is priced at $0.55/1M input tokens and $2.19/1M output tokens.

Key capabilities include reasoning, tool calling, and temperature control. It supports advanced reasoning for complex multi-step tasks and can call external tools and functions for agentic workflows.
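If the Fireworks deployment exposes the usual OpenAI-compatible chat completions interface, a tool-calling request might look like the sketch below. The base URL, the get_weather tool, and the exact tools support are assumptions to verify against the Fireworks documentation.

```python
# Minimal sketch of a tool-calling request against Fireworks AI's
# OpenAI-compatible endpoint (base URL and tools support assumed).
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_FIREWORKS_API_KEY",                   # placeholder
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Look up the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="accounts/fireworks/models/glm-4p6",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
    temperature=0.6,  # temperature is a supported sampling control
)

print(response.choices[0].message)
```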

Details

Provider: Fireworks AI
Model ID: accounts/fireworks/models/glm-4p6
Family: glm-4
Release Date: 2025-10-01
Last Updated: 2025-10-01
Knowledge Cutoff: 2025-04
Context Window: 198,000 tokens
Max Output: 198,000 tokens
Input Cost / 1M: $0.55
Output Cost / 1M: $2.19
Cache Read / 1M: $0.28

Frequently Asked Questions

How much does GLM 4.6 cost to use?

GLM 4.6 is priced at $0.55/1M input tokens and $2.19/1M output tokens. Use the cost estimator on this page to calculate your expected spend based on your usage pattern.
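As a rough illustration of that arithmetic, here is a minimal sketch using the listed rates; the token counts and call volume are placeholders.

```python
# Back-of-the-envelope cost estimate using the listed GLM 4.6 rates:
# $0.55 per 1M input tokens, $2.19 per 1M output tokens.
INPUT_PRICE_PER_M = 0.55
OUTPUT_PRICE_PER_M = 2.19

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for a given number of input and output tokens."""
    return (input_tokens * INPUT_PRICE_PER_M + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Hypothetical workload: 2,000 input tokens and 500 output tokens per call.
per_call = estimate_cost(2_000, 500)   # ≈ $0.0022
monthly = per_call * 10_000            # ≈ $21.95 for 10,000 calls per month
print(f"per call: ${per_call:.4f}, monthly: ${monthly:.2f}")
```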

What is a token and how does it relate to pricing?

A token is a chunk of text — roughly ¾ of a word in English. For example, "chatbot" is two tokens. LLM API pricing is based on the number of tokens you send (input) and receive (output). Input tokens include your prompts, uploaded documents, and images, while output tokens are the model's generated responses.
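For budgeting purposes, that rule of thumb can be turned into a rough estimator like the sketch below. GLM 4.6's actual tokenizer will produce different counts, so treat this as a heuristic, not an exact measure.

```python
# Rough token estimator based on the "~3/4 of a word per token" rule of thumb.
def estimate_tokens(text: str) -> int:
    words = len(text.split())
    return round(words / 0.75)  # roughly 4 tokens per 3 words

prompt = "Summarize the attached quarterly report in three bullet points."
print(estimate_tokens(prompt))  # 9 words -> ~12 tokens under this heuristic
```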

Why are input and output tokens priced differently?

LLM providers charge separately for input and output tokens. Output tokens are typically more expensive because generating each token requires more compute — the model must run a full forward pass for every token it produces, while input tokens are processed in parallel.

What is the context window of GLM 4.6?

GLM 4.6 supports a context window of 198,000 tokens. This is the maximum number of tokens (input + output combined) the model can process in a single request. Larger context windows let you send longer documents or maintain longer conversation histories.
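A minimal sketch of a pre-flight check against that limit; the token counts are placeholders and would come from your own estimator or the provider's usage metadata.

```python
# Context-budget check: input plus requested output must fit in the window.
CONTEXT_WINDOW = 198_000

def fits_in_context(input_tokens: int, max_output_tokens: int) -> bool:
    return input_tokens + max_output_tokens <= CONTEXT_WINDOW

print(fits_in_context(150_000, 40_000))  # True: 190,000 tokens total
print(fits_in_context(180_000, 30_000))  # False: 210,000 tokens exceeds the window
```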

How accurate is this cost estimation?

This tool provides a ballpark estimate based on per-token pricing. Actual costs may differ due to prompt caching, batched API calls, volume discounts, reasoning token overhead, and provider-specific billing rules. Use it for budgeting and comparison, not as an invoice prediction.
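As one illustration of how those factors shift the estimate, the sketch below applies the listed $0.28/1M cache-read rate to a partially cached prompt. Whether a given prompt segment is billed at the cache-read rate depends on Fireworks' caching and billing rules.

```python
# Estimate with prompt caching: cached input tokens are billed at the
# listed cache-read rate instead of the full input rate. Illustration only.
INPUT_PRICE_PER_M = 0.55
OUTPUT_PRICE_PER_M = 2.19
CACHE_READ_PER_M = 0.28

def estimate_cost(fresh_input: int, cached_input: int, output: int) -> float:
    return (fresh_input * INPUT_PRICE_PER_M
            + cached_input * CACHE_READ_PER_M
            + output * OUTPUT_PRICE_PER_M) / 1_000_000

# Same 2,000-token prompt as before, but 1,500 tokens hit the cache on repeat calls.
print(f"${estimate_cost(500, 1_500, 500):.4f}")  # ≈ $0.0018 vs ≈ $0.0022 uncached
```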

How does GLM 4.6 pricing compare to other models?

You can compare GLM 4.6 with other models on our LLM API pricing calculator. Use the cost estimator to see side-by-side cost breakdowns across different providers and models to find the best fit for your budget and requirements.

What factors affect my total API cost?

Your total cost depends on several factors: the number of API calls you make, the length of your prompts (input tokens), the length of generated responses (output tokens), whether you use features like image or document uploads (which add input tokens), and any provider-specific charges for caching or batch processing.
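The sketch below compares two hypothetical usage patterns under the listed rates to show how call volume and prompt length drive the total; all figures are placeholders.

```python
# Comparing two hypothetical usage patterns at the listed GLM 4.6 rates.
INPUT_PRICE_PER_M = 0.55
OUTPUT_PRICE_PER_M = 2.19

def monthly_cost(calls: int, input_tokens: int, output_tokens: int) -> float:
    per_call = (input_tokens * INPUT_PRICE_PER_M
                + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000
    return calls * per_call

print(f"chat-style:     ${monthly_cost(50_000, 1_000, 300):.2f}")    # many short calls  ≈ $60.35
print(f"document-heavy: ${monthly_cost(2_000, 60_000, 1_500):.2f}")  # fewer, longer prompts ≈ $72.57
```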

How can my team use GLM 4.6 via API?

You can connect your own Fireworks AI API key and give your entire team access to GLM 4.6 through TypingMind Teams. It lets you build a unified AI workspace where team members can use GLM 4.6 and other models — without needing their own API keys. You stay in control of usage limits, costs, and permissions, all from a single dashboard.
