Phi 3.5 MoE Instruct

Phi 3.5 MoE Instruct via NVIDIA

Specifications

Context Window

128,000 tokens

Release Date

2024-08-17

Capabilities

Tool calling · Structured output · Temperature

Availability

Open Weights

Model Overview

NVIDIA provides AI inference through their NIM (NVIDIA Inference Microservices) platform, offering optimized access to both NVIDIA-developed and popular open-source models on their GPU infrastructure.
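
As a sketch of what access typically looks like, the Python snippet below calls the model through NIM's OpenAI-compatible chat endpoint. The base URL and the NVIDIA_API_KEY environment variable are assumptions based on NIM's usual setup, so verify them against NVIDIA's documentation.

```python
# Minimal sketch of calling Phi 3.5 MoE Instruct through NVIDIA NIM's
# OpenAI-compatible API. The base URL and NVIDIA_API_KEY env var are
# assumptions; check NVIDIA's NIM documentation for your deployment.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed NIM endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # assumed env var name
)

response = client.chat.completions.create(
    model="microsoft/phi-3.5-moe-instruct",  # model ID from this page
    messages=[{"role": "user", "content": "Explain MoE models in two sentences."}],
    temperature=0.2,   # temperature control is listed as a capability
    max_tokens=512,    # stays well under the 4,096-token output cap
)
print(response.choices[0].message.content)
```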

Phi 3.5 MoE Instruct is a Microsoft model served through NVIDIA's infrastructure, with a 128,000-token context window and up to 4,096 output tokens. It is priced at $0.00/1M input tokens and $0.00/1M output tokens.

Key capabilities include tool calling, structured output, and temperature control. The model can call external tools and functions, enabling agentic workflows.
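
Here is a hedged sketch of what a tool-calling request could look like, assuming NIM's OpenAI-compatible endpoint accepts the standard tools parameter; the get_weather function and its schema are purely illustrative.

```python
# Hedged sketch of tool calling via the OpenAI-style "tools" parameter,
# assuming NIM's endpoint mirrors it. The get_weather function and its
# schema are hypothetical, for illustration only.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed NIM endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # assumed env var name
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="microsoft/phi-3.5-moe-instruct",
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    tools=tools,
)
# If the model chose to call a tool, the structured call is returned
# instead of plain text:
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```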

Details

Provider: NVIDIA
Model ID: microsoft/phi-3.5-moe-instruct
Family: N/A
Release Date: 2024-08-17
Last Updated: 2024-08-17
Knowledge Cutoff: N/A
Context Window: 128,000 tokens
Max Output: 4,096 tokens
Input Cost / 1M: $0.00
Output Cost / 1M: $0.00


Frequently Asked Questions

How much does Phi 3.5 MoE Instruct cost to use?

Phi 3.5 MoE Instruct is priced at $0.00/1M input tokens and $0.00/1M output tokens. Use the cost estimator on this page to calculate your expected spend based on your usage pattern.

What is a token and how does it relate to pricing?

A token is a chunk of text — roughly ¾ of a word in English. For example, "chatbot" is two tokens. LLM API pricing is based on the number of tokens you send (input) and receive (output). Input tokens include your prompts, uploaded documents, and images, while output tokens are the model's generated responses.
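
To make the arithmetic concrete, here is a small sketch using the ¾-word heuristic above. Exact counts depend on the model's own tokenizer, so treat this as a budgeting approximation; estimate_tokens is a hypothetical helper.

```python
# Back-of-the-envelope token estimate using the ~3/4-word-per-token
# heuristic above. Exact counts depend on the model's own tokenizer,
# so treat this as a budgeting approximation.
def estimate_tokens(text: str) -> int:
    words = len(text.split())
    return round(words * 4 / 3)  # ~4 tokens for every 3 words

prompt = "Explain the difference between input and output tokens."
print(estimate_tokens(prompt))  # -> 11 for this 8-word prompt
```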

Why are input and output tokens priced differently?

LLM providers charge separately for input and output tokens. Output tokens are typically more expensive because generating each token requires more compute — the model must run a full forward pass for every token it produces, while input tokens are processed in parallel.

What is the context window of Phi 3.5 MoE Instruct?

Phi 3.5 MoE Instruct supports a context window of 128,000 tokens. This is the maximum number of tokens (input + output combined) the model can process in a single request. Larger context windows let you send longer documents or maintain longer conversation histories.
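
As an illustration, the sketch below checks whether a prompt leaves room for a full-length response inside the shared window, using the limits from the spec table on this page; fits_in_context is a hypothetical helper, not part of any API.

```python
# Sketch of a pre-flight check against the shared 128,000-token window,
# using the limits in this page's spec table.
CONTEXT_WINDOW = 128_000  # input + output combined
MAX_OUTPUT = 4_096

def fits_in_context(prompt_tokens: int, max_output: int = MAX_OUTPUT) -> bool:
    # Reserve room for the response, since it shares the window
    # with the prompt.
    return prompt_tokens + max_output <= CONTEXT_WINDOW

print(fits_in_context(120_000))  # True:  124,096 <= 128,000
print(fits_in_context(125_000))  # False: 129,096 >  128,000
```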

How accurate is this cost estimation?

This tool provides a ballpark estimate based on per-token pricing. Actual costs may differ due to prompt caching, batched API calls, volume discounts, reasoning token overhead, and provider-specific billing rules. Use it for budgeting and comparison, not as an invoice prediction.

How does Phi 3.5 MoE Instruct pricing compare to other models?

You can compare Phi 3.5 MoE Instruct with other models on our LLM API pricing calculator. Use the cost estimator to see side-by-side cost breakdowns across different providers and models to find the best fit for your budget and requirements.

What factors affect my total API cost?

Your total cost depends on several factors: the number of API calls you make, the length of your prompts (input tokens), the length of generated responses (output tokens), whether you use features like image or document uploads (which add input tokens), and any provider-specific charges for caching or batch processing.
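
The sketch below folds those factors into a simple estimator. The rates in the usage example are hypothetical placeholders, since this page lists $0.00/1M in both directions; substitute your provider's actual prices.

```python
# Simple estimator combining the factors above. The rates in the usage
# example are hypothetical placeholders; this page lists $0.00/1M for
# both input and output, so plug in your provider's actual prices.
def estimate_cost(calls: int, input_tokens: int, output_tokens: int,
                  input_price_per_m: float, output_price_per_m: float) -> float:
    input_cost = calls * input_tokens / 1_000_000 * input_price_per_m
    output_cost = calls * output_tokens / 1_000_000 * output_price_per_m
    return input_cost + output_cost

# 1,000 calls, 2,000-token prompts, 500-token replies, at hypothetical
# rates of $0.10/1M input and $0.40/1M output:
print(f"${estimate_cost(1_000, 2_000, 500, 0.10, 0.40):.2f}")  # $0.40
```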

How can my team use Phi 3.5 MoE Instruct via API?

You can connect your own NVIDIA API key and give your entire team access to Phi 3.5 MoE Instruct through TypingMind Teams. It lets you build a unified AI workspace where team members can use Phi 3.5 MoE Instruct and other models — without needing their own API keys. You stay in control of usage limits, costs, and permissions, all from a single dashboard.
