
Phi-4-Mini via NVIDIA
Specifications
- Context Window: 131,072 tokens
- Release Date: 2024-12-01
- Capabilities: Attachments, Reasoning, Tool calling, Temperature, Image input, Audio input
- Availability: Proprietary API
Model Overview
NVIDIA provides AI inference through their NIM (NVIDIA Inference Microservices) platform, offering optimized access to both NVIDIA-developed and popular open-source models on their GPU infrastructure.
Phi-4-Mini is a Phi-family model developed by Microsoft and served through NVIDIA's platform, with a 131k-token context window and up to 8k output tokens. It is listed at $0.00 per 1M input tokens and $0.00 per 1M output tokens.
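Cost at these per-million-token rates is simple arithmetic; the following minimal sketch (function name and defaults are our own, not from any vendor SDK) shows how the listed prices translate into a per-request estimate:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float = 0.0,
                  output_price_per_m: float = 0.0) -> float:
    """Estimate a request's dollar cost from per-1M-token prices.

    Defaults match the $0.00/1M input and output rates listed above.
    """
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# A maximal request (full 131,072-token context, 8,192 output tokens)
# at the listed rates costs nothing:
print(estimate_cost(131_072, 8_192))  # 0.0
```

Swapping in nonzero prices gives estimates for paid models on the same footing, which is how per-million pricing is usually compared.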
Key capabilities include attachments, reasoning, tool calling, temperature control, image input, and audio input. The model supports advanced reasoning for complex multi-step tasks and can call external tools and functions in agentic workflows.
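As an illustration of tool calling, here is a sketch of a chat-completions request payload in the widely used OpenAI-compatible format that NVIDIA's hosted endpoints accept; the model id and the `get_weather` tool are assumptions for illustration, not confirmed by this page:

```python
import json

# Hypothetical tool-calling request body; model id and tool are assumed.
payload = {
    "model": "microsoft/phi-4-mini-instruct",  # assumed hosted model id
    "messages": [
        {"role": "user", "content": "What is the weather in Paris?"}
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical external tool
            "description": "Look up current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "temperature": 0.2,     # the model exposes a temperature control
    "max_tokens": 8192,     # the stated output-token cap
}

body = json.dumps(payload)  # what would be POSTed to the API
```

The model is expected to respond with a `tool_calls` entry naming `get_weather` and its arguments, which the calling application then executes and feeds back as a `tool` message.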






