# Providers (/docs/providers)


## Provider Architecture [#provider-architecture]

Each provider in Yunxin has an independent adapter that interfaces with the provider's native API. This ensures:

* **100% API fidelity** — All provider-specific features are preserved
* **Optimal format selection** — Yunxin uses each provider's best-suited API format, not blindly defaulting to OpenAI compatibility
* **Independent updates** — New provider features can be supported without affecting other providers
* **Graceful degradation** — If a provider is down, the system can fall back to alternatives
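The adapter-plus-fallback design above can be sketched in a few lines. This is an illustrative sketch only; the class and method names below are assumptions, not Yunxin's actual internals:

```python
from abc import ABC, abstractmethod

# Illustrative sketch of the per-provider adapter pattern described above.
# Names are hypothetical — they do not reflect Yunxin's real implementation.
class ProviderAdapter(ABC):
    @abstractmethod
    def chat(self, messages: list[dict]) -> str: ...

    @abstractmethod
    def is_healthy(self) -> bool: ...

class PrimaryAdapter(ProviderAdapter):
    """Stands in for a provider that is currently down."""
    def chat(self, messages):
        raise ConnectionError("provider unavailable")
    def is_healthy(self):
        return False

class FallbackAdapter(ProviderAdapter):
    """Stands in for a healthy alternative provider."""
    def chat(self, messages):
        return "ok"
    def is_healthy(self):
        return True

def route(adapters: list[ProviderAdapter], messages: list[dict]) -> str:
    # Graceful degradation: try adapters in priority order,
    # skipping any provider whose health check fails.
    for adapter in adapters:
        if adapter.is_healthy():
            return adapter.chat(messages)
    raise RuntimeError("no healthy provider available")

print(route([PrimaryAdapter(), FallbackAdapter()], [{"role": "user", "content": "hi"}]))
```

Because each adapter is independent, a new provider feature only touches that provider's adapter, which is what makes independent updates possible.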

## All Providers [#all-providers]

All providers receive first-class support with full API coverage. Models are synced automatically — use the [Models API](/docs/models-api) to query available models.

| Provider                                        | Key Features                                                        |
| ----------------------------------------------- | ------------------------------------------------------------------- |
| **[Alibaba (Tongyi)](/docs/providers/alibaba)** | Multilingual, reasoning mode                                        |
| **[Anthropic](/docs/providers/anthropic)**      | Extended Thinking, long context, Computer Use                       |
| **[Azure AI Foundry](/docs/providers/azure)**   | Enterprise security, multi-provider, regional deployment            |
| **[Cloudflare](/docs/providers/cloudflare)**    | Edge inference, global CDN                                          |
| **[DeepSeek](/docs/providers/deepseek)**        | Open-source reasoning, cost-effective, strong math/coding           |
| **[GitHub Models](/docs/providers/github)**     | Free tier, GitHub integration, developer-friendly                   |
| **[Google (Gemini)](/docs/providers/google)**   | Long context, native multimodal, Deep Thinking                      |
| **[Groq](/docs/providers/groq)**                | LPU-accelerated, ultra-fast inference                               |
| **[Hugging Face](/docs/providers/huggingface)** | Largest model hub, Serverless Inference API                         |
| **[LM Studio](/docs/providers/lmstudio)**       | Desktop GUI, local inference, model management                      |
| **[MiniMax](/docs/providers/minimax)**          | Multimodal generation, TTS, video generation                        |
| **[Mistral AI](/docs/providers/mistral)**       | Open-source, code generation, real-time audio                       |
| **[ModelScope](/docs/providers/modelscope)**    | Chinese models, multimodal, free tier                               |
| **[Moonshot](/docs/providers/moonshot)**        | Long context, document analysis, strong reasoning                   |
| **[NVIDIA NIM](/docs/providers/nvidia)**        | GPU-accelerated, enterprise-grade                                   |
| **[Ollama](/docs/providers/ollama)**            | Simple local deployment, CLI tool, model library                    |
| **[OpenAI](/docs/providers/openai)**            | Full suite, Responses API, Realtime API, vision, embeddings         |
| **[OpenRouter](/docs/providers/openrouter)**    | Model marketplace, price comparison, automatic fallback             |
| **[vLLM](/docs/providers/vllm)**                | High-throughput serving, PagedAttention, production deployment      |
| **[Volcengine](/docs/providers/volcengine)**    | ByteDance ecosystem, video generation, simultaneous interpretation  |
| **[xAI](/docs/providers/xai)**                  | Deep Thinking, long context, real-time knowledge, structured output |
| **[Z.AI](/docs/providers/zai)**                 | Professional-grade models, enterprise-focused, high reliability     |
| **[Zhipu AI](/docs/providers/zhipu)**           | Chinese-optimized, strong bilingual support                         |

<Callout type="info">
  All providers support OpenAI-compatible API format through Yunxin, ensuring seamless integration with your existing code.
</Callout>
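In practice, the OpenAI-compatible surface means an existing client only needs to build the standard chat-completions payload and point it at Yunxin's endpoint. A minimal sketch (the model ID below is a placeholder — query the Models API for real IDs):

```python
import json

# Standard OpenAI-style chat completions payload. POST this to
# https://api.yuhuanstudio.com/v1/chat/completions with an
# "Authorization: Bearer YOUR_API_KEY" header.
# "provider/model-id" is a placeholder, not a real model name.
payload = {
    "model": "provider/model-id",
    "messages": [
        {"role": "user", "content": "Hello"},
    ],
}

print(json.dumps(payload, indent=2))
```

With the official `openai` Python SDK, the equivalent is passing `base_url="https://api.yuhuanstudio.com/v1"` and your Yunxin key when constructing the client; no other code changes are needed.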

## Provider Configuration [#provider-configuration]

Providers are configured at the system level by administrators:

1. Navigate to **Dashboard → Admin → Providers**
2. Enable the desired provider
3. Configure API credentials
4. Set custom endpoint URLs if needed (for self-hosted instances)

## Provider Health [#provider-health]

Check provider status via the API:

```bash
curl https://api.yuhuanstudio.com/v1/providers \
  -H "Authorization: Bearer YOUR_API_KEY"
```

The response includes a real-time health status for each enabled provider.
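A client can use that status to filter for healthy providers before routing requests. The sketch below assumes a simple list-of-objects response with `id` and `status` fields — this schema is hypothetical, so check the actual response shape against the API:

```python
# Hypothetical response shape — the field names here are assumptions,
# not the documented schema of GET /v1/providers.
providers = [
    {"id": "openai", "status": "healthy"},
    {"id": "groq", "status": "degraded"},
]

# Keep only providers currently reporting healthy.
healthy = [p["id"] for p in providers if p["status"] == "healthy"]
print(healthy)
```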

## Related Topics [#related-topics]

* [Authentication](/docs/authentication) - Learn how to configure API keys
* [Rate Limits](/docs/rate-limits) - Understand usage limits
* [Best Practices](/docs/best-practices) - Optimization tips
* [Vision](/docs/vision) - Multimodal capabilities
* [Function Calling](/docs/function-calling) - Tool use guide
