# Groq (/docs/providers/groq)


## Overview [#overview]

[Groq](https://groq.com) provides ultra-fast inference on its custom LPU (Language Processing Unit) hardware, delivering responses with very low latency.

**Official Website:** [https://groq.com](https://groq.com)
**API Documentation:** [https://console.groq.com/docs](https://console.groq.com/docs)

## Key Features [#key-features]

* **Ultra-Fast Inference** — LPU-accelerated for minimal latency
* **High Throughput** — Excellent for real-time applications
* **Open-Source Models** — Hosted versions of popular open-source models
* **Function Calling** — Tool use support
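
Since the API follows the OpenAI chat-completions format, function calling uses the standard `tools` parameter. A minimal sketch, assuming a hypothetical `get_weather` tool (the tool name, schema, and helper function are illustrative, not part of this API):

```python
import json

# Illustrative tool definition in the OpenAI "tools" schema format.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def ask_with_tools(client, model_id, question):
    """Send a chat request that allows the model to call get_weather."""
    response = client.chat.completions.create(
        model=model_id,
        messages=[{"role": "user", "content": question}],
        tools=[weather_tool],
    )
    message = response.choices[0].message
    if message.tool_calls:  # the model chose to call a tool
        return json.loads(message.tool_calls[0].function.arguments)
    return message.content  # plain-text answer, no tool needed
```

Here `client` is an `OpenAI` instance configured as in the usage example below; when the model emits a tool call, your code runs the real function and sends the result back in a follow-up message.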

## Usage Example [#usage-example]

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.yuhuanstudio.com/v1"
)

response = client.chat.completions.create(
    model="model-id",  # replace with an ID from the Models API below
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)
```
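
For latency-sensitive applications, responses can also be streamed so tokens print as they arrive instead of after the full completion. A sketch using the same client (the `stream_reply` helper name is ours):

```python
def stream_reply(client, model_id, prompt):
    """Print tokens as they arrive and return the full reply."""
    stream = client.chat.completions.create(
        model=model_id,
        messages=[{"role": "user", "content": prompt}],
        stream=True,  # the server sends incremental chunks
    )
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:  # some chunks carry no content (e.g. role or finish markers)
            print(delta, end="", flush=True)
            parts.append(delta)
    return "".join(parts)
```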

## Available Models [#available-models]

Use the [Models API](/docs/models-api) to query available models:

```bash
curl https://api.yuhuanstudio.com/v1/models?provider=groq \
  -H "Authorization: Bearer YOUR_API_KEY"
```
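
The same query can be made from Python. The OpenAI SDK lets request methods pass extra query parameters; this sketch assumes the endpoint accepts the `provider` filter exactly as in the curl example:

```python
def list_provider_models(client, provider="groq"):
    """Return the model IDs available for one upstream provider."""
    page = client.models.list(extra_query={"provider": provider})
    return [model.id for model in page.data]
```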

<Callout type="info">
  Models and pricing are synced automatically from Groq. Check the dashboard for current availability and rates.
</Callout>

## Official Resources [#official-resources]

* [Groq Website](https://groq.com)
* [Console](https://console.groq.com)
* [Documentation](https://console.groq.com/docs)
