# Ollama (/docs/providers/ollama)


## Overview [#overview]

[Ollama](https://ollama.com) is a streamlined tool for running and managing large language models locally. It handles downloading, storing, and serving models on your own machine through an easy-to-use CLI, and exposes an OpenAI-compatible HTTP API.

**Official Website:** [https://ollama.com](https://ollama.com)
**Documentation:** [https://docs.ollama.com](https://docs.ollama.com)

## Key Features [#key-features]

* **Local Deployment** — Run models on your own machine
* **Simple CLI** — Easy model management
* **Model Library** — Browse models at ollama.com/library
* **OpenAI-Compatible API** — Easy integration
* **Custom Models** — Run any GGUF format model

## Quick Start [#quick-start]

```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull and run a model (replace model-id with a name from ollama.com/library)
ollama run model-id
```

## Usage Example [#usage-example]

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama"  # any non-empty string works; Ollama does not validate the key
)

response = client.chat.completions.create(
    model="model-id",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)
```
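Under the hood, the client above POSTs a JSON body to `http://localhost:11434/v1/chat/completions` following the standard OpenAI chat-completions schema. A minimal sketch of the request body (the `model-id` placeholder is carried over from the example above):

```python
import json

# Body the OpenAI-compatible endpoint expects; "stream" is optional and
# defaults to False, set it to True for incremental token streaming.
payload = {
    "model": "model-id",  # placeholder, as in the example above
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": False,
}

body = json.dumps(payload)
print(body)
```

This is useful when calling Ollama from environments without an OpenAI client library, e.g. with `curl` or a plain HTTP request.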

## Available Models [#available-models]

Use the [Models API](/docs/models-api) to query available models:

```bash
curl "https://api.yuhuanstudio.com/v1/models?provider=ollama" \
  -H "Authorization: Bearer YOUR_API_KEY"
```
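Assuming the endpoint follows the standard OpenAI models-list schema (an `object: "list"` wrapper with a `data` array), extracting the model IDs from a response looks like this. The sample JSON below is illustrative, not real output from the endpoint above:

```python
import json

# Illustrative response in the OpenAI models-list shape (assumed schema).
sample = """
{"object": "list",
 "data": [{"id": "llama3.2", "object": "model"},
          {"id": "mistral", "object": "model"}]}
"""

# Each entry in "data" carries the model's "id", usable as the "model"
# parameter in chat-completion requests.
model_ids = [m["id"] for m in json.loads(sample)["data"]]
print(model_ids)  # → ['llama3.2', 'mistral']
```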

<Callout type="info">
  Models are synced from your Ollama instance. Check the dashboard for current availability.
</Callout>

## Official Resources [#official-resources]

* [Ollama Website](https://ollama.com)
* [Documentation](https://docs.ollama.com)
* [Model Library](https://ollama.com/library)
