Messages API
Anthropic-compatible Messages API for native Claude integration.
The Messages API provides full compatibility with Anthropic's Messages API format, allowing you to use Claude and other models with their native API structure.
POST /v1/messages

Request
```python
from anthropic import Anthropic

client = Anthropic(
    api_key="YOUR_API_KEY",
    base_url="https://api.yuhuanstudio.com/v1"
)

message = client.messages.create(
    model="model-id",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello, Claude!"}
    ]
)

print(message.content[0].text)
```

Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model ID to use |
| messages | array | Yes | List of messages |
| max_tokens | integer | Yes | Maximum number of tokens to generate |
| system | string/array | No | System prompt |
| temperature | number | No | Sampling temperature (0-1) |
| top_p | number | No | Nucleus sampling (0-1) |
| top_k | integer | No | Top-k sampling |
| stop_sequences | array | No | Stop sequences |
| stream | boolean | No | Enable streaming (default: false) |
| tools | array | No | Tool definitions |
| tool_choice | object | No | Tool selection behavior |
| thinking | object | No | Extended thinking config |
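Putting the table together: a request body for POST /v1/messages combines the three required fields with any of the optional sampling parameters. A minimal sketch of the JSON payload (all values are illustrative):

```python
import json

# Illustrative request body for POST /v1/messages.
# model, messages, and max_tokens are required; the rest are optional.
payload = {
    "model": "model-id",
    "max_tokens": 1024,
    "system": "You are a concise assistant.",
    "temperature": 0.7,               # sampling temperature, 0-1
    "top_p": 0.9,                     # nucleus sampling, 0-1
    "stop_sequences": ["\n\nHuman:"],
    "messages": [
        {"role": "user", "content": "Hello!"}
    ],
}

body = json.dumps(payload)
print(body)
```

The SDK examples below build this same body for you; the raw form is mainly useful when calling the endpoint directly over HTTP.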
Response
```json
{
  "id": "msg_abc123",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "Hello! How can I help you today?"
    }
  ],
  "model": "model-id",
  "stop_reason": "end_turn",
  "usage": {
    "input_tokens": 12,
    "output_tokens": 25
  }
}
```

Response Fields
| Field | Type | Description |
|---|---|---|
| id | string | Unique message ID |
| type | string | Always "message" |
| role | string | Always "assistant" |
| content | array | Content blocks |
| model | string | Model used |
| stop_reason | string | Reason for stopping |
| usage | object | Token usage stats |
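Because content is an array of typed blocks, text should be collected by filtering on block type rather than assuming a single element. A sketch using the example response above:

```python
# The example response from above, as a plain dict.
response = {
    "id": "msg_abc123",
    "type": "message",
    "role": "assistant",
    "content": [
        {"type": "text", "text": "Hello! How can I help you today?"}
    ],
    "model": "model-id",
    "stop_reason": "end_turn",
    "usage": {"input_tokens": 12, "output_tokens": 25},
}

# Concatenate every text block; other block types (tool_use, thinking) are skipped.
text = "".join(b["text"] for b in response["content"] if b["type"] == "text")
total_tokens = response["usage"]["input_tokens"] + response["usage"]["output_tokens"]

print(text)          # Hello! How can I help you today?
print(total_tokens)  # 37
```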
System Prompt
Pass a system prompt separately from messages:
```python
response = client.messages.create(
    model="model-id",
    max_tokens=1024,
    system="You are a helpful assistant that speaks like a pirate.",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)
```

Vision (Images)
Include images in your messages:
```python
response = client.messages.create(
    model="model-id",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/jpeg",
                        "data": "<base64-data>"
                    }
                },
                {
                    "type": "text",
                    "text": "What's in this image?"
                }
            ]
        }
    ]
)
```

Extended Thinking
Enable Claude's extended thinking for complex reasoning:
```python
response = client.messages.create(
    model="model-id",
    max_tokens=16000,  # must be larger than the thinking budget
    thinking={
        "type": "enabled",
        "budget_tokens": 10000
    },
    messages=[
        {"role": "user", "content": "Solve this puzzle..."}
    ]
)
```

The response will include thinking blocks:
```json
{
  "content": [
    {
      "type": "thinking",
      "thinking": "Let me analyze this step by step..."
    },
    {
      "type": "text",
      "text": "Based on my analysis..."
    }
  ]
}
```

Streaming
Enable streaming for real-time responses:
```python
with client.messages.stream(
    model="model-id",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a story"}]
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
```

Stream Events
| Event | Description |
|---|---|
| message_start | Message started |
| content_block_start | Content block started |
| content_block_delta | Content delta |
| content_block_stop | Content block finished |
| message_delta | Message delta with usage |
| message_stop | Message finished |
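The events above arrive in the listed order; text accumulates from the content_block_delta events. A simplified sketch using plain dicts in place of the typed event objects the SDK delivers:

```python
# Simplified event sequence, following the order in the table above.
events = [
    {"type": "message_start"},
    {"type": "content_block_start", "index": 0},
    {"type": "content_block_delta", "delta": {"type": "text_delta", "text": "Hel"}},
    {"type": "content_block_delta", "delta": {"type": "text_delta", "text": "lo!"}},
    {"type": "content_block_stop", "index": 0},
    {"type": "message_delta", "usage": {"output_tokens": 2}},
    {"type": "message_stop"},
]

# Collect text deltas; other event types carry lifecycle and usage info.
parts = []
for event in events:
    if event["type"] == "content_block_delta" and event["delta"]["type"] == "text_delta":
        parts.append(event["delta"]["text"])

print("".join(parts))  # Hello!
```

The `stream.text_stream` helper shown earlier performs this accumulation for you.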
Tool Use
Define tools for function calling:
```python
response = client.messages.create(
    model="model-id",
    max_tokens=1024,
    tools=[
        {
            "name": "get_weather",
            "description": "Get the current weather",
            "input_schema": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name"
                    }
                },
                "required": ["location"]
            }
        }
    ],
    messages=[
        {"role": "user", "content": "What's the weather in Tokyo?"}
    ]
)

# Check for tool use
for block in response.content:
    if block.type == "tool_use":
        print(f"Tool: {block.name}")
        print(f"Args: {block.input}")
```

Use the Models API to discover available models. Check model capabilities to see which features are supported.
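To complete the tool-use loop from the example above, the assistant's tool_use block is echoed back and your tool's output is attached as a tool_result block whose tool_use_id matches the block's id. A sketch of building that follow-up turn (the block id and weather value are illustrative; a real flow would pass this messages list to a second client.messages.create call):

```python
# Assume the previous response contained this tool_use block.
tool_use_block = {
    "type": "tool_use",
    "id": "toolu_abc123",  # illustrative ID
    "name": "get_weather",
    "input": {"location": "Tokyo"},
}

messages = [
    {"role": "user", "content": "What's the weather in Tokyo?"},
    # Echo the assistant turn that contained the tool_use block.
    {"role": "assistant", "content": [tool_use_block]},
    # Return the tool's output; tool_use_id must match the block's id.
    {"role": "user", "content": [
        {
            "type": "tool_result",
            "tool_use_id": tool_use_block["id"],
            "content": "15°C, clear skies",  # illustrative tool output
        }
    ]},
]

print(messages[-1]["content"][0]["tool_use_id"])  # toolu_abc123
```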