AIPower provides an OpenAI-compatible API. If you're using the OpenAI SDK, just change the base URL.
1. Get your API key from the dashboard.
2. Use any OpenAI-compatible SDK with our base URL:
```
https://api.aipower.me/v1
```

Use Bearer token authentication with your API key:

```
Authorization: Bearer YOUR_API_KEY
```

POST /v1/chat/completions
Creates a model response for the given chat conversation. Supports streaming.
```bash
curl https://api.aipower.me/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek/deepseek-chat",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is the capital of France?"}
    ],
    "temperature": 0.7,
    "max_tokens": 1000,
    "stream": false
  }'
```

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aipower.me/v1",
    api_key="YOUR_API_KEY",
)

# Non-streaming
response = client.chat.completions.create(
    model="deepseek/deepseek-chat",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)

# Streaming
stream = client.chat.completions.create(
    model="deepseek/deepseek-chat",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```

```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.aipower.me/v1',
  apiKey: 'YOUR_API_KEY',
});

const response = await client.chat.completions.create({
  model: 'deepseek/deepseek-chat',
  messages: [{ role: 'user', content: 'Hello!' }],
});
console.log(response.choices[0].message.content);
```

| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model ID (e.g., deepseek/deepseek-chat) |
| messages | array | Yes | Array of message objects with role and content |
| temperature | number | No | Sampling temperature (0-2). Default: 1 |
| max_tokens | integer | No | Maximum tokens to generate |
| stream | boolean | No | Stream response via SSE. Default: false |
| top_p | number | No | Nucleus sampling. Default: 1 |
| stop | string \| array | No | Stop sequences |
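Optional parameters are best left out of the request body when unset, so the server defaults from the table apply. A minimal sketch of assembling the body in Python (`build_chat_payload` is a hypothetical helper for illustration, not part of any SDK):

```python
import json

def build_chat_payload(model, messages, temperature=None, max_tokens=None,
                       stream=False, top_p=None, stop=None):
    """Assemble a /v1/chat/completions body, omitting unset optionals."""
    payload = {"model": model, "messages": messages, "stream": stream}
    if temperature is not None:
        payload["temperature"] = temperature  # 0-2, server default 1
    if top_p is not None:
        payload["top_p"] = top_p              # nucleus sampling, default 1
    if max_tokens is not None:
        payload["max_tokens"] = max_tokens
    if stop is not None:
        payload["stop"] = stop                # string or array of sequences
    return payload

body = build_chat_payload(
    "deepseek/deepseek-chat",
    [{"role": "user", "content": "Hello!"}],
    temperature=0.7,
    max_tokens=1000,
)
print(json.dumps(body))
```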
GET /v1/models
Returns all available models with pricing info.
```bash
curl https://api.aipower.me/v1/models
```

| Model ID | Input $/M | Output $/M | Context |
|---|---|---|---|
| deepseek/deepseek-chat | $0.50 | $0.80 | 64K |
| deepseek/deepseek-reasoner | $0.50 | $0.80 | 64K |
| qwen/qwen-turbo | $0.12 | $0.50 | 128K |
| qwen/qwen-plus | $0.20 | $2.80 | 128K |
| minimax/minimax-text-01 | $0.50 | $2.00 | 1M |
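Since prices are quoted per million tokens, the cost of a call is straightforward arithmetic on the table above. For example, a `deepseek/deepseek-chat` call with 2,000 input tokens and 500 output tokens:

```python
def estimate_cost(input_tokens, output_tokens, input_per_m, output_per_m):
    """Estimate USD cost from the per-million-token rates in the table."""
    return (input_tokens * input_per_m + output_tokens * output_per_m) / 1_000_000

# deepseek/deepseek-chat: $0.50/M input, $0.80/M output
cost = estimate_cost(2_000, 500, 0.50, 0.80)
print(f"${cost:.4f}")  # $0.0010 input + $0.0004 output = $0.0014
```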
| Status Code | Description |
|---|---|
| 401 | Invalid or missing API key |
| 402 | Insufficient credits. Top up at /dashboard/billing |
| 404 | Model not found |
| 429 | Rate limit exceeded |
| 502 | Upstream provider error |
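Client code can branch on these status codes: 429 and 502 are typically transient and worth retrying with backoff, while 401, 402, and 404 require fixing the request. A minimal sketch (the retry policy is an assumption on our part, not an API guarantee):

```python
RETRYABLE = {429, 502}  # rate limits and upstream errors are usually transient

def should_retry(status_code, attempt, max_attempts=3):
    """Retry only transient errors, up to max_attempts tries."""
    return status_code in RETRYABLE and attempt < max_attempts

def backoff_seconds(attempt, base=1.0):
    """Exponential backoff delay before retry attempt N (0-indexed): 1s, 2s, 4s, ..."""
    return base * (2 ** attempt)

print(should_retry(429, 0), backoff_seconds(2))
```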
AIPower uses a prepaid credits system. Add credits via Stripe, and usage is deducted per API call based on token count.
Free tier: 1M tokens included on signup.
Check your balance and add credits at /dashboard/billing.