Switch from OpenAI API to AIPower in 5 Minutes: Migration Guide
April 16, 2026 · 5 min read
If you're currently using the OpenAI API, switching to AIPower takes exactly two lines of code. You keep your existing OpenAI SDK, your existing code, and your existing workflow. You gain access to 16 models across 10 providers (including Chinese AI models) and can save up to 90% on costs.
The 2-Line Migration
Before (OpenAI direct):

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-your-openai-key",
    # base_url defaults to api.openai.com
)
```

After (AIPower):

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aipower.me/v1",  # ADD this line
    api_key="your-aipower-key",            # CHANGE this line
)
```

That's it. Every `client.chat.completions.create()` call in your codebase now works through AIPower. No other code changes needed.
What Stays the Same
- SDK: `pip install openai` — same package
- API format: `chat.completions.create()` — identical interface
- Response format: `response.choices[0].message.content` — same structure
- Streaming: `stream=True` — works identically
- Error handling: same exception types
What You Gain
| Feature | OpenAI Direct | AIPower |
|---|---|---|
| Models available | OpenAI only (4-5) | 16 models, 10 providers |
| Chinese AI models | Not available | DeepSeek, Qwen, GLM, Kimi, Doubao |
| Smart routing | No | auto, auto-cheap, auto-best, auto-code |
| Cheapest option | GPT-4o Mini ($0.15/M) | GLM-4 Flash ($0.01/M) |
| WeChat Pay | No | Yes |
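To make the price gap in the table concrete, here's a quick back-of-the-envelope comparison using the input prices from the table; the 100M-tokens-per-month volume is a hypothetical workload, not a benchmark:

```python
# Input prices per 1M tokens, taken from the comparison table above
PRICE_GPT_4O_MINI = 0.15   # OpenAI's cheapest, $/M tokens
PRICE_GLM_4_FLASH = 0.01   # AIPower's cheapest, $/M tokens

monthly_tokens_m = 100  # hypothetical workload: 100M input tokens/month

cost_openai = PRICE_GPT_4O_MINI * monthly_tokens_m   # $15.00
cost_aipower = PRICE_GLM_4_FLASH * monthly_tokens_m  # $1.00
savings = (cost_openai - cost_aipower) / cost_openai

print(f"OpenAI direct: ${cost_openai:.2f}/month")
print(f"AIPower:       ${cost_aipower:.2f}/month")
print(f"Savings:       {savings:.0%}")
```

Even against OpenAI's cheapest model, the cheapest AIPower route cuts the bill by roughly 93%; measured against GPT-5.4 the gap is wider still, which is where the up-to-90% savings claims come from.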
Model Name Mapping
Your existing OpenAI model names still work. Or switch to cheaper alternatives:
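If you'd rather not hunt down every hard-coded model string, a small lookup can rewrite bare OpenAI names into AIPower's `provider/model` form. This helper is a hypothetical sketch (the mappings mirror the names used in this guide), not part of either SDK:

```python
# Hypothetical compatibility shim: rewrite bare OpenAI model names into
# AIPower's provider-prefixed form; names already prefixed pass through.
AIPOWER_MODELS = {
    "gpt-5.4": "openai/gpt-5.4",
    "gpt-4o-mini": "openai/gpt-4o-mini",
}

def to_aipower_model(name: str) -> str:
    """Return the provider-prefixed model name for AIPower."""
    if "/" in name:  # already in provider/model form
        return name
    return AIPOWER_MODELS.get(name, name)

print(to_aipower_model("gpt-4o-mini"))             # openai/gpt-4o-mini
print(to_aipower_model("deepseek/deepseek-chat"))  # unchanged
```

Call it once at the point where you build the request, and the rest of your code can keep passing the model names it always has.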
```python
# Keep using OpenAI models (with AIPower pricing)
model="openai/gpt-5.4"       # GPT-5.4 via AIPower
model="openai/gpt-4o-mini"   # GPT-4o Mini via AIPower

# Or switch to cheaper models with one parameter change
model="deepseek/deepseek-chat"  # 90% cheaper than GPT-5.4
model="qwen/qwen-turbo"         # 96% cheaper than GPT-5.4
model="zhipu/glm-4-flash"       # 99% cheaper than GPT-5.4

# Or let AI decide
model="auto"        # Best balance of quality and cost
model="auto-cheap"  # Cheapest available model
```

Environment Variable Approach
For production apps, use environment variables so you can switch without code changes:
```python
import os
from openai import OpenAI

client = OpenAI(
    base_url=os.getenv("AI_BASE_URL", "https://api.aipower.me/v1"),
    api_key=os.getenv("AI_API_KEY"),
)
```

```
# .env file
AI_BASE_URL=https://api.aipower.me/v1
AI_API_KEY=your-aipower-key
```

Node.js / TypeScript Migration
```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.aipower.me/v1", // ADD
  apiKey: "your-aipower-key",           // CHANGE
});

// Everything else stays the same
const response = await client.chat.completions.create({
  model: "deepseek/deepseek-chat",
  messages: [{ role: "user", content: "Hello!" }],
});
```

cURL Migration
```bash
# Before (OpenAI)
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer sk-openai-key" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hi"}]}'

# After (AIPower) — change the URL and key, that's it
curl https://api.aipower.me/v1/chat/completions \
  -H "Authorization: Bearer your-aipower-key" \
  -H "Content-Type: application/json" \
  -d '{"model": "deepseek/deepseek-chat", "messages": [{"role": "user", "content": "Hi"}]}'
```

FAQ
- Is it really just 2 lines? Yes. AIPower is fully OpenAI-compatible. Same SDK, same format.
- Can I still use GPT models? Yes. All OpenAI models are available through AIPower.
- What about function calling? Supported on models that support it (GPT, Claude, DeepSeek).
- Is there added latency? About 20-50ms per request. Negligible for most applications.
- Can I switch back? Yes. Change 2 lines back. No lock-in.
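To put the 20-50 ms proxy overhead from the FAQ in perspective, compare it with the time the model itself spends generating; the 2-second completion time below is a hypothetical typical request, not a measured figure:

```python
proxy_overhead_s = 0.05  # worst-case added latency from the FAQ (50 ms)
generation_s = 2.0       # hypothetical typical LLM completion time

overhead_pct = proxy_overhead_s / generation_s
print(f"Worst-case overhead: {overhead_pct:.1%} of a 2 s request")
```

At roughly 2.5% of a typical request's total time, the routing overhead disappears into normal network and generation variance.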
Try it now at aipower.me — 50 free API calls, migration takes 2 minutes. Keep your code, access 16 models, save up to 90%.