Guide
AI API Security Best Practices: Keys, Auth, and Data Protection
April 17, 2026 · 8 min read
AI APIs handle sensitive data — customer queries, proprietary documents, business logic. A leaked API key or a prompt injection attack can expose everything. This guide covers production security practices every developer should implement.
API Key Management
Rule 1: Never Hardcode Keys
# WRONG — key in source code
client = OpenAI(api_key="sk-abc123...")
# RIGHT — environment variable
import os
client = OpenAI(
base_url="https://api.aipower.me/v1",
api_key=os.environ["AIPOWER_API_KEY"],
)
# RIGHT — secrets manager (production)
# RIGHT — secrets manager (production; AWS Secrets Manager via boto3)
import boto3

secrets = boto3.client("secretsmanager")
api_key = secrets.get_secret_value(SecretId="aipower-api-key")["SecretString"]
client = OpenAI(
base_url="https://api.aipower.me/v1",
api_key=api_key,
)

Rule 2: Use Separate Keys for Each Environment
| Environment | Key Prefix | Rate Limit | Budget Cap |
|---|---|---|---|
| Development | dev_ | 10 RPM | $5/month |
| Staging | stg_ | 100 RPM | $50/month |
| Production | prod_ | 600 RPM | $500/month |
| CI/CD | ci_ | 50 RPM | $20/month |
Request Authentication Patterns
# Validate API key on your backend — never expose it to frontend
from flask import Flask, request, jsonify
from openai import OpenAI
import os
app = Flask(__name__)
client = OpenAI(base_url="https://api.aipower.me/v1", api_key=os.environ["AIPOWER_API_KEY"])
@app.route("/api/chat", methods=["POST"])
def chat():
# Authenticate YOUR user first (validate_user_token is your app's own session/JWT check)
user_token = request.headers.get("Authorization")
if not validate_user_token(user_token):
return jsonify({"error": "Unauthorized"}), 401
# Then call AI API from backend
user_message = request.json["message"]
# Sanitize input — basic prompt injection defense
if len(user_message) > 4000:
return jsonify({"error": "Message too long"}), 400
response = client.chat.completions.create(
model="deepseek/deepseek-chat",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": user_message},
],
max_tokens=1000, # Limit output to control costs
)
return jsonify({"response": response.choices[0].message.content})

Prompt Injection Defense
- Separate system and user content — never concatenate user input into system prompts
- Input validation — reject excessively long inputs, strip control characters
- Output filtering — check AI responses before showing to users
- Use structured output — response_format={"type": "json_object"} constrains responses
- Rate limit per user — prevent abuse from individual accounts
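The input-validation bullets above can be sketched as a small helper. The length cap matches the Flask example earlier; the character rules are illustrative hygiene, not a complete injection defense:

```python
import re

MAX_INPUT_CHARS = 4000  # illustrative cap, matching the backend example above

def sanitize_user_message(text: str) -> str:
    """Basic input hygiene: reject oversized input, strip control characters."""
    if len(text) > MAX_INPUT_CHARS:
        raise ValueError("Message too long")
    # Remove ASCII control characters except tab, newline, and carriage return
    return re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f\x7f]", "", text)
```

Run this on the backend before the message reaches the model; client-side checks are trivially bypassed.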
Data Privacy Checklist
- Check data retention policies — AIPower does not store request/response data
- Redact PII before sending — strip names, emails, phone numbers from prompts
- Use the minimum context needed — don't send entire documents when a paragraph suffices
- Implement audit logging — log which users made which requests (without logging full content)
- Review model provider terms — some providers train on your data unless you opt out
- Encrypt API keys at rest — use vault solutions, not plaintext config files
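A minimal sketch of the "redact PII before sending" item, using regex patterns for emails and US-style phone numbers. These patterns are assumptions for illustration — they will miss many formats, and production systems often use a dedicated PII-detection library instead:

```python
import re

# Illustrative patterns — not exhaustive
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact_pii(text: str) -> str:
    """Replace emails and phone numbers with placeholders before the prompt leaves your backend."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text
```

Redacting with placeholder tokens (rather than deleting) keeps the prompt readable for the model while keeping the raw identifiers out of third-party hands.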
Security Comparison: API Gateways
| Feature | AIPower | Direct Provider |
|---|---|---|
| Data retention | No storage | Varies (30 days default on OpenAI) |
| Key rotation | Dashboard — instant | Provider-specific |
| Usage monitoring | Real-time dashboard | Delayed (hours) |
| Budget caps | Per-key limits | Account-level only |
| Single key for all models | Yes | No (one per provider) |
Secure your AI integrations from day one. Start at aipower.me — no data retention, instant key rotation, real-time monitoring. 50 free API calls to get started.