Tutorial

How to Use DeepSeek R1 Reasoning API: Chain-of-Thought Guide

April 17, 2026 · 7 min read

DeepSeek R1 is unique among AI models: it shows its reasoning process. Unlike GPT or Claude, which reason internally, R1 exposes a visible chain-of-thought (CoT) that lets you see exactly how it arrives at an answer. This transparency is invaluable for debugging, verification, and building trust in AI-generated solutions.

What Makes DeepSeek R1 Different?

Most AI models produce a final answer directly. DeepSeek R1 generates a reasoning trace first, then produces the answer. This means you can:

  • Verify logic: See each step the model took to reach its conclusion
  • Debug errors: Identify exactly where reasoning went wrong
  • Build trust: Show users the reasoning behind AI decisions
  • Improve prompts: Understand how the model interprets your instructions

Quick Start: DeepSeek R1 via API

from openai import OpenAI

client = OpenAI(
    base_url="https://api.aipower.me/v1",
    api_key="YOUR_AIPOWER_KEY",
)

# DeepSeek R1 — reasoning model with visible chain-of-thought
response = client.chat.completions.create(
    model="deepseek/deepseek-reasoner",
    messages=[
        {"role": "user", "content": "Prove that there are infinitely many prime numbers."}
    ],
)

message = response.choices[0].message
# DeepSeek's API returns the chain-of-thought in a separate
# `reasoning_content` field; support may vary by gateway
print(getattr(message, "reasoning_content", None))
print(message.content)

R1 for Math and Logic Problems

DeepSeek R1 excels at competition-level math. It scored 97.3% on MATH-500 and 79.8% on AIME 2024, beating GPT-4o on both benchmarks.

# Solve a competition math problem
response = client.chat.completions.create(
    model="deepseek/deepseek-reasoner",
    messages=[{
        "role": "user",
        "content": "Find all positive integers n such that n^2 + 2n + 4 is divisible by 7."
    }],
)
# R1 will show step-by-step: modular arithmetic, case analysis, verification
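Because R1's reasoning is visible, you can also cross-check its conclusion mechanically. A quick brute-force sketch (plain Python, no API call) shows which residues of n mod 7 make n² + 2n + 4 divisible by 7 — since the expression mod 7 depends only on n mod 7, checking residues 0–6 covers all positive integers:

```python
# Brute-force check: for which n (mod 7) is n^2 + 2n + 4 divisible by 7?
residues = [r for r in range(7) if (r * r + 2 * r + 4) % 7 == 0]
print(residues)  # [1, 4] -> n ≡ 1 or 4 (mod 7)
```

If R1's case analysis lands on the same residues, you've verified the answer independently of the model.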

R1 for Code Debugging

buggy_code = """
def binary_search(arr, target):
    left, right = 0, len(arr)
    while left < right:
        mid = (left + right) / 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid
        else:
            right = mid
    return -1
"""

response = client.chat.completions.create(
    model="deepseek/deepseek-reasoner",
    messages=[{
        "role": "user",
        "content": f"Find and fix all bugs in this code. Explain each bug:\n{buggy_code}"
    }],
)
# R1 reasons through each line and should flag the float division
# (`/` instead of `//` makes `mid` a non-integer index) and the
# infinite loop from `left = mid` (should be `left = mid + 1`)
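For reference, here is one corrected version (keeping the half-open `[left, right)` interval convention the original code started with) — useful as ground truth when checking R1's proposed fix:

```python
def binary_search(arr, target):
    # Half-open search interval [left, right)
    left, right = 0, len(arr)
    while left < right:
        mid = (left + right) // 2   # integer division, not /
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1          # advance past mid to avoid an infinite loop
        else:
            right = mid
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))   # 3
print(binary_search([1, 3, 5, 7, 9], 4))   # -1
```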

DeepSeek R1 vs Claude Opus 4.6 for Reasoning

| Feature | DeepSeek R1 | Claude Opus 4.6 |
| --- | --- | --- |
| Chain-of-thought | Visible (in output) | Internal (hidden) |
| MATH-500 | 97.3% | 96.4% |
| AIME 2024 | 79.8% | 83.3% |
| Input cost/M | $0.34 | $7.50 |
| Output cost/M | $0.50 | $37.50 |
| Best for | Transparent reasoning, math | Complex multi-step, nuance |
When to Use R1 vs Claude Opus

  • Use R1 when you need visible reasoning, math/logic tasks, or budget-friendly reasoning ($0.34/M vs $7.50/M)
  • Use Claude Opus for complex multi-domain reasoning, creative problem-solving, or when raw accuracy is paramount
  • Use both: Draft with R1 (cheap), verify critical results with Opus
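The draft-then-verify pattern can be sketched as a small helper. This is an illustrative sketch, not a fixed recipe: the Opus model ID (`anthropic/claude-opus-4.6`) and the review prompt are assumptions you would adapt to your gateway's model catalog:

```python
def draft_and_verify(client, prompt,
                     draft_model="deepseek/deepseek-reasoner",
                     review_model="anthropic/claude-opus-4.6"):  # assumed model ID
    """Draft cheaply with R1, then have a stronger model review the result."""
    draft = client.chat.completions.create(
        model=draft_model,
        messages=[{"role": "user", "content": prompt}],
    )
    draft_text = draft.choices[0].message.content
    review = client.chat.completions.create(
        model=review_model,
        messages=[{
            "role": "user",
            "content": f"Check this solution for errors and state PASS or FAIL:\n\n{draft_text}",
        }],
    )
    return draft_text, review.choices[0].message.content
```

Called with the `client` from the quick start, it returns the R1 draft alongside the reviewer's verdict, so you only pay Opus prices for the verification pass.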

Try DeepSeek R1 with 50 free API calls at aipower.me. See the chain-of-thought reasoning for yourself.

Ready to try?

50 free API calls. 16 models. One API key.

Create free account