DeepSeek

deepseek-v3

DeepSeek V3

Stable general-purpose DeepSeek route for large-scale chat and batch workloads.

Public model detail

  • Architecture: Dense Transformer
  • Params: 671B
  • Context: 160K
  • Max Output: 64K
  • License: MIT
  • TTFT: 160ms
  • Throughput: 120 tok/s

Why pick it

  • Cheap enough for bulk inference
  • MIT licensed upstream

Pricing

| Tier     | Standard      | Cached | SiliconFlow   | Savings |
|----------|---------------|--------|---------------|---------|
| Realtime | $0.08 / $0.28 | $0.028 | $0.27 / $1.00 | 70%     |
| Batch    | $0.04 / $0.14 | $0.028 | $0.27 / $1.00 | 70%     |
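For capacity planning, the table above translates into a quick cost estimate. A minimal sketch, assuming the Standard and Batch figures are quoted per 1M input / output tokens (the common convention, not stated on this page):

```python
# Hypothetical cost estimator. Assumes prices are per 1M tokens,
# quoted as "input / output" -- an assumption, not confirmed here.

def estimate_cost(input_tokens: int, output_tokens: int,
                  in_per_m: float, out_per_m: float) -> float:
    """Dollar cost of a job at the given per-1M-token rates."""
    return input_tokens / 1e6 * in_per_m + output_tokens / 1e6 * out_per_m

# 50M input / 5M output tokens on the Batch tier ($0.04 / $0.14):
batch = estimate_cost(50_000_000, 5_000_000, 0.04, 0.14)
# Same job on the Realtime tier ($0.08 / $0.28):
realtime = estimate_cost(50_000_000, 5_000_000, 0.08, 0.28)
print(f"batch=${batch:.2f} realtime=${realtime:.2f}")  # batch=$2.70 realtime=$5.40
```

At these rates the Batch tier runs the same job for exactly half the Realtime price, which is why it is the default choice for offline bulk inference.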

Quick start

OpenAI-compatible surface. Swap the base URL and ship.

Python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.luminapath.tech/v1",
    api_key=os.environ["BATCHIN_API_KEY"],
)

resp = client.chat.completions.create(
    model="deepseek-v3",
    messages=[{"role": "user", "content": "Summarize why this model is a fit for my workload."}]
)

print(resp.choices[0].message.content)
JavaScript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.luminapath.tech/v1",
  apiKey: process.env.BATCHIN_API_KEY,
});

const resp = await client.chat.completions.create({
  model: "deepseek-v3",
  messages: [{ role: "user", content: "Summarize why this model is a fit for my workload." }],
});

console.log(resp.choices[0]?.message?.content);
cURL
curl https://api.luminapath.tech/v1/chat/completions \
  -H "Authorization: Bearer $BATCHIN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-v3",
    "messages": [{"role":"user","content":"Summarize why this model is a fit for my workload."}]
  }'
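Bulk workloads inevitably hit transient rate-limit or server errors, so it is worth wrapping the calls above in a retry helper. This is a generic client-side sketch, not a feature of this API; the function name and defaults are illustrative.

```python
import random
import time

def with_retries(call, max_attempts=5, base_delay=1.0):
    """Retry a zero-argument callable with exponential backoff and jitter.

    `call` is anything that raises on transient failure, e.g.
    lambda: client.chat.completions.create(...). Names and defaults
    here are illustrative, not part of the provider's API.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            # sleep base_delay * (2^attempt + jitter) before retrying
            time.sleep(base_delay * (2 ** attempt + random.random()))
```

Usage with the Python client from the quick start: `resp = with_retries(lambda: client.chat.completions.create(model="deepseek-v3", messages=messages))`.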

Specs

Architecture

Dense Transformer

Vendor group

DeepSeek

Context window

160K

Max output

64K

Best for

chat
batch
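The limits in the spec table can be enforced client-side before a request is sent. A minimal sketch, assuming prompt and completion tokens share the 160K window (the usual convention, not stated here) and that token counts come from your own tokenizer estimate:

```python
CONTEXT_WINDOW = 160_000  # tokens, from the spec table above
MAX_OUTPUT = 64_000       # tokens, from the spec table above

def plan_max_tokens(prompt_tokens: int, requested_output: int) -> int:
    """Clamp a request so prompt + completion fits the route's limits.

    Assumes the context window covers prompt and completion together;
    helper name and behavior are illustrative, not part of the API.
    """
    if prompt_tokens >= CONTEXT_WINDOW:
        raise ValueError("prompt alone exceeds the context window")
    room = CONTEXT_WINDOW - prompt_tokens
    return min(requested_output, MAX_OUTPUT, room)
```

The result can be passed as `max_tokens` in the chat completion calls above, e.g. a 100K-token prompt leaves room for at most a 60K-token completion.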

Related models

DeepSeek · deepseek-v3.2 · DeepSeek V3.2
Flagship DeepSeek release tuned for strong general reasoning at a very aggressive price point.

DeepSeek · deepseek-v3.1-terminus · DeepSeek V3.1 Terminus
Higher-output DeepSeek route for workflows that need longer structured completions.

StepFun · step-3.5-flash · Step-3.5-Flash
High-traffic StepFun flash model tuned for cheap fast inference and agent loops.

Z.ai · glm-5.1 · GLM-5.1
Open-source coding flagship built for long-horizon autonomous engineering and deep reasoning.