Qwen3 Coder

Qwen3 Coder is available through BatchIn's public live catalog.

Public model detail · Available · Dense Transformer

Params

32B

Context

131K

Max Output

N/A

License

Apache-2.0

TTFT

220ms

Throughput

94 tok/s
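
The TTFT and throughput figures above can be combined into a rough wall-clock latency estimate for a completion: time to first token plus generated tokens divided by decode rate. A minimal back-of-envelope sketch (these are catalog figures, not guaranteed SLAs):

```python
# Rough end-to-end latency: time-to-first-token plus generation
# time at the quoted decode throughput.
TTFT_S = 0.220   # 220 ms time to first token (from the catalog)
TOK_PER_S = 94   # quoted decode throughput

def estimated_latency_s(output_tokens: int) -> float:
    """Back-of-envelope wall-clock time for one completion."""
    return TTFT_S + output_tokens / TOK_PER_S

# A 500-token answer works out to roughly 5.5 seconds.
print(f"{estimated_latency_s(500):.1f}s")
```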

Why pick it

  • Live backend route with verified pricing and billing.
  • Public catalog entry generated from the backend model list.

Pricing

| Tier     | Public (input / output) | Cached | Price source           | Note                                                                        |
|----------|-------------------------|--------|------------------------|-----------------------------------------------------------------------------|
| Realtime | $0.67 / $0.67           | N/A    | BatchIn runtime catalog | Public price reflects the runtime catalog without claimed savings comparisons. |
| Batch    | $0.50 / $0.50           | N/A    | BatchIn runtime catalog | Batch public pricing follows the same runtime source.                        |
Live relay pricing pulled from the backend catalog.
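
For budgeting, the realtime and batch rates above can be compared directly. A small sketch, assuming the listed prices are USD per million tokens for input and output respectively (the catalog does not state the unit explicitly, so treat the numbers as illustrative):

```python
# Hypothetical cost comparison. Assumes the table's prices are
# USD per 1M tokens (input / output) -- an assumption, not a
# documented unit.
REALTIME_PER_M = 0.67
BATCH_PER_M = 0.50

def job_cost(input_toks: int, output_toks: int, per_m: float) -> float:
    """Total cost when input and output are billed at the same rate."""
    return (input_toks + output_toks) / 1_000_000 * per_m

# Example workload: 10M input tokens, 2M output tokens.
rt = job_cost(10_000_000, 2_000_000, REALTIME_PER_M)
ba = job_cost(10_000_000, 2_000_000, BATCH_PER_M)
print(f"realtime ${rt:.2f} vs batch ${ba:.2f}")
# prints: realtime $8.04 vs batch $6.00
```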

Quick start

OpenAI-compatible surface. Swap the base URL and ship.

Python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.luminapath.tech/v1",
    api_key=os.environ["BATCHIN_API_KEY"],  # read the key from the environment
)

resp = client.chat.completions.create(
    model="qwen3-coder",
    messages=[{"role": "user", "content": "Summarize why this model is a fit for my workload"}]
)

print(resp.choices[0].message.content)
JavaScript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.luminapath.tech/v1",
  apiKey: process.env.BATCHIN_API_KEY,
});

const resp = await client.chat.completions.create({
  model: "qwen3-coder",
  messages: [{ role: "user", content: "Summarize why this model is a fit for my workload" }],
});

console.log(resp.choices[0]?.message?.content);
cURL
curl https://api.luminapath.tech/v1/chat/completions \
  -H "Authorization: Bearer ***" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3-coder",
    "messages": [{"role":"user","content":"Summarize why this model is a fit for my workload"}]
  }'

Specs

Architecture

Dense Transformer

Vendor group

Qwen

Context window

131K

Max output

N/A

Best for

qwen, available
