Why pick it
- Low-cost baseline model
- Good fit for high-volume APIs
OpenAI OSS
gpt-oss-20b
Compact OpenAI open-weight option for fast chat, routing, and lower-cost product features.
Params: 20B
Context: 131K
Max output: 16K
License: Apache 2.0
TTFT: 160ms
Throughput: 120 tok/s
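TTFT and throughput figures like these vary with prompt length, output length, and load, so it is worth re-measuring on your own traffic. Below is a minimal probe sketch, assuming the OpenAI Python SDK pointed at the base URL from the quick start; chunk arrivals stand in for exact token counts, so treat the throughput number as an approximation.

```python
import os
import time


def stream_stats(start: float, arrival_times: list) -> tuple:
    """TTFT (ms) and decode throughput (chunks/s) from per-chunk arrival timestamps.

    Assumes at least one chunk arrived. Chunks approximate tokens for
    chat-completion streams, so this slightly underestimates tokens/s.
    """
    ttft_ms = (arrival_times[0] - start) * 1000.0
    decode_window = arrival_times[-1] - arrival_times[0]
    rate = (len(arrival_times) - 1) / decode_window if decode_window > 0 else float("inf")
    return ttft_ms, rate


# Live probe -- only runs when a key is configured in the environment.
if os.environ.get("BATCHIN_API_KEY"):
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.luminapath.tech/v1",
        api_key=os.environ["BATCHIN_API_KEY"],
    )
    t0 = time.perf_counter()
    arrivals = []
    stream = client.chat.completions.create(
        model="gpt-oss-20b",
        messages=[{"role": "user", "content": "Count to ten."}],
        stream=True,
    )
    for chunk in stream:
        # Record an arrival only for chunks that carry content tokens.
        if chunk.choices and chunk.choices[0].delta.content:
            arrivals.append(time.perf_counter())
    ttft_ms, rate = stream_stats(t0, arrivals)
    print(f"TTFT: {ttft_ms:.0f} ms, throughput: {rate:.0f} tok/s")
```

Run it a few times at your typical prompt size; single-shot numbers are noisy.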
Quick start
OpenAI-compatible surface. Swap the base URL and ship.
Python

from openai import OpenAI
import os

client = OpenAI(
    base_url="https://api.luminapath.tech/v1",
    api_key=os.environ["BATCHIN_API_KEY"],
)
resp = client.chat.completions.create(
    model="gpt-oss-20b",
    messages=[{"role": "user", "content": "Summarize why this model is a fit for my workload."}],
)
print(resp.choices[0].message.content)

TypeScript

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.luminapath.tech/v1",
  apiKey: process.env.BATCHIN_API_KEY,
});
const resp = await client.chat.completions.create({
  model: "gpt-oss-20b",
  messages: [{ role: "user", content: "Summarize why this model is a fit for my workload." }],
});
console.log(resp.choices[0]?.message?.content);

cURL

curl https://api.luminapath.tech/v1/chat/completions \
  -H "Authorization: Bearer $BATCHIN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-oss-20b",
    "messages": [{"role":"user","content":"Summarize why this model is a fit for my workload."}]
  }'

Specs
Architecture: Mixture-of-Experts Transformer
Vendor group: OpenAI
Context window: 131K
Max output: 16K
Best for: high-volume chat, routing layers, and cost-sensitive product features
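For the high-volume API traffic this card targets, requests are usually fanned out concurrently with a cap on in-flight calls. A minimal sketch of bounded fan-out with asyncio follows; the AsyncOpenAI usage in the trailing comment is an assumption based on the OpenAI SDK, shown for illustration only.

```python
import asyncio


async def bounded_gather(coros, limit: int):
    """Await all coroutines, keeping at most `limit` in flight at once.

    Results come back in input order, like asyncio.gather.
    """
    sem = asyncio.Semaphore(limit)

    async def run(coro):
        async with sem:
            return await coro

    return await asyncio.gather(*(run(c) for c in coros))


# Hypothetical usage with the async OpenAI client (assumption, not run here):
# client = AsyncOpenAI(base_url="https://api.luminapath.tech/v1", api_key=...)
# replies = asyncio.run(bounded_gather(
#     [client.chat.completions.create(model="gpt-oss-20b", messages=m) for m in batches],
#     limit=8,
# ))
```

The semaphore bounds concurrency without chunking the batch, so slow requests don't stall unrelated ones.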
Related models
OpenAI OSS
gpt-oss-120b
OpenAI open-weight MoE with pragmatic pricing for general chat, agents, and product workflows.
View detail
Alibaba
qwen3.5-9b
Compact long-context Qwen option for cost-sensitive API traffic and routing layers.
View detail
StepFun
step-3.5-flash
High-traffic StepFun flash model tuned for cheap fast inference and agent loops.
View detail
Z.ai
glm-5.1
Open-source coding flagship built for long-horizon autonomous engineering and deep reasoning.
View detail