Z.ai

GLM-5.1 (model ID: glm-5.1)

Open-source coding flagship built for long-horizon autonomous engineering and deep reasoning.

SWE-Bench Pro #1 · Public model detail · MoE Transformer

Params: 754B MoE
Context: 198K
Max Output: 128K
License: MIT
TTFT: 520ms
Throughput: 42 tok/s
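The TTFT and throughput figures above give a quick way to estimate wall-clock latency for a response. A back-of-envelope sketch, treating both published numbers as steady-state averages (real latency varies with load, prompt length, and region):

```python
# Rough end-to-end latency estimate from the published figures:
# TTFT 520 ms, decode throughput 42 tok/s. Treat as an approximation,
# not a guarantee.

TTFT_S = 0.520          # time to first token, in seconds
DECODE_TOK_PER_S = 42   # steady-state output throughput

def estimated_latency_s(output_tokens: int) -> float:
    """Approximate wall-clock time to stream a full response."""
    return TTFT_S + output_tokens / DECODE_TOK_PER_S

# A 1,000-token answer works out to roughly 24 seconds end to end.
print(f"{estimated_latency_s(1000):.1f}s")
```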

Why pick it

  • 8-hour autonomous coding loops
  • 50%+ cheaper than SiliconFlow

Pricing

| Tier     | Standard (input / output) | Cached | SiliconFlow (input / output) | Savings |
|----------|---------------------------|--------|------------------------------|---------|
| Realtime | $0.50 / $1.50             | $0.175 | $1.40 / $4.40                | 64%     |
| Batch    | $0.25 / $0.75             | $0.175 | $1.40 / $4.40                | 82%     |
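To see what the table means for a real workload, here is a minimal cost-comparison sketch. It assumes the rates are USD per 1M tokens (the usual convention for this kind of pricing table; confirm units in the official pricing docs):

```python
# Back-of-envelope monthly cost comparison using the pricing table above.
# Assumption: rates are USD per 1M tokens (input, output).

PRICES = {
    "realtime":    (0.50, 1.50),
    "batch":       (0.25, 0.75),
    "siliconflow": (1.40, 4.40),
}

def cost_usd(tier: str, in_tokens: int, out_tokens: int) -> float:
    """Total cost for a given token volume at a given tier's rates."""
    p_in, p_out = PRICES[tier]
    return (in_tokens * p_in + out_tokens * p_out) / 1_000_000

# Example workload: 10M input / 2M output tokens per month.
for tier in PRICES:
    print(f"{tier}: ${cost_usd(tier, 10_000_000, 2_000_000):.2f}")
```

At this volume, realtime comes to $8.00 and batch to $4.00, versus $22.80 at the SiliconFlow rates — consistent with the savings column above.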

Quick start

OpenAI-compatible surface. Swap the base URL and ship.

Python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.luminapath.tech/v1",
    api_key=os.environ["BATCHIN_API_KEY"],  # set BATCHIN_API_KEY in your environment
)

resp = client.chat.completions.create(
    model="glm-5.1",
    messages=[{"role": "user", "content": "Summarize why this model is a fit for my workload."}]
)

print(resp.choices[0].message.content)
JavaScript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.luminapath.tech/v1",
  apiKey: process.env.BATCHIN_API_KEY,
});

const resp = await client.chat.completions.create({
  model: "glm-5.1",
  messages: [{ role: "user", content: "Summarize why this model is a fit for my workload." }],
});

console.log(resp.choices[0]?.message?.content);
cURL
curl https://api.luminapath.tech/v1/chat/completions \
  -H "Authorization: Bearer $BATCHIN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "glm-5.1",
    "messages": [{"role":"user","content":"Summarize why this model is a fit for my workload."}]
  }'

Specs

Architecture: MoE Transformer
Vendor group: Z.ai
Context window: 198K
Max output: 128K
Best for: reasoning, coding

Related models

Z.ai · GLM-5 (glm-5)
Lower-cost GLM route for production reasoning, agents, and long-context workflows.

DeepSeek · DeepSeek R1 (deepseek-r1)
Heavy reasoning model for difficult planning, math, research, and multi-step analysis.

Alibaba · Qwen3.5-397B-A17B (qwen3.5-397b)
Top-tier Qwen MoE model for multilingual reasoning, coding, and large-context assistants.

Mistral · Devstral 2 (devstral-2)
Coding-oriented Mistral route for engineering copilots, refactors, and repo-level workflows.