Qwen3.5-35B-A3B (qwen3.5-35b)
Vendor: Qwen (Alibaba)

Lower-cost MoE Qwen route for product copilots and high-volume assistant traffic.

Public model detail

Architecture: MoE Transformer
Params: 35B / 3B active
Context: 256K
Max output: 32K
License: Apache 2.0
TTFT: 220ms
Throughput: 94 tok/s

Why pick it

  • Sensible step down from the 122B and 397B Qwen routes when cost matters
  • Long context at a smaller serving footprint

Pricing

Tier      Standard         Cached    SiliconFlow   Savings
Realtime  $0.06 / $0.45    $0.021    N/A           N/A
Batch     $0.03 / $0.23    $0.021    N/A           N/A
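Comparing tiers for a given workload is simple arithmetic. The sketch below assumes the listed rates are USD per million tokens, with the paired figures being input / output; the table does not state its units, so treat the numbers as illustrative.

```python
# Hypothetical tier comparison for qwen3.5-35b.
# Assumption: rates are USD per 1M tokens (input / output);
# the pricing table does not state its units explicitly.

def cost_usd(input_tokens: int, output_tokens: int,
             in_rate: float, out_rate: float) -> float:
    """Total cost given per-million-token input and output rates."""
    return (input_tokens / 1e6) * in_rate + (output_tokens / 1e6) * out_rate

# Example workload: 10M input tokens, 2M output tokens per day.
realtime = cost_usd(10_000_000, 2_000_000, 0.06, 0.45)  # standard realtime
batch = cost_usd(10_000_000, 2_000_000, 0.03, 0.23)     # standard batch

print(f"realtime: ${realtime:.2f}/day")  # $0.60 + $0.90 = $1.50
print(f"batch:    ${batch:.2f}/day")     # $0.30 + $0.46 = $0.76
```

At these assumed rates, routing deferrable traffic through the batch tier roughly halves the bill for the same token volume.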

Quick start

OpenAI-compatible surface. Swap the base URL and ship.

Python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.luminapath.tech/v1",
    api_key=os.environ["BATCHIN_API_KEY"],
)

resp = client.chat.completions.create(
    model="qwen3.5-35b",
    messages=[{"role": "user", "content": "Summarize why this model is a fit for my workload."}]
)

print(resp.choices[0].message.content)

JavaScript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.luminapath.tech/v1",
  apiKey: process.env.BATCHIN_API_KEY,
});

const resp = await client.chat.completions.create({
  model: "qwen3.5-35b",
  messages: [{ role: "user", content: "Summarize why this model is a fit for my workload." }],
});

console.log(resp.choices[0]?.message?.content);

cURL
curl https://api.luminapath.tech/v1/chat/completions \
  -H "Authorization: Bearer $BATCHIN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3.5-35b",
    "messages": [{"role":"user","content":"Summarize why this model is a fit for my workload."}]
  }'
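If pulling in an SDK is overkill, the same chat completions request can be built with the Python standard library. A minimal sketch; the endpoint and BATCHIN_API_KEY env var mirror the examples above, and the request is left unsent so the construction can be inspected first.

```python
import json
import os
import urllib.request

# Build the chat completions request with only the standard library.
# The URL and env var name follow the SDK examples above.
def build_request(prompt: str) -> urllib.request.Request:
    body = json.dumps({
        "model": "qwen3.5-35b",
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        "https://api.luminapath.tech/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ.get('BATCHIN_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("Summarize why this model is a fit for my workload.")
# resp = urllib.request.urlopen(req)  # actually sends the request
# print(json.load(resp)["choices"][0]["message"]["content"])
```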

Specs

Architecture: MoE Transformer
Vendor group: Qwen
Context window: 256K
Max output: 32K

Best for: qwen, moe

Related models

qwen3.5-27b (Qwen3.5-27B, Alibaba)
Lean Qwen route aimed at lower-cost chat, agent routing, and product copilot features.

qwen3.5-122b (Qwen3.5-122B-A10B, Alibaba)
Balanced Qwen MoE for long-context assistants and cost-conscious production routing.

qwen3-32b (Qwen3-32B, Alibaba)
Balanced mid-large Qwen route for general chat, coding, and production assistant workloads.

glm-5.1 (GLM-5.1, Z.ai)
Open-source coding flagship built for long-horizon autonomous engineering and deep reasoning.