Public templates

Start with a real workload, then refine it in the public Playground

Each template comes with a copy-paste starter snippet plus a prefilled public Playground prompt so you can test the workload before you create an account or move traffic onto the live gateway.

JavaScript · Suggested model: GLM-5.1

Customer support chatbot

Route product questions into a short answer, a confidence label, and a clear next action.

Best when you need an operator-friendly answer shape instead of free-form chat.

Playground starter prompt

A customer asks: "Our sync job is delayed by 17 minutes. What should we check first?" Reply in JSON with answer, confidence, and next_action.

JavaScript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.BATCHIN_API_KEY,
  baseURL: "https://api.luminapath.tech/v1",
});

const response = await client.chat.completions.create({
  model: "glm-5.1",
  messages: [
    {
      role: "system",
      content: "You are a support copilot. Return JSON with answer, confidence, and next_action."
    },
    {
      role: "user",
      content: "Our sync job is delayed by 17 minutes. What should we check first?"
    }
  ]
});

console.log(response.choices[0].message.content);
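Once the model returns the answer shape above, your operator tooling needs to parse it and route on the confidence label. A minimal sketch, using a hard-coded sample reply rather than a live gateway response, with the answer, confidence, and next_action fields from this template:

```python
import json

# Illustrative reply in the template's answer shape (not a live response).
raw = '{"answer": "Check the queue backlog first.", "confidence": "medium", "next_action": "Inspect the sync worker logs."}'

reply = json.loads(raw)

# Route on the confidence label before surfacing the answer to an operator.
if reply["confidence"] == "low":
    print("Escalate:", reply["next_action"])
else:
    print(reply["answer"])
```

In production you would wrap the `json.loads` call in error handling, since the model can occasionally return malformed JSON unless you enforce a structured output mode.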
Python · Suggested model: DeepSeek V3.2

Document summary

Turn a long note, transcript, or PDF extract into a short brief with owners and follow-ups.

Useful when you want a repeatable summary format before moving the job into batch.

Playground starter prompt

Summarize this meeting into three sections: decisions, owners, and open risks. Keep it under 120 words.

Python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.luminapath.tech/v1",
)

response = client.chat.completions.create(
    model="deepseek-v3.2",
    messages=[
        {
            "role": "system",
            "content": "Summarize into decisions, owners, and open risks."
        },
        {
            "role": "user",
            "content": "Paste the transcript or document extract here."
        },
    ],
)

print(response.choices[0].message.content)
cURL · Suggested model: DeepSeek R1

Code review

Review a patch for bugs, regressions, security issues, and missing tests before it lands.

Good for teams that want a consistent review rubric instead of vague “looks good” feedback.

Playground starter prompt

Review this patch for bugs, regressions, security issues, and missing tests. Prioritize the most severe findings first.

cURL
curl https://api.luminapath.tech/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-r1",
    "messages": [
      {
        "role": "system",
        "content": "Review code for bugs, regressions, security issues, and missing tests."
      },
      {
        "role": "user",
        "content": "Paste the diff or code sample here."
      }
    ]
  }'
JavaScript · Suggested model: Qwen3.5-397B-A17B

Translation workflow

Translate product or support text while preserving tone, terminology, and output structure.

Useful when bilingual teams need one prompt that keeps brand language steady.

Playground starter prompt

Translate this support reply into Japanese. Preserve bullet structure and keep the tone calm and direct.

JavaScript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.BATCHIN_API_KEY,
  baseURL: "https://api.luminapath.tech/v1",
});

const response = await client.chat.completions.create({
  model: "qwen3.5-397b",
  messages: [
    {
      role: "system",
      content: "Translate accurately while preserving tone, structure, and terminology."
    },
    {
      role: "user",
      content: "Translate this support reply into Japanese."
    }
  ]
});

console.log(response.choices[0].message.content);
Python · Suggested model: Qwen3-32B

Embeddings and retrieval

Generate embeddings for your corpus, then keep retrieval metadata consistent before you wire in a full RAG stack.

The code path uses the embeddings endpoint; the Playground link lets you test the answer format you want on top of retrieved chunks.

Playground starter prompt

You have three retrieved chunks about an outage. Draft a final answer that cites chunk IDs and lists missing evidence before escalation.

Python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.luminapath.tech/v1",
)

response = client.embeddings.create(
    model="bge-m3",
    input=[
        "Batch job failed after the queue delay exceeded 90 seconds.",
        "Customer traffic moved back to the shared lane during rollback."
    ],
)

print(len(response.data))
print(response.data[0].embedding[:8])
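After you have vectors for your corpus, retrieval reduces to ranking chunks by similarity to a query vector. A minimal sketch of nearest-chunk lookup by cosine similarity; the toy vectors and chunk IDs stand in for the `response.data[i].embedding` values the endpoint returns:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy vectors standing in for real embeddings from the endpoint above.
query = [0.1, 0.8, 0.3]
corpus = {
    "chunk-01": [0.1, 0.7, 0.4],
    "chunk-02": [0.9, 0.1, 0.0],
}

# Keep chunk IDs alongside the vectors so retrieved text stays citable.
best = max(corpus, key=lambda cid: cosine(query, corpus[cid]))
print(best)
```

Keeping the chunk ID next to each vector is what makes the Playground prompt above work: the final answer can cite chunk IDs only if they travel with the embeddings through retrieval.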