Batch Processing

Queue millions of requests with visible priority tiers and webhook-ready delivery.

BatchIn’s batch path is built for document pipelines, offline enrichment, and long-running inference jobs where price and throughput matter more than interactive latency.

Max tasks: 10,000 per batch
Fill tier: best spare-capacity rate
Retention: 72-hour results window
Delivery: webhook supported
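The 72-hour retention window means downstream workers must fetch results before they expire. A minimal sketch of computing that deadline, assuming only the retention figure above (no BatchIn API calls are shown, since the endpoint shapes are not documented here):

```python
from datetime import datetime, timedelta, timezone

# Results window from the table above.
RETENTION = timedelta(hours=72)

def results_expire_at(completed_at: datetime) -> datetime:
    """Latest moment a batch's results can still be downloaded."""
    return completed_at + RETENTION

completed = datetime(2024, 5, 1, 12, 0, tzinfo=timezone.utc)
print(results_expire_at(completed).isoformat())  # 2024-05-04T12:00:00+00:00
```

A scheduler that polls for finished batches can use this deadline to prioritize downloads that are close to expiring.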

A pricing model operators can reason about

Priority is not hidden behind a scheduler. You choose the tradeoff between urgency and price before the job enters the queue.

  • High priority for the shortest queue path.
  • Low priority for discounted background throughput.
  • Fill priority for the cheapest spare-capacity work when urgency is minimal.
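Because the tier is chosen up front, it can be an explicit, validated field in the submission payload. A sketch under assumed names (the `priority` field and the tier strings `high`/`low`/`fill` are illustrative, not a documented request schema):

```python
import json

# Hypothetical tier names mirroring the tradeoffs listed above.
TIERS = {"high", "low", "fill"}

def batch_request(tasks_file: str, priority: str) -> str:
    """Build a batch submission payload with an explicit priority tier."""
    if priority not in TIERS:
        raise ValueError(f"unknown priority tier: {priority!r}")
    return json.dumps({"input_file": tasks_file, "priority": priority})

print(batch_request("tasks.jsonl", "fill"))
# {"input_file": "tasks.jsonl", "priority": "fill"}
```

Rejecting unknown tiers at submission time keeps the urgency/price decision visible in code review rather than buried in scheduler behavior.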

Built for pipeline integration

The batch surface matches the rest of the platform, so you do not need a second ingestion system just to handle offline jobs.

  • Mixed-model JSONL tasks for routing different records to different models.
  • Webhook callbacks plus downloadable results for downstream workers.
  • Audit and usage logging aligned with the online API.
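Mixed-model routing means each JSONL line is a self-contained task that can name its own model. A sketch of building such a file, assuming a per-record `model` key and a `custom_id` for matching results back to inputs (both are illustrative field names, not a documented task schema):

```python
import json

# Each record routes to its own model; custom_id lets downstream
# workers join webhook/result output back to the original input.
records = [
    {"custom_id": "doc-1", "model": "small-model", "input": "short memo"},
    {"custom_id": "doc-2", "model": "large-model", "input": "long contract body"},
]

jsonl = "\n".join(json.dumps(r) for r in records)
print(jsonl)

# Round-trip check: every line parses as one independent task.
parsed = [json.loads(line) for line in jsonl.splitlines()]
assert {r["model"] for r in parsed} == {"small-model", "large-model"}
```

One file, two models: routing lives in the data, so the same ingestion path serves every offline job.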