Inngest vs Trigger.dev: Which Background Job Engine Should You Use in 2026?
At some point in every SaaS product, you hit the same wall.
A user clicks a button to start a campaign, process an import, send a batch of emails, or kick off an AI pipeline, and the request takes too long to finish inside a normal HTTP handler. You need to run that work somewhere else, track its progress, retry on failure, and not lose state if a server hiccups.
The old answer to this was: spin up a Redis instance, add BullMQ, write a worker process, deploy a separate container, wire up health checks, and manage it all forever.
The new answer is: use a modern background job engine. Specifically, either Inngest or Trigger.dev.
Both tools replace the Redis-plus-queue-plus-worker stack entirely. But they make different architectural bets, serve different deployment models, and shine in different scenarios. This guide explains both, shows you real setup code, and tells you which one to pick for your specific situation.
What Problem Are These Tools Actually Solving
Before comparing them, it helps to understand what a background job engine replaces.
In a traditional setup, running a long or complex background task requires:
| What You Needed | Why It Existed |
|---|---|
| Redis | Queue storage: hold jobs waiting to run |
| BullMQ / Bull | Queue library: enqueue, dequeue, retry logic |
| Worker process | Separate Node.js process that pulls and executes jobs |
| Separate Dockerfile | Container for the worker to run in |
| PM2 or supervisor | Keep the worker alive and restart on crash |
| Custom retry logic | Handle failures, dead letter queues, backoff |
| Dashboard tooling | Bull Board or Arena to visualize queue state |
That is a lot of infrastructure before you have written a single line of business logic.
Modern job engines collapse all of this: your job function lives in your existing codebase, you call a trigger function to enqueue it, and the platform handles storage, retries, scheduling, and observability. No Redis. No worker process. No separate container.
Inngest: The Serverless-First Background Engine
Inngest was built specifically for serverless and edge environments. It works by registering your job functions as HTTP endpoints, so your existing Next.js API route or Express handler becomes the worker. Inngest calls it when a job needs to run.
How Inngest Works
- You define a function with `inngest.createFunction()`
- Inngest registers it via an API route (`/api/inngest`)
- When you want to run a job, you call `inngest.send()` with an event
- Inngest calls your endpoint with the job payload
- Retries, scheduling, and fan-out are all handled by Inngest's cloud
Your code never pulls from a queue. Inngest pushes to your endpoint. This is the key architectural difference that makes it work natively in Vercel, Railway, Fly.io, and any serverless environment.
Inngest Setup (Next.js App Router)
Step 1: Install

```bash
pnpm add inngest
```

Step 2: Create the Inngest client

```typescript
// lib/inngest/client.ts
import { Inngest } from "inngest";

export const inngest = new Inngest({
  id: "your-app-name",
});
```

Step 3: Define a background function
```typescript
// lib/inngest/functions/send-campaign.ts
import { inngest } from "@/lib/inngest/client";
import { resend } from "@/lib/resend";
import { db } from "@/lib/db";

export const sendEmailCampaign = inngest.createFunction(
  {
    id: "send-email-campaign",
    retries: 3,
    // Rate limit: max 10 concurrent executions
    concurrency: { limit: 10 },
  },
  { event: "campaign/send.requested" },
  async ({ event, step }) => {
    const { campaignId, audienceIds } = event.data;

    // Step 1: Fetch campaign data
    const campaign = await step.run("fetch-campaign", async () => {
      return db.campaign.findUnique({ where: { id: campaignId } });
    });
    if (!campaign) throw new Error("Campaign not found");

    // Step 2: Send to each recipient (fan-out)
    const results = await step.run("send-emails", async () => {
      const sends = audienceIds.map((recipientId: string) =>
        resend.emails.send({
          from: "team@yourapp.com",
          to: recipientId,
          subject: campaign.subject,
          html: campaign.htmlBody,
        })
      );
      return Promise.allSettled(sends);
    });

    // Step 3: Update campaign status
    await step.run("update-status", async () => {
      const sent = results.filter((r) => r.status === "fulfilled").length;
      const failed = results.filter((r) => r.status === "rejected").length;
      await db.campaign.update({
        where: { id: campaignId },
        data: { status: "completed", sentCount: sent, failedCount: failed },
      });
    });

    return {
      campaignId,
      sent: results.filter((r) => r.status === "fulfilled").length,
    };
  }
);
```

Step 4: Register the API route
```typescript
// app/api/inngest/route.ts
import { serve } from "inngest/next";
import { inngest } from "@/lib/inngest/client";
import { sendEmailCampaign } from "@/lib/inngest/functions/send-campaign";

export const { GET, POST, PUT } = serve({
  client: inngest,
  functions: [sendEmailCampaign],
});
```

Step 5: Trigger the job from anywhere in your app
```typescript
// app/api/campaigns/[id]/send/route.ts
import { inngest } from "@/lib/inngest/client";
import { NextResponse } from "next/server";

export async function POST(
  req: Request,
  { params }: { params: { id: string } }
) {
  const { audienceIds } = await req.json();

  await inngest.send({
    name: "campaign/send.requested",
    data: {
      campaignId: params.id,
      audienceIds,
    },
  });

  return NextResponse.json({ queued: true });
}
```

That is the full integration. No Redis. No worker process. The campaign runs in the background, retries on failure, and Inngest's dashboard shows you every step.
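Scheduled jobs use the same `createFunction` API: a cron trigger simply replaces the event trigger. A minimal sketch, with a hypothetical `daily-digest` job (the id and step names are illustrative, not from the setup above):

```typescript
// lib/inngest/functions/daily-digest.ts (hypothetical scheduled job)
import { inngest } from "@/lib/inngest/client";

export const dailyDigest = inngest.createFunction(
  { id: "daily-digest" },
  { cron: "0 9 * * *" }, // runs every day at 09:00 UTC
  async ({ step }) => {
    // Steps retry independently, exactly like event-driven functions.
    await step.run("send-digest", async () => {
      // ...build and send the digest...
    });
  }
);
```

Register it in the same `serve()` call as your other functions and it shows up in the dashboard alongside event-driven runs.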
Trigger.dev: The Container-Native Job Platform
Trigger.dev takes a different approach. Instead of receiving HTTP calls, it runs a persistent worker process that connects to Trigger.dev's cloud over a long-lived connection.
Your jobs run inside your own infrastructure, your Docker container, your VPS, your Kubernetes cluster. Trigger.dev orchestrates scheduling, retry, and monitoring, but the actual execution happens on your machines.
This model has two significant advantages over Inngest's HTTP-push approach:
- No cold start: worker is always running, no serverless latency
- AI agent streaming: long-running streaming LLM calls work naturally in a persistent process, whereas serverless functions have timeout limits
How Trigger.dev Works
- You run the Trigger.dev worker SDK in a separate process (or container)
- The worker connects to Trigger.dev's cloud and registers your job functions
- When you trigger a job, Trigger.dev instructs the worker to execute it
- The worker runs the code and streams logs/status back to Trigger.dev
Trigger.dev Setup
Step 1: Install

```bash
pnpm add @trigger.dev/sdk@v3
```

Step 2: Create a task

```typescript
// trigger/send-campaign.ts
import { task, logger } from "@trigger.dev/sdk/v3";
import { resend } from "@/lib/resend";
import { db } from "@/lib/db";

export const sendEmailCampaign = task({
  id: "send-email-campaign",
  // Max duration: 5 minutes (no serverless timeout)
  maxDuration: 300,
  retry: {
    maxAttempts: 3,
    factor: 2,
    minTimeoutInMs: 1000,
  },
  run: async (payload: { campaignId: string; audienceIds: string[] }) => {
    const { campaignId, audienceIds } = payload;

    logger.info("Starting campaign send", { campaignId });

    const campaign = await db.campaign.findUnique({
      where: { id: campaignId },
    });
    if (!campaign) throw new Error("Campaign not found");

    let sent = 0;
    let failed = 0;

    for (const recipientId of audienceIds) {
      try {
        await resend.emails.send({
          from: "team@yourapp.com",
          to: recipientId,
          subject: campaign.subject,
          html: campaign.htmlBody,
        });
        sent++;
      } catch (err) {
        logger.error("Failed to send to recipient", { recipientId, err });
        failed++;
      }
    }

    await db.campaign.update({
      where: { id: campaignId },
      data: { status: "completed", sentCount: sent, failedCount: failed },
    });

    return { campaignId, sent, failed };
  },
});
```
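The retry config above (`maxAttempts: 3, factor: 2, minTimeoutInMs: 1000`) describes exponential backoff between attempts. As a quick illustration of the delays that kind of config implies, here is a small helper of my own (not SDK code; the platform's exact jitter and cap behavior may differ):

```typescript
// Delays implied by a { maxAttempts, factor, minTimeoutInMs } retry config:
// the first retry waits minTimeoutInMs, and each subsequent retry
// multiplies the previous delay by `factor`.
function backoffDelays(
  maxAttempts: number,
  factor: number,
  minTimeoutInMs: number
): number[] {
  return Array.from(
    { length: maxAttempts },
    (_, attempt) => minTimeoutInMs * Math.pow(factor, attempt)
  );
}

console.log(backoffDelays(3, 2, 1000)); // → [1000, 2000, 4000]
```

In other words, three attempts at `factor: 2` spread roughly seven seconds of waiting across the retries, which is usually plenty for transient email-provider errors.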
Step 3: Trigger from your API route

```typescript
// app/api/campaigns/[id]/send/route.ts
import { tasks } from "@trigger.dev/sdk/v3";
// Import the task type only, so the task code is not bundled into the route
import type { sendEmailCampaign } from "@/trigger/send-campaign";
import { NextResponse } from "next/server";

export async function POST(
  req: Request,
  { params }: { params: { id: string } }
) {
  const { audienceIds } = await req.json();

  const handle = await tasks.trigger<typeof sendEmailCampaign>(
    "send-email-campaign",
    {
      campaignId: params.id,
      audienceIds,
    }
  );

  return NextResponse.json({ runId: handle.id });
}
```
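The `runId` returned to the client can be used to check on the run later. A sketch of a hypothetical status endpoint using the SDK's `runs.retrieve` (the route path and response shape are my assumptions; check the current SDK docs for the exact run object fields):

```typescript
// app/api/runs/[id]/status/route.ts (hypothetical route)
import { runs } from "@trigger.dev/sdk/v3";
import { NextResponse } from "next/server";

export async function GET(
  req: Request,
  { params }: { params: { id: string } }
) {
  // Fetch the run's current state from Trigger.dev's API.
  const run = await runs.retrieve(params.id);
  return NextResponse.json({ status: run.status, output: run.output });
}
```

This pairs naturally with the polling pattern shown later in this guide: the client polls this endpoint until the run reaches a terminal status.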
Step 4: Run the worker

```bash
# Development
npx trigger.dev@latest dev

# Production (Dockerfile)
npx trigger.dev@latest deploy
```

Side-by-Side Comparison
| | Inngest | Trigger.dev |
|---|---|---|
| Execution model | HTTP push to your endpoint | Persistent worker process |
| Vercel / serverless | Native fit, no extras needed | Needs separate worker container |
| Cold start | Possible (serverless) | None (always-running worker) |
| Long-running jobs | Limited by serverless timeout | Up to hours (configurable) |
| AI agent / streaming | Limited | Native support |
| Step-level retries | Yes (core feature) | Yes |
| Concurrency control | Built-in | Built-in |
| Scheduling (cron) | Yes | Yes |
| Fan-out / batch | Yes | Yes |
| Self-hosting | Not available | Yes (open-source) |
| Free tier | 50k function runs/month | 250k task runs/month |
| Dashboard | Cloud dashboard | Cloud dashboard + self-hosted |
| Setup complexity | Low (one API route) | Medium (worker process required) |
What You Are NOT Adding (And Why That Is the Right Call)
This section addresses a common question: why not just use the traditional stack?
No Redis
Both Inngest and Trigger.dev replace the need for a queue store entirely. Redis is excellent software, but a Redis instance, even a managed one, is still something you have to operate. It adds cost, a connection to secure, and one more thing that can go down. If your only reason for Redis is queueing, these tools eliminate that dependency.
No BullMQ
BullMQ is a well-designed library, but it is low-level. You write retry logic, dead letter handling, concurrency limits, and job event listeners yourself. Inngest and Trigger.dev give you all of that as configuration, not code. For a product team moving fast, that difference is meaningful: you spend the time on features, not infrastructure.
No Worker Process (with Inngest)
With Inngest specifically, there is no separate worker to run. Your Next.js API route IS the worker. This means no separate Dockerfile, no PM2 config, no npm run worker process to keep alive. One deployment, one codebase.
With Trigger.dev, you do need a worker, but that worker is just running trigger dev or trigger deploy — there is no Redis dependency, no custom retry logic, and no BullMQ configuration.
No Socket.io or WebSockets (for Most Cases)
For tracking the status of a background job like campaign sending progress, import status or AI agent output, you do not need WebSockets. Server-sent events or simple polling with React Query every 2-3 seconds is completely sufficient and far simpler to implement.
```typescript
// Simple polling with TanStack Query
import { useQuery } from "@tanstack/react-query";

export function useCampaignStatus(campaignId: string) {
  return useQuery({
    queryKey: ["campaign", campaignId, "status"],
    queryFn: () =>
      fetch(`/api/campaigns/${campaignId}/status`).then((r) => r.json()),
    // Poll every 3 seconds while the campaign is running
    refetchInterval: (query) => {
      const status = query.state.data?.status;
      return status === "running" ? 3000 : false;
    },
  });
}
```

This covers 95% of background job status UX patterns. Save WebSockets for when you have a specific real-time requirement, like collaborative editing, live chat, or streaming AI output, not just a progress indicator.
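The polling hook assumes a status endpoint exists on the server. A minimal sketch of one, using the same hypothetical `db` helper as the earlier examples (the selected fields match the campaign columns used in the job code):

```typescript
// app/api/campaigns/[id]/status/route.ts
import { db } from "@/lib/db";
import { NextResponse } from "next/server";

export async function GET(
  req: Request,
  { params }: { params: { id: string } }
) {
  // Return just enough state for the UI to render a progress indicator.
  const campaign = await db.campaign.findUnique({
    where: { id: params.id },
    select: { status: true, sentCount: true, failedCount: true },
  });

  if (!campaign) {
    return NextResponse.json({ error: "Not found" }, { status: 404 });
  }

  return NextResponse.json(campaign);
}
```

Because the background job writes its progress to the database, the status endpoint stays platform-agnostic: it works identically whether the job ran via Inngest or Trigger.dev.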
No Trigger.dev at the Start (If You Are on Vercel)
Trigger.dev is a genuinely good tool, but it requires a persistent worker process. On Vercel's serverless model, that means running a separate container or service alongside your deployment. At an early stage, that is unnecessary complexity.
Start with Inngest. It fits Vercel natively. If you later move off Vercel to a containerized deployment, or if you need AI agent streaming with long-running LLM calls, revisit Trigger.dev. The migration path is not painful: your business logic stays the same; only the trigger/step wrapper changes.
Decision Guide: Inngest or Trigger.dev?
Pick Inngest if:
- You are deployed on Vercel, Netlify, or any serverless platform
- Your jobs complete within serverless timeout limits (10 minutes on Vercel Pro)
- You want zero infrastructure overhead: one API route and done
- You are at early/mid stage and want to ship fast
- Your jobs are discrete steps, not streaming processes
Pick Trigger.dev if:
- You run your own containers, VPS, or Kubernetes cluster
- You need jobs to run for more than 10-15 minutes without worrying about timeouts
- You are building AI agents with streaming LLM responses
- You want self-hosting: Trigger.dev is open-source and can run on your own infrastructure
- You need jobs with real-time log streaming during execution
Monitoring and Observability
Both platforms ship dashboards that show you every job run, its steps, logs, and retry history without any setup.
With Inngest, you get this out of the box at app.inngest.com. Every function run, every step, every retry is visible. You can replay failed runs directly from the dashboard.
With Trigger.dev, the dashboard at cloud.trigger.dev shows real-time logs streaming from your worker, per-task run history, and retry state. The local dev experience (trigger dev) also opens a live log view in your terminal.
Neither requires you to set up Grafana, write custom logging middleware, or configure alerting from scratch. The observability is included.
Summary
Both Inngest and Trigger.dev solve the same core problem: running background jobs reliably without managing Redis, BullMQ, and worker processes yourself. The choice between them comes down to your deployment model and how long your jobs run.
- On Vercel or serverless: Inngest is the clear choice. One API route, zero infrastructure, native fit.
- On containers or VMs: Trigger.dev gives you more control, longer runtimes, and self-hosting.
- Building AI agents with streaming: Trigger.dev handles this better than serverless-constrained Inngest.
- Starting out: Inngest wins on simplicity. You can always migrate later.
The bigger win is the same regardless of which you pick: you remove Redis, BullMQ, and a worker process from your stack entirely. At early-to-mid stage, that infrastructure simplification is worth more than any marginal feature difference between the two platforms.
Building a SaaS product and need help designing your background job architecture? Websyro Agency helps product teams make the right infrastructure decisions before they become expensive ones. Talk to us; the first consultation is free.
