Architecture & Design

The Outbox Pattern in Prisma: Reliable Events Without Kafka

By Technspire Team
February 3, 2026

Every service that must both persist state and publish an event faces the same dual-write problem: the database commit and the message publish are two separate operations, and either can fail without the other. The transactional outbox pattern solves this elegantly. And you do not need Kafka to implement it. A single table, Prisma transactions, and a tiny relay worker are enough for most B2B workloads.

The Problem in One Paragraph

You receive a request to create an order. You write the order row to the database and publish an OrderCreated event. If the database write succeeds but the broker publish fails, you have a phantom order: persisted, but invisible to downstream consumers. If the publish succeeds but the database commit fails, you have told consumers about an order that does not exist. Retries make both cases worse. Distributed transactions spanning the database and the broker are possible but expensive. The outbox pattern sidesteps the problem entirely by making the commitment to publish part of the database transaction.

The Pattern

Instead of publishing to the broker from the application, write the event row into an outbox table in the same transaction as the business write. A separate relay worker polls the outbox, publishes messages to the real broker, and marks them sent. The application's transactional boundary now protects both the state change and the commitment to publish.

The Prisma Schema

// prisma/schema.prisma
model OutboxEvent {
  id          String    @id @default(cuid())
  aggregate   String                          // e.g. "Order"
  aggregateId String
  type        String                          // e.g. "OrderCreated"
  payload     String    @db.NVarChar(Max)     // JSON; NVarChar(Max), since NText is deprecated in SQL Server
  createdAt   DateTime  @default(now())
  sentAt      DateTime?
  attempts    Int       @default(0)
  lastError   String?   @db.NVarChar(Max)

  @@index([sentAt, createdAt])                // covers the relay's "unsent, oldest first" query
}

The Transactional Write

import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

// Write business state + outbox in the same transaction
async function createOrder(input: CreateOrderInput) {
  return prisma.$transaction(async (tx) => {
    const order = await tx.order.create({ data: toOrderRow(input) });

    await tx.outboxEvent.create({
      data: {
        aggregate: 'Order',
        aggregateId: order.id,
        type: 'OrderCreated',
        payload: JSON.stringify({
          orderId: order.id,
          customerId: order.customerId,
          total: order.total,
        }),
      },
    });

    return order;
  });
}

Either both rows commit or neither does. There is no publish that can fail after the business row is written.
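
The relay worker in the next section publishes through a broker object this article never defines. One minimal shape it could take, plus an in-memory implementation that is handy in tests (the interface and class names here are assumptions, not any particular library's API):

```typescript
// Hypothetical broker abstraction for the relay worker to publish through.
interface PublishOptions {
  messageId: string; // forwarded so consumers can deduplicate re-sends
}

interface Broker {
  publish(aggregate: string, type: string, payload: unknown, opts: PublishOptions): Promise<void>;
}

// In-memory stand-in: records messages instead of sending them anywhere.
class InMemoryBroker implements Broker {
  readonly published: Array<{ aggregate: string; type: string; payload: unknown; messageId: string }> = [];

  async publish(aggregate: string, type: string, payload: unknown, opts: PublishOptions): Promise<void> {
    this.published.push({ aggregate, type, payload, messageId: opts.messageId });
  }
}
```

Any real transport (RabbitMQ, Azure Service Bus, Redis streams) can sit behind the same interface, which keeps the relay loop transport-agnostic.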

The Relay Worker

// workers/outbox-relay.ts
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();
const BATCH = 50;
// `broker` is any client exposing publish(aggregate, type, payload, opts)

async function relayOnce() {
  const events = await prisma.outboxEvent.findMany({
    where: { sentAt: null },
    orderBy: { createdAt: 'asc' },
    take: BATCH,
  });

  for (const evt of events) {
    try {
      await broker.publish(evt.aggregate, evt.type, JSON.parse(evt.payload), {
        messageId: evt.id,                       // idempotency key for consumers
      });
      await prisma.outboxEvent.update({
        where: { id: evt.id },
        data: { sentAt: new Date() },
      });
    } catch (err) {
      const message = err instanceof Error ? err.message : String(err);
      await prisma.outboxEvent.update({
        where: { id: evt.id },
        data: { attempts: { increment: 1 }, lastError: message.slice(0, 2000) },
      });
      // stop the batch on error so later events cannot overtake this one
      break;
    }
  }
}

// Poll once per second; schedule the next run only after the current one
// finishes, so a slow batch never overlaps the next poll.
async function relayLoop() {
  try {
    await relayOnce();
  } catch (err) {
    console.error(err);
  }
  setTimeout(relayLoop, 1000);
}

relayLoop();
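
As written, a failing event is retried on every poll, which can hammer a struggling broker. A sketch of capped exponential backoff driven by the attempts column (the constants and function names are assumptions, and the lastFailedAt timestamp would be an extra column the schema above does not yet have):

```typescript
// Delay before the next publish attempt: doubles per failure, capped.
const BASE_MS = 1_000;   // first retry roughly one second after a failure
const MAX_MS = 300_000;  // never wait longer than five minutes

function backoffMs(attempts: number): number {
  if (attempts <= 0) return 0; // never failed: eligible immediately
  return Math.min(BASE_MS * 2 ** (attempts - 1), MAX_MS);
}

// An event is due once its backoff window has elapsed since the last failure.
function isDue(attempts: number, lastFailedAt: Date | null, now: Date): boolean {
  if (lastFailedAt === null) return true;
  return now.getTime() - lastFailedAt.getTime() >= backoffMs(attempts);
}
```

The relay's where clause would then filter with isDue-style logic instead of retrying every unsent row each second.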

Ordering, Exactly-Once, and Other Promises

  • Per-aggregate ordering. The pattern preserves the order in which events were written. To guarantee per-aggregate order on the consumer side, partition the broker topic by aggregateId.
  • At-least-once delivery. A crash between publish and sentAt update results in a re-send on restart. Consumers must be idempotent. Use the messageId as an idempotency key.
  • Exactly-once processing (end-to-end) is achieved by the consumer tracking handled message IDs in its own database. The broker alone cannot guarantee this; the combination of outbox + idempotent consumer does.
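
The idempotent-consumer half of that promise can be sketched with an in-memory seen-set keyed by messageId; a real consumer would persist handled IDs in its own database instead (all names here are illustrative):

```typescript
// Deduplicating consumer: runs the handler at most once per messageId.
type Handler = (payload: unknown) => void;

class IdempotentConsumer {
  private readonly seen = new Set<string>();

  constructor(private readonly handle: Handler) {}

  // Returns true if the message was processed, false if it was a duplicate.
  consume(messageId: string, payload: unknown): boolean {
    if (this.seen.has(messageId)) return false; // re-delivery: skip
    this.handle(payload);
    this.seen.add(messageId);
    return true;
  }
}
```

Persisting the seen-set in the same transaction as the handler's own state change is what turns at-least-once delivery into exactly-once processing.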

Operational Concerns

  • Run a single relay instance per partition. Otherwise two workers can publish the same event before either marks it sent. Use a leader-election pattern or a single-instance deployment constraint.
  • Archive sent events. The outbox grows forever if you do not prune. A daily job moving sentAt IS NOT NULL rows older than 7 days to a cold archive keeps the working set small.
  • Monitor the lag. The oldest createdAt with sentAt IS NULL is your publish lag. Alert when it exceeds acceptable bounds.
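
The lag check in the last bullet reduces to one ordered query plus a subtraction. A sketch of the pure part, with the Prisma query left as a comment (the threshold value is an assumption):

```typescript
// Publish lag: age of the oldest unsent event, zero when the outbox is drained.
// The oldest unsent row would come from a query along the lines of:
//   prisma.outboxEvent.findFirst({ where: { sentAt: null }, orderBy: { createdAt: 'asc' } })
function publishLagMs(oldestUnsentCreatedAt: Date | null, now: Date): number {
  if (oldestUnsentCreatedAt === null) return 0; // nothing pending
  return Math.max(0, now.getTime() - oldestUnsentCreatedAt.getTime());
}

const ALERT_THRESHOLD_MS = 60_000; // alert when events sit unsent for over a minute

function lagAlert(lagMs: number): boolean {
  return lagMs > ALERT_THRESHOLD_MS;
}
```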

Change Data Capture as the Alternative

At higher scale, polling becomes too chatty and CDC (e.g. Debezium on Azure SQL) tailing the transaction log is the upgrade path. The outbox table stays; the poll is replaced by a log reader. For most B2B applications with event rates under a few hundred per second, polling is simpler and entirely sufficient.

When to Reach for Kafka Instead

Kafka earns its complexity at high fan-out, multi-consumer replay, and cross-team event infrastructure. For a single service that needs reliable event publishing, the outbox is shorter, simpler, and easier to reason about. Start with the outbox; graduate if scale demands it.
