High Level Design Series · Architecture Patterns · Part 3 · Post 39 of 70

CQRS & Event Sourcing

The Fundamental Problem

Traditional CRUD architectures use a single model for both reading and writing data. This creates a painful mismatch:

| Concern | Write Side Needs | Read Side Needs |
|---|---|---|
| Data shape | Normalized, enforces invariants | Denormalized, pre-joined for speed |
| Scaling | Consistent, serialized, often single-leader | Massively parallel, cacheable, many replicas |
| Throughput ratio | Typically 1–10% of total ops | Typically 90–99% of total ops |
| Optimization | Validation, domain rules, concurrency control | Indexes, materialized views, caching layers |
| History | Only current state | Often needs audit trail, time-travel queries |

When you force a single model to serve both, you get impedance mismatch: writes become slow because of read indexes, reads become slow because of normalization, and neither side can scale independently. CQRS and Event Sourcing solve this by splitting the model and changing what you store.

CQRS — Command Query Responsibility Segregation

CQRS is a pattern where the command model (writes) and the query model (reads) are completely separated — different data models, different services, often different databases.

Core Principles

Bertrand Meyer's CQS principle (1988) says: "Every method should either be a command that performs an action, or a query that returns data, but not both." CQRS extends this idea to the architectural level — separate entire subsystems.
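At the method level, CQS looks like this — a minimal sketch (the class and names are illustrative, not from any particular codebase):

```typescript
// A hypothetical ShoppingCart showing CQS at the method level
class ShoppingCart {
  private items: { sku: string; qty: number }[] = [];

  // Command: mutates state, returns nothing
  addItem(sku: string, qty: number): void {
    this.items.push({ sku, qty });
  }

  // Query: returns data, never mutates
  totalQuantity(): number {
    return this.items.reduce((sum, i) => sum + i.qty, 0);
  }
}
```

CQRS takes this same command/query split and applies it to whole subsystems instead of individual methods.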

Architecture Breakdown

┌──────────────────────────── WRITE SIDE ────────────────────────────┐
│                                                                     │
│  Client ──→ API Gateway ──→ Command Handler ──→ Domain Model        │
│                                                     │               │
│                                                     ▼               │
│                                              Write Database         │
│                                              (normalized, ACID)     │
│                                                     │               │
│                                                     ▼               │
│                                              Domain Events          │
│                                              published to bus       │
└─────────────────────────────────────────────────────────────────────┘
                                │
                    ┌───────────▼───────────┐
                    │     Message Bus /     │
                    │    Event Broker       │
                    │  (Kafka, RabbitMQ)    │
                    └───────────┬───────────┘
                                │
┌───────────────────────────────▼─── READ SIDE ──────────────────────┐
│                                                                     │
│  Event Handler / Projector ──→ Read Database(s)                     │
│                                (denormalized, pre-joined,           │
│                                 Elasticsearch, Redis, etc.)         │
│                                       │                             │
│                                       ▼                             │
│                          Query API ──→ Client                       │
└─────────────────────────────────────────────────────────────────────┘

The Command Flow in Detail

// 1. Client sends a command
POST /api/orders
{
  "type": "PlaceOrder",
  "customerId": "cust-42",
  "items": [
    { "productId": "prod-101", "quantity": 2, "price": 29.99 },
    { "productId": "prod-205", "quantity": 1, "price": 49.99 }
  ],
  "shippingAddress": { "city": "Seattle", "zip": "98101" }
}

// 2. Command handler validates & delegates to domain
class PlaceOrderHandler {
  async handle(cmd: PlaceOrder): Promise<void> {
    // Load aggregate from write store
    const customer = await this.customerRepo.load(cmd.customerId);
    const inventory = await this.inventoryRepo.checkAvailability(cmd.items);

    // Domain validation
    if (!customer.isActive) throw new BusinessRuleViolation("Inactive customer");
    if (!inventory.allAvailable) throw new BusinessRuleViolation("Out of stock");
    if (customer.outstandingBalance > customer.creditLimit)
      throw new BusinessRuleViolation("Credit limit exceeded");

    // Create order aggregate — this produces domain events
    const order = Order.create({
      customerId: cmd.customerId,
      items: cmd.items,
      shippingAddress: cmd.shippingAddress,
    });

    // Persist and publish events — note: two separate awaits are NOT atomic.
    // In production, use a transactional outbox so events aren't lost if the
    // publish fails after the save succeeds.
    await this.orderRepo.save(order);          // writes to DB
    await this.eventBus.publish(order.events); // OrderPlaced, InventoryReserved
  }
}

// 3. Domain events produced
{
  "type": "OrderPlaced",
  "orderId": "ord-9001",
  "customerId": "cust-42",
  "items": [...],
  "totalAmount": 109.97,
  "timestamp": "2026-04-15T10:23:45Z",
  "version": 1
}

The Query Flow in Detail

// Query reads from the optimized read model — no joins, no domain logic
GET /api/orders/ord-9001

// Read model (stored in Elasticsearch or Redis)
{
  "orderId": "ord-9001",
  "customerName": "Alice Johnson",       // pre-joined from customer data
  "customerEmail": "alice@example.com",   // pre-joined
  "status": "CONFIRMED",
  "items": [
    {
      "productName": "Wireless Mouse",   // pre-joined from product catalog
      "quantity": 2,
      "unitPrice": 29.99,
      "lineTotal": 59.98
    },
    {
      "productName": "USB-C Hub",
      "quantity": 1,
      "unitPrice": 49.99,
      "lineTotal": 49.99
    }
  ],
  "totalAmount": 109.97,
  "shippingAddress": "Seattle, WA 98101",
  "estimatedDelivery": "2026-04-20",      // pre-computed
  "placedAt": "2026-04-15T10:23:45Z"
}

// The read model is FLAT — no need to join orders + customers + products.
// It's shaped exactly for the UI. One query, one response.
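Because the read model is pre-shaped, the query side itself can stay almost trivially thin — a sketch (the `ReadStore` interface and names are hypothetical):

```typescript
// Sketch of the query side: one key lookup, no joins, no domain logic.
// ReadStore is a hypothetical async key-value interface.
interface ReadStore {
  get(key: string): Promise<unknown | null>;
}

class OrderQueryService {
  constructor(private readonly readStore: ReadStore) {}

  // GET /api/orders/:id becomes a single read-store lookup
  async getOrder(orderId: string): Promise<unknown | null> {
    return this.readStore.get(`order:${orderId}`);
  }
}
```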

The Projector — Building Read Models

A projector (also called an event handler or denormalizer) listens to domain events and updates the read model:

class OrderProjector {
  // Called when OrderPlaced event is received from the bus
  async onOrderPlaced(event: OrderPlacedEvent): Promise<void> {
    const customer = await this.customerReadStore.get(event.customerId);
    const products = await this.productReadStore.getMany(
      event.items.map(i => i.productId)
    );

    const readModel = {
      orderId: event.orderId,
      customerName: customer.name,
      customerEmail: customer.email,
      status: "CONFIRMED",
      items: event.items.map(item => ({
        productName: products[item.productId].name,
        quantity: item.quantity,
        unitPrice: item.price,
        lineTotal: item.quantity * item.price,
      })),
      totalAmount: event.totalAmount,
      shippingAddress: formatAddress(event.shippingAddress),
      estimatedDelivery: computeDeliveryDate(event.shippingAddress),
      placedAt: event.timestamp,
    };

    // Upsert into read store (Elasticsearch, DynamoDB, Redis, etc.)
    await this.readStore.upsert(`order:${event.orderId}`, readModel);
  }

  async onOrderShipped(event: OrderShippedEvent): Promise<void> {
    await this.readStore.update(`order:${event.orderId}`, {
      status: "SHIPPED",
      trackingNumber: event.trackingNumber,
      shippedAt: event.timestamp,
    });
  }

  async onOrderCancelled(event: OrderCancelledEvent): Promise<void> {
    await this.readStore.update(`order:${event.orderId}`, {
      status: "CANCELLED",
      cancelledAt: event.timestamp,
      cancellationReason: event.reason,
    });
  }
}

Interactive: CQRS Flow

[Interactive demo — step through the full CQRS lifecycle, from command submission to query response.]

Event Sourcing — Store Events, Not State

Instead of storing the current state of an entity, Event Sourcing stores every state change as an immutable event. The current state is derived by replaying all events from the beginning (or from a snapshot).
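In code, the shift is from mutating a row to folding over a list of immutable facts. A minimal sketch (event and function names are illustrative):

```typescript
// Illustrative event types for a bank account (names are hypothetical)
type AccountEvent =
  | { type: "AccountCreated"; accountId: string }
  | { type: "MoneyDeposited"; amount: number }
  | { type: "MoneyWithdrawn"; amount: number };

// Current state is a pure fold (reduce) over the event sequence —
// nothing is ever overwritten, only appended and re-derived
function balanceOf(events: AccountEvent[]): number {
  return events.reduce((balance, e) => {
    switch (e.type) {
      case "MoneyDeposited": return balance + e.amount;
      case "MoneyWithdrawn": return balance - e.amount;
      default: return balance;
    }
  }, 0);
}
```

Every derived value — balance, status, anything — is a function of the event history, which is what makes time-travel queries and replays possible later in this post.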

CRUD vs Event Sourcing

| Aspect | CRUD | Event Sourcing |
|---|---|---|
| What's stored | Current state (latest row) | Sequence of all events |
| UPDATE operation | Overwrites previous value | Appends new event (old events untouched) |
| History | Lost unless you add audit tables | Complete history by default |
| Current state | Direct read from DB | Replay events (or read snapshot) |
| Storage cost | O(entities) | O(events) — grows over time |
| Debugging | "What happened?" — unknown | Replay the exact sequence |

Event Store Schema

An event store is an append-only database optimized for writing and reading event streams. Here is a concrete schema:

-- Core event store table
CREATE TABLE events (
    -- Global ordering: monotonically increasing (note: BIGSERIAL sequences can
    -- leave gaps on rollback, so consumers should track positions rather than
    -- assume contiguous numbering)
    global_position   BIGSERIAL PRIMARY KEY,

    -- Stream identification (e.g., "BankAccount-acct-42")
    stream_id         VARCHAR(255) NOT NULL,

    -- Position within the stream (0, 1, 2, ...)
    stream_position   INTEGER NOT NULL,

    -- Event payload
    event_type        VARCHAR(255) NOT NULL,    -- e.g., "MoneyDeposited"
    data              JSONB NOT NULL,           -- event payload
    metadata          JSONB DEFAULT '{}',       -- correlation ID, causation ID, user

    -- Timestamps
    created_at        TIMESTAMPTZ NOT NULL DEFAULT NOW(),

    -- Optimistic concurrency: unique per stream
    UNIQUE (stream_id, stream_position)
);

-- Note: no extra indexes are needed for the common access paths — the
-- UNIQUE (stream_id, stream_position) constraint already backs ordered
-- stream reads, and the primary key covers global ordering for projections.

-- Snapshot table for performance
CREATE TABLE snapshots (
    stream_id         VARCHAR(255) PRIMARY KEY,
    stream_position   INTEGER NOT NULL,          -- event position of snapshot
    data              JSONB NOT NULL,             -- serialized aggregate state
    created_at        TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Subscriptions / checkpoints for projectors
CREATE TABLE projector_checkpoints (
    projector_name    VARCHAR(255) PRIMARY KEY,
    last_position     BIGINT NOT NULL DEFAULT 0, -- global_position last processed
    updated_at        TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

Appending Events (Write Path)

-- Appending an event with optimistic concurrency control
-- The UNIQUE constraint on (stream_id, stream_position) prevents conflicts.
-- If two writers try to append at the same position, one fails → retry.

INSERT INTO events (stream_id, stream_position, event_type, data, metadata)
VALUES (
  'BankAccount-acct-42',          -- stream_id
  3,                               -- expected next position
  'MoneyDeposited',                -- event type
  '{"amount": 100.00, "currency": "USD", "description": "Payroll"}',
  '{"correlationId": "tx-7891", "userId": "user-5", "timestamp": "2026-04-15T10:30:00Z"}'
);

-- If stream_position 3 already exists → UNIQUE violation → concurrency conflict!
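Wrapped in application code, the retry loop might look like this sketch (it assumes a pg-style `db.query` and PostgreSQL's `23505` unique-violation error code; a real command handler would reload the aggregate and re-run its domain logic before retrying, rather than blindly advancing the position as done here for illustration):

```typescript
// Sketch: append with optimistic-concurrency retry. Assumes a pg-style
// `db.query` and PostgreSQL's unique-violation error code "23505".
async function appendWithRetry(
  db: { query(sql: string, params: unknown[]): Promise<unknown> },
  streamId: string,
  expectedPosition: number,
  eventType: string,
  data: object,
  maxRetries = 3
): Promise<number> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      await db.query(
        `INSERT INTO events (stream_id, stream_position, event_type, data)
         VALUES ($1, $2, $3, $4)`,
        [streamId, expectedPosition, eventType, JSON.stringify(data)]
      );
      return expectedPosition; // success: position at which the event landed
    } catch (err: any) {
      if (err.code !== "23505") throw err; // not a concurrency conflict
      // Conflict: someone else appended at this position first. A real handler
      // would reload + re-validate here; we only advance to show the loop.
      expectedPosition++;
    }
  }
  throw new Error(`Append to ${streamId} failed after ${maxRetries} retries`);
}
```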

Loading an Aggregate (Replay)

class BankAccount {
  id: string;
  balance: number = 0;
  status: string = "UNKNOWN";
  version: number = -1;

  // Rebuild state by applying each event in order
  static fromEvents(events: DomainEvent[]): BankAccount {
    const account = new BankAccount();
    for (const event of events) {
      account.apply(event);
      account.version = event.streamPosition;
    }
    return account;
  }

  // Each event type has a specific state transition
  private apply(event: DomainEvent): void {
    switch (event.type) {
      case "AccountCreated":
        this.id = event.data.accountId;
        this.balance = 0;
        this.status = "ACTIVE";
        break;
      case "MoneyDeposited":
        this.balance += event.data.amount;
        break;
      case "MoneyWithdrawn":
        this.balance -= event.data.amount;
        break;
      case "AccountClosed":
        this.status = "CLOSED";
        break;
    }
  }
}

// Load from event store
async function loadAccount(accountId: string): Promise<BankAccount> {
  const streamId = `BankAccount-${accountId}`;

  // Check for snapshot first
  const snapshot = await db.query(
    `SELECT * FROM snapshots WHERE stream_id = $1`, [streamId]
  );

  let events: DomainEvent[];
  let account: BankAccount;

  if (snapshot.rows.length > 0) {
    // Resume from snapshot
    account = BankAccount.fromSnapshot(snapshot.rows[0].data);
    account.version = snapshot.rows[0].stream_position;

    // Load only events AFTER the snapshot
    events = (await db.query(
      `SELECT * FROM events
       WHERE stream_id = $1 AND stream_position > $2
       ORDER BY stream_position`,
      [streamId, snapshot.rows[0].stream_position]
    )).rows;
  } else {
    // No snapshot — replay all events
    events = (await db.query(
      `SELECT * FROM events
       WHERE stream_id = $1
       ORDER BY stream_position`,
      [streamId]
    )).rows;
    account = new BankAccount();
  }

  for (const event of events) {
    account.apply(event);
    account.version = event.stream_position;
  }

  return account;
}

Snapshots for Performance

Replaying thousands of events for every command is expensive. Snapshots periodically capture the aggregate state:

// Take a snapshot every N events
async function saveWithSnapshot(
  account: BankAccount,
  newEvents: DomainEvent[],
  snapshotInterval: number = 100
): Promise<void> {
  const streamId = `BankAccount-${account.id}`;

  // Append new events
  for (const event of newEvents) {
    await appendEvent(streamId, event);
  }

  // Check if we should create a snapshot
  if (account.version % snapshotInterval === 0) {
    await db.query(
      `INSERT INTO snapshots (stream_id, stream_position, data)
       VALUES ($1, $2, $3)
       ON CONFLICT (stream_id) DO UPDATE SET
         stream_position = $2, data = $3, created_at = NOW()`,
      [streamId, account.version, JSON.stringify(account.toSnapshot())]
    );
  }
}

// Snapshot payload
{
  "accountId": "acct-42",
  "balance": 5320.50,
  "status": "ACTIVE",
  "snapshotPosition": 200,      // event # this snapshot represents
  "takenAt": "2026-04-15T12:00:00Z"
}

// Without snapshot: replay events 0..250 (apply 250 events)
// With snapshot:    load snapshot at 200, replay events 201..250 (apply 50 events)

Snapshot strategies:
  • Every N events — simplest; snapshot at positions 100, 200, 300…
  • Time-based — snapshot if aggregate hasn't been snapshotted in the last hour
  • On read — when loading exceeds a threshold number of events, create a snapshot for next time
  • Background process — periodically scan for aggregates that need new snapshots
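The "on read" strategy from the list above can be sketched generically (all parameter names are illustrative; the callbacks stand in for the real event-store and snapshot-store calls):

```typescript
// Generic sketch of the "on read" strategy: if loading had to replay more than
// `threshold` events past the last snapshot, persist a fresh snapshot so the
// next load is cheap.
interface Snap<S> { state: S; position: number }

async function loadWithOnReadSnapshot<S, E>(
  readSnapshot: () => Promise<Snap<S> | null>,
  readEventsAfter: (position: number) => Promise<E[]>,
  initialState: S,
  apply: (state: S, event: E) => S,
  saveSnapshot: (snap: Snap<S>) => Promise<void>,
  threshold = 50
): Promise<S> {
  const snap = await readSnapshot();
  let state = snap ? snap.state : initialState;
  let position = snap ? snap.position : -1;

  const events = await readEventsAfter(position);
  for (const e of events) {
    state = apply(state, e);
    position++;
  }

  // Replayed too much? Snapshot now so the next read starts from here.
  if (events.length > threshold) {
    await saveSnapshot({ state, position });
  }
  return state;
}
```

The nice property of this variant is that snapshotting cost is only paid by aggregates that are actually read, and only when replay has become expensive.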

Interactive: Event Sourcing Replay

[Interactive demo — watch how state is rebuilt by replaying events, and how snapshots accelerate the process.]

CQRS + Event Sourcing Together

CQRS and Event Sourcing are independent patterns, but they are natural partners:

┌─── COMMAND SIDE ───────────────────────────────────────────────────┐
│                                                                     │
│  Command ──→ Command Handler ──→ Load Aggregate (replay events)     │
│                                       │                             │
│                                  Domain Logic                       │
│                                       │                             │
│                                  New Events ──→ Event Store         │
│                                                  (append-only)      │
└────────────────────────────────────────┬────────────────────────────┘
                                         │
                              Events published to bus
                                         │
┌────────────────────────────────────────▼──── QUERY SIDE ───────────┐
│                                                                     │
│  Projector A ──→ Elasticsearch (full-text search)                   │
│  Projector B ──→ Redis (dashboard counters)                         │
│  Projector C ──→ PostgreSQL (relational queries)                    │
│  Projector D ──→ Neo4j (graph queries)                              │
│                                                                     │
│  Query API reads from the appropriate read store                    │
└─────────────────────────────────────────────────────────────────────┘

The event store serves as both the write-side persistence and the source for read-side projections. Multiple projectors can consume the same event stream and build completely different read models — one for search, one for analytics, one for the dashboard.
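Each projector catches up on the shared stream independently, tracking its own progress via the `projector_checkpoints` table from the schema above. A hedged sketch of such a catch-up loop (assumes a pg-style `db.query` returning `{ rows }`; a real runner would poll or subscribe instead of exiting when caught up):

```typescript
// Catch-up projection loop against the events + projector_checkpoints tables.
async function runProjector(
  db: { query(sql: string, params: unknown[]): Promise<{ rows: any[] }> },
  projectorName: string,
  handle: (event: any) => Promise<void>,
  batchSize = 100
): Promise<void> {
  // Resume from the last processed global_position
  const cp = await db.query(
    `SELECT last_position FROM projector_checkpoints WHERE projector_name = $1`,
    [projectorName]
  );
  let position = cp.rows[0]?.last_position ?? 0;

  for (;;) {
    const batch = await db.query(
      `SELECT * FROM events WHERE global_position > $1
       ORDER BY global_position LIMIT $2`,
      [position, batchSize]
    );
    if (batch.rows.length === 0) break; // caught up

    for (const event of batch.rows) {
      await handle(event);              // handler must be idempotent
      position = event.global_position;
    }
    // Persist progress so a restart resumes where we left off
    await db.query(
      `INSERT INTO projector_checkpoints (projector_name, last_position)
       VALUES ($1, $2)
       ON CONFLICT (projector_name) DO UPDATE SET last_position = $2, updated_at = NOW()`,
      [projectorName, position]
    );
  }
}
```

Because each projector owns its checkpoint, adding a new read model is just deploying a new consumer with `last_position = 0` — it replays history and builds its view from scratch.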

Full End-to-End Example: Bank Transfer

// 1. COMMAND: TransferMoney
{
  "type": "TransferMoney",
  "fromAccountId": "acct-42",
  "toAccountId": "acct-99",
  "amount": 500.00,
  "description": "Rent payment"
}

// 2. COMMAND HANDLER
class TransferMoneyHandler {
  async handle(cmd: TransferMoney): Promise<void> {
    // Load both aggregates by replaying their events
    const fromAccount = await this.repo.load(cmd.fromAccountId);
    const toAccount = await this.repo.load(cmd.toAccountId);

    // Domain validation
    fromAccount.withdraw(cmd.amount, cmd.description);  // throws if insufficient funds
    toAccount.deposit(cmd.amount, `Transfer from ${cmd.fromAccountId}`);

    // Save both — each appends events to their stream
    await this.repo.save(fromAccount);  // MoneyWithdrawn event
    await this.repo.save(toAccount);    // MoneyDeposited event

    // Publish: TransferCompleted event for saga/process manager
  }
}

// 3. EVENTS STORED (immutable, append-only)
// Stream: BankAccount-acct-42
{ "type": "MoneyWithdrawn", "amount": 500.00, "description": "Rent payment",
  "balance_after": 4820.50, "stream_position": 203 }

// Stream: BankAccount-acct-99
{ "type": "MoneyDeposited", "amount": 500.00, "description": "Transfer from acct-42",
  "balance_after": 12500.00, "stream_position": 87 }

// 4. PROJECTORS update read models asynchronously
// → Account balance dashboard (Redis)
// → Transaction history (Elasticsearch)
// → Monthly statement (PostgreSQL)
// → Fraud detection (Kafka Streams)

// 5. QUERY: Get account summary
GET /api/accounts/acct-42/summary
{
  "accountId": "acct-42",
  "balance": 4820.50,
  "recentTransactions": [
    { "date": "2026-04-15", "desc": "Rent payment", "amount": -500.00 },
    { "date": "2026-04-14", "desc": "Payroll", "amount": 3200.00 },
    ...
  ]
}

Temporal Queries & Audit Trails

One of the most powerful benefits of Event Sourcing: you can answer "What was the state at any point in time?"

// Time-travel query: What was the balance on March 1st?
async function getBalanceAt(accountId: string, asOf: Date): Promise<number> {
  const streamId = `BankAccount-${accountId}`;

  // Load events up to (but not after) the given timestamp
  const events = await db.query(
    `SELECT * FROM events
     WHERE stream_id = $1 AND created_at <= $2
     ORDER BY stream_position`,
    [streamId, asOf]
  );

  const account = BankAccount.fromEvents(events.rows);
  return account.balance;
}

// Audit trail: Who did what, when?
SELECT
  event_type,
  data->>'amount' AS amount,
  data->>'description' AS description,
  metadata->>'userId' AS performed_by,
  metadata->>'correlationId' AS transaction_id,
  created_at
FROM events
WHERE stream_id = 'BankAccount-acct-42'
ORDER BY stream_position;

-- Result:
-- AccountCreated  | null  | null              | system  | tx-001 | 2025-01-15
-- MoneyDeposited  | 1000  | Initial deposit   | user-5  | tx-002 | 2025-01-15
-- MoneyDeposited  | 3200  | Payroll           | system  | tx-100 | 2025-02-01
-- MoneyWithdrawn  | 500   | Rent payment      | user-5  | tx-101 | 2025-02-05
-- ...every single change is recorded forever

Debugging by Replaying

When a bug produces incorrect state, you can replay the exact sequence of events to reproduce it:

// Replay to debug: "Why does account acct-42 have a negative balance?"
const events = await loadAllEvents("BankAccount-acct-42");
const account = new BankAccount();

for (const event of events) {
  console.log(`Event #${event.stream_position}: ${event.event_type}`);
  console.log(`  Before: balance=${account.balance}`);
  account.apply(event);
  console.log(`  After:  balance=${account.balance}`);

  if (account.balance < 0) {
    console.error(`BUG FOUND at event #${event.stream_position}!`);
    console.error(`Event data:`, event.data);
    console.error(`Metadata:`, event.metadata);
    break;
  }
}

// Output:
// Event #201: MoneyDeposited
//   Before: balance=200.00
//   After:  balance=3400.00
// Event #202: MoneyWithdrawn
//   Before: balance=3400.00
//   After:  balance=2900.00
// Event #203: MoneyWithdrawn   ← BUG: concurrent withdrawal not caught
//   Before: balance=2900.00
//   After:  balance=-100.00
// BUG FOUND at event #203!

Eventual Consistency & Its Challenges

In CQRS, the read model is eventually consistent with the write model. After a command succeeds, there is a propagation delay before the read model reflects the change.

Consistency Timeline

Time ──────────────────────────────────────────────────────────────▶

  T0: Client sends "Deposit $100" command
  T1: Write model processes command, appends event         [~5ms]
  T2: Event published to message bus                       [~10ms]
  T3: Projector receives event                             [~50ms]
  T4: Projector updates read model                         [~20ms]
  T5: Read model reflects new balance                      [total ~85ms]

  Problem: If client queries at T2 (before T5), they see STALE data.
  The deposit succeeded, but the balance hasn't updated yet.

Strategies for Handling Eventual Consistency

| Strategy | How It Works | Trade-off |
|---|---|---|
| Read-your-writes | After a command, route that user's next read to the write model (or wait for the projection) | Adds complexity to routing |
| Optimistic UI | Client updates the UI immediately on command success, before the read model confirms | May need rollback if the command fails |
| Polling / SSE | Client polls or subscribes to updates, refreshes when the projection is ready | Slight delay visible to the user |
| Version-based | Command returns a version number; client includes it in the query, which waits until the read model catches up | Increases query latency |
| Synchronous projection | Update the read model in the same transaction as the write | Strong consistency, but no independent scaling — defeats the scaling purpose of CQRS |

// Version-based read-your-writes example
// Command response includes the version
POST /api/accounts/acct-42/deposit → { "version": 204 }

// Client passes version to query
GET /api/accounts/acct-42/summary?after_version=204

// Server waits (with timeout) for read model to reach version 204
async function queryWithConsistency(accountId, minVersion, timeout = 5000) {
  const start = Date.now();
  while (Date.now() - start < timeout) {
    const readModel = await readStore.get(`account:${accountId}`);
    if (readModel.version >= minVersion) return readModel;
    await sleep(50); // poll every 50ms
  }
  throw new Error("Read model consistency timeout");
}

Challenges & Complexity

Event Versioning & Schema Evolution

Events are stored forever. But your domain evolves. How do you handle schema changes?

// Version 1 — original event
{ "type": "CustomerRegistered", "version": 1,
  "data": { "name": "Alice Johnson", "email": "alice@example.com" } }

// Version 2 — name split into first/last
{ "type": "CustomerRegistered", "version": 2,
  "data": { "firstName": "Bob", "lastName": "Smith", "email": "bob@example.com" } }

// Upcaster: transforms v1 events to v2 format on read
class CustomerRegisteredUpcaster {
  canUpcast(event) { return event.type === "CustomerRegistered" && event.version === 1; }

  upcast(event) {
    const [firstName, ...rest] = event.data.name.split(" ");
    return {
      ...event,
      version: 2,
      data: {
        firstName,
        lastName: rest.join(" ") || "",
        email: event.data.email,
      },
    };
  }
}
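On the read path, stored events are typically passed through the registered upcasters before the domain sees them, so aggregates and projectors only ever deal with the latest schema version. A sketch (the `Upcaster` interface is illustrative):

```typescript
// Run every stored event through the registered upcasters on read.
interface Upcaster {
  canUpcast(event: any): boolean;
  upcast(event: any): any;
}

function upcastAll(events: any[], upcasters: Upcaster[]): any[] {
  return events.map(original => {
    let event = original;
    // Loop so an old event can step through v1 → v2 → v3 chains
    let changed = true;
    while (changed) {
      changed = false;
      for (const u of upcasters) {
        if (u.canUpcast(event)) {
          event = u.upcast(event);
          changed = true;
        }
      }
    }
    return event;
  });
}
```

The stored events stay untouched — only their in-memory representation is upgraded, which preserves the immutability of the event log.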

Idempotency

Projectors must be idempotent — processing the same event twice must produce the same result:

// BAD: not idempotent — reprocessing doubles the balance
async onMoneyDeposited(event) {
  await db.query(`UPDATE accounts SET balance = balance + $1 WHERE id = $2`,
    [event.data.amount, event.data.accountId]);
}

// GOOD: idempotent — uses event position as dedup key
async onMoneyDeposited(event) {
  await db.query(
    `UPDATE accounts
     SET balance = balance + $1, last_event_position = $2
     WHERE id = $3 AND last_event_position < $2`,
    [event.data.amount, event.globalPosition, event.data.accountId]
  );
}

Other Challenges

  • Operational complexity — more moving parts (event store, bus, projectors) to deploy and monitor
  • Projection rebuilds — replaying the full event history to rebuild a read model can take hours at scale
  • Learning curve — teams must unlearn CRUD habits and reason about eventual consistency everywhere

When to Use CQRS & Event Sourcing

Strong Fit ✓

| Domain | Why It Fits | Real-World Example |
|---|---|---|
| Financial systems | Regulatory requirement for complete audit trail; no data can ever be deleted | Banking ledgers, trading platforms |
| Audit-heavy domains | Need to know who changed what, when, and why — compliance, healthcare | Electronic health records, insurance claims |
| Collaborative editing | Changes must be mergeable; conflict resolution via event ordering | Google Docs, Figma, Notion |
| Complex domain logic | Domain models with intricate invariants benefit from explicit state transitions | Booking systems, supply chain |
| High read/write asymmetry | 10,000× more reads than writes — scale them independently | Social media feeds, e-commerce catalogs |

Poor Fit ✗

Simple CRUD applications with no audit requirements, small teams without event-driven experience, and domains that demand strong read-after-write consistency everywhere are usually better served by a single model.

Hybrid approach: You don't have to go all-in. Many teams use CQRS without Event Sourcing (separate read/write DBs, but writes are still CRUD). Or they use Event Sourcing only for specific bounded contexts (the payments subdomain) while keeping CRUD elsewhere.

Real-World Architecture

Technology Stack Options

| Component | Options | Notes |
|---|---|---|
| Event Store | EventStoreDB, Marten (PostgreSQL), custom on PostgreSQL/DynamoDB | EventStoreDB is purpose-built; PostgreSQL works well for most scales |
| Message Bus | Kafka, RabbitMQ, Amazon SNS/SQS, NATS | Kafka for high throughput + replay; RabbitMQ for simpler setups |
| Read Store (search) | Elasticsearch, OpenSearch, Algolia | Full-text search, faceted queries |
| Read Store (fast) | Redis, DynamoDB, Memcached | Sub-millisecond reads for dashboards, counters |
| Read Store (relational) | PostgreSQL, MySQL (denormalized tables) | Complex relational queries on denormalized data |
| Framework | Axon (Java), Marten (.NET), Commanded (Elixir), EventSauce (PHP) | Purpose-built CQRS/ES frameworks handle the boilerplate |

Production Architecture Diagram

                    ┌──────────────┐
                    │   API        │
                    │   Gateway    │
                    └─────┬────────┘
                          │
             ┌────────────┴────────────┐
             ▼                         ▼
    ┌──────────────┐          ┌──────────────┐
    │  Command API │          │  Query API   │
    │  (writes)    │          │  (reads)     │
    └──────┬───────┘          └──────▲───────┘
           │                         │
           ▼                         │
    ┌──────────────┐          ┌──────┴───────┐
    │  Command     │          │  Read Store  │
    │  Handlers    │          │  (ES + Redis │
    └──────┬───────┘          │  + Postgres) │
           │                  └──────▲───────┘
           ▼                         │
    ┌──────────────┐          ┌──────┴───────┐
    │  Event Store │────────▶ │  Projectors  │
    │  (PG/ESDB)   │  Kafka   │  (consumers) │
    └──────────────┘          └──────────────┘
           │
           ▼
    ┌──────────────┐
    │  Snapshots   │
    └──────────────┘

Interview Checklist

  1. Explain CQRS — separate models for reads and writes. Commands change state, queries don't.
  2. Explain Event Sourcing — store events, not state. Rebuild by replaying.
  3. Draw the architecture — write API → command handler → event store → projector → read store → query API.
  4. Discuss consistency — eventual consistency between write and read sides. Strategies to handle it.
  5. Mention snapshots — optimization for aggregates with many events.
  6. When to use — financial, audit, collaborative. Not for simple CRUD.
  7. Challenges — complexity, event versioning, projection rebuilds, learning curve.
  8. Relate to other patterns — Event-Driven Architecture, DDD Aggregates, Sagas.

Summary