CQRS & Event Sourcing
The Fundamental Problem
Traditional CRUD architectures use a single model for both reading and writing data. This creates a painful mismatch:
| Concern | Write Side Needs | Read Side Needs |
|---|---|---|
| Data shape | Normalized, enforces invariants | Denormalized, pre-joined for speed |
| Scaling | Consistent, serialized, often single-leader | Massively parallel, cacheable, many replicas |
| Throughput ratio | Typically 1–10% of total ops | Typically 90–99% of total ops |
| Optimization | Validation, domain rules, concurrency control | Indexes, materialized views, caching layers |
| History | Only current state | Often needs audit trail, time-travel queries |
When you force a single model to serve both, you get impedance mismatch: writes become slow because of read indexes, reads become slow because of normalization, and neither side can scale independently. CQRS and Event Sourcing solve this by splitting the model and changing what you store.
CQRS — Command Query Responsibility Segregation
CQRS is a pattern where the command model (writes) and the query model (reads) are completely separated — different data models, different services, often different databases.
Core Principles
- Commands are imperative: PlaceOrder, TransferFunds, CancelReservation. They express intent to change state. They can be rejected (validation fails, business rule violated).
- Queries are interrogative: GetOrderDetails, ListRecentTransactions. They never modify state. They always succeed (returning empty is still a success).
- Write model enforces invariants and domain logic. It is the source of truth for how data changes.
- Read model is an optimized projection — a materialized view shaped for the UI/API consumer.
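The split can be sketched in a few lines of TypeScript (all names here are illustrative, not part of any framework): a command goes through a handler that may reject it, while a query reads a separate projection and never mutates anything.

```typescript
// Minimal CQRS sketch: separate write model and read projection.
type PlaceOrder = { type: "PlaceOrder"; orderId: string; total: number };

const writeModel = new Map<string, { total: number }>();   // normalized source of truth
const readModel = new Map<string, { summary: string }>();  // denormalized projection

function handlePlaceOrder(cmd: PlaceOrder): void {
  // Commands can be rejected by domain rules.
  if (cmd.total <= 0) throw new Error("BusinessRuleViolation: empty order");
  writeModel.set(cmd.orderId, { total: cmd.total });
  // The projection is maintained separately (synchronously here for brevity;
  // in a real system a projector consumes events asynchronously).
  readModel.set(cmd.orderId, { summary: `Order ${cmd.orderId}: $${cmd.total}` });
}

function getOrderSummary(orderId: string): string {
  // Queries never mutate; an empty result is still a success.
  return readModel.get(orderId)?.summary ?? "";
}

handlePlaceOrder({ type: "PlaceOrder", orderId: "ord-1", total: 42 });
```
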
Architecture Breakdown
┌──────────────────────────── WRITE SIDE ────────────────────────────┐
│ │
│ Client ──→ API Gateway ──→ Command Handler ──→ Domain Model │
│ │ │
│ ▼ │
│ Write Database │
│ (normalized, ACID) │
│ │ │
│ ▼ │
│ Domain Events │
│ published to bus │
└─────────────────────────────────────────────────────────────────────┘
│
┌───────────▼───────────┐
│ Message Bus / │
│ Event Broker │
│ (Kafka, RabbitMQ) │
└───────────┬───────────┘
│
┌───────────────────────────────▼─── READ SIDE ──────────────────────┐
│ │
│ Event Handler / Projector ──→ Read Database(s) │
│ (denormalized, pre-joined, │
│ Elasticsearch, Redis, etc.) │
│ │ │
│ ▼ │
│ Query API ──→ Client │
└─────────────────────────────────────────────────────────────────────┘
The Command Flow in Detail
// 1. Client sends a command
POST /api/orders
{
"type": "PlaceOrder",
"customerId": "cust-42",
"items": [
{ "productId": "prod-101", "quantity": 2, "price": 29.99 },
{ "productId": "prod-205", "quantity": 1, "price": 49.99 }
],
"shippingAddress": { "city": "Seattle", "zip": "98101" }
}
// 2. Command handler validates & delegates to domain
class PlaceOrderHandler {
async handle(cmd: PlaceOrder): Promise<void> {
// Load aggregate from write store
const customer = await this.customerRepo.load(cmd.customerId);
const inventory = await this.inventoryRepo.checkAvailability(cmd.items);
// Domain validation
if (!customer.isActive) throw new BusinessRuleViolation("Inactive customer");
if (!inventory.allAvailable) throw new BusinessRuleViolation("Out of stock");
if (customer.outstandingBalance > customer.creditLimit)
throw new BusinessRuleViolation("Credit limit exceeded");
// Create order aggregate — this produces domain events
const order = Order.create({
customerId: cmd.customerId,
items: cmd.items,
shippingAddress: cmd.shippingAddress,
});
// Persist, then publish. Note: two separate awaits are NOT atomic —
// production systems use a transactional outbox (store events and the
// "to publish" record in one transaction, relay to the bus afterwards).
await this.orderRepo.save(order); // writes to DB
await this.eventBus.publish(order.events); // OrderPlaced, InventoryReserved
}
}
// 3. Domain events produced
{
"type": "OrderPlaced",
"orderId": "ord-9001",
"customerId": "cust-42",
"items": [...],
"totalAmount": 109.97,
"timestamp": "2026-04-15T10:23:45Z",
"version": 1
}
The Query Flow in Detail
// Query reads from the optimized read model — no joins, no domain logic
GET /api/orders/ord-9001
// Read model (stored in Elasticsearch or Redis)
{
"orderId": "ord-9001",
"customerName": "Alice Johnson", // pre-joined from customer data
"customerEmail": "alice@example.com", // pre-joined
"status": "CONFIRMED",
"items": [
{
"productName": "Wireless Mouse", // pre-joined from product catalog
"quantity": 2,
"unitPrice": 29.99,
"lineTotal": 59.98
},
{
"productName": "USB-C Hub",
"quantity": 1,
"unitPrice": 49.99,
"lineTotal": 49.99
}
],
"totalAmount": 109.97,
"shippingAddress": "Seattle, WA 98101",
"estimatedDelivery": "2026-04-20", // pre-computed
"placedAt": "2026-04-15T10:23:45Z"
}
// The read model is FLAT — no need to join orders + customers + products.
// It's shaped exactly for the UI. One query, one response.
The Projector — Building Read Models
A projector (also called an event handler or denormalizer) listens to domain events and updates the read model:
class OrderProjector {
// Called when OrderPlaced event is received from the bus
async onOrderPlaced(event: OrderPlacedEvent): Promise<void> {
const customer = await this.customerReadStore.get(event.customerId);
const products = await this.productReadStore.getMany(
event.items.map(i => i.productId)
);
const readModel = {
orderId: event.orderId,
customerName: customer.name,
customerEmail: customer.email,
status: "CONFIRMED",
items: event.items.map(item => ({
productName: products[item.productId].name,
quantity: item.quantity,
unitPrice: item.price,
lineTotal: item.quantity * item.price,
})),
totalAmount: event.totalAmount,
shippingAddress: formatAddress(event.shippingAddress),
estimatedDelivery: computeDeliveryDate(event.shippingAddress),
placedAt: event.timestamp,
};
// Upsert into read store (Elasticsearch, DynamoDB, Redis, etc.)
await this.readStore.upsert(`order:${event.orderId}`, readModel);
}
async onOrderShipped(event: OrderShippedEvent): Promise<void> {
await this.readStore.update(`order:${event.orderId}`, {
status: "SHIPPED",
trackingNumber: event.trackingNumber,
shippedAt: event.timestamp,
});
}
async onOrderCancelled(event: OrderCancelledEvent): Promise<void> {
await this.readStore.update(`order:${event.orderId}`, {
status: "CANCELLED",
cancelledAt: event.timestamp,
cancellationReason: event.reason,
});
}
}
Event Sourcing — Store Events, Not State
Instead of storing the current state of an entity, Event Sourcing stores every state change as an immutable event. The current state is derived by replaying all events from the beginning (or from a snapshot).
CRUD vs Event Sourcing
| Aspect | CRUD | Event Sourcing |
|---|---|---|
| What's stored | Current state (latest row) | Sequence of all events |
| UPDATE operation | Overwrites previous value | Appends new event (old events untouched) |
| History | Lost unless you add audit tables | Complete history by default |
| Current state | Direct read from DB | Replay events (or read snapshot) |
| Storage cost | O(entities) | O(events) — grows over time |
| Debugging | "What happened?" — unknown | Replay the exact sequence |
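The table's key contrast — overwrite vs append — fits in a few lines (an illustrative sketch, not production code):

```typescript
// CRUD: an UPDATE overwrites — the previous value is gone.
let crudBalance = 0;
crudBalance = 100; // deposit 100
crudBalance = 70;  // withdraw 30 — the intermediate 100 is no longer recoverable

// Event sourcing: every change is an appended, immutable event.
type AccountEvent = { type: "Deposited" | "Withdrawn"; amount: number };
const events: AccountEvent[] = [];
events.push({ type: "Deposited", amount: 100 });
events.push({ type: "Withdrawn", amount: 30 });

// Current state is derived: a left fold over the event sequence.
const esBalance = events.reduce(
  (bal, e) => (e.type === "Deposited" ? bal + e.amount : bal - e.amount),
  0
);
// Same final balance as CRUD, but the full history survives in `events`.
```
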
Event Store Schema
An event store is an append-only database optimized for writing and reading event streams. Here is a concrete schema:
-- Core event store table
CREATE TABLE events (
-- Global ordering: monotonically increasing (note: sequences can skip
-- values on rollback, so consumers must not assume gap-free numbering)
global_position BIGSERIAL PRIMARY KEY,
-- Stream identification (e.g., "BankAccount-acct-42")
stream_id VARCHAR(255) NOT NULL,
-- Position within the stream (0, 1, 2, ...)
stream_position INTEGER NOT NULL,
-- Event payload
event_type VARCHAR(255) NOT NULL, -- e.g., "MoneyDeposited"
data JSONB NOT NULL, -- event payload
metadata JSONB DEFAULT '{}', -- correlation ID, causation ID, user
-- Timestamps
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
-- Optimistic concurrency: unique per stream
UNIQUE (stream_id, stream_position)
);
-- No extra indexes needed for the two main read paths: the UNIQUE
-- (stream_id, stream_position) constraint already backs an index for reading
-- a stream in order, and the PRIMARY KEY covers global ordering for
-- projections that process all events.
-- Snapshot table for performance
CREATE TABLE snapshots (
stream_id VARCHAR(255) PRIMARY KEY,
stream_position INTEGER NOT NULL, -- event position of snapshot
data JSONB NOT NULL, -- serialized aggregate state
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Subscriptions / checkpoints for projectors
CREATE TABLE projector_checkpoints (
projector_name VARCHAR(255) PRIMARY KEY,
last_position BIGINT NOT NULL DEFAULT 0, -- global_position last processed
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
Appending Events (Write Path)
-- Appending an event with optimistic concurrency control
-- The UNIQUE constraint on (stream_id, stream_position) prevents conflicts.
-- If two writers try to append at the same position, one fails → retry.
INSERT INTO events (stream_id, stream_position, event_type, data, metadata)
VALUES (
'BankAccount-acct-42', -- stream_id
3, -- expected next position
'MoneyDeposited', -- event type
'{"amount": 100.00, "currency": "USD", "description": "Payroll"}',
'{"correlationId": "tx-7891", "userId": "user-5", "timestamp": "2026-04-15T10:30:00Z"}'
);
-- If stream_position 3 already exists → UNIQUE violation → concurrency conflict!
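The concurrency behavior can be simulated in memory — here a Map key stands in for the UNIQUE (stream_id, stream_position) constraint; all names are illustrative:

```typescript
// In-memory simulation of optimistic-concurrency append.
type StoredEvent = { streamId: string; position: number; type: string; data: unknown };

const store = new Map<string, StoredEvent>(); // key plays the role of the UNIQUE constraint

function appendEvent(streamId: string, expectedPosition: number, type: string, data: unknown): void {
  const key = `${streamId}:${expectedPosition}`;
  if (store.has(key)) {
    // Equivalent to a UNIQUE violation: another writer appended at this
    // position first — the caller should reload the stream and retry.
    throw new Error("ConcurrencyConflict");
  }
  store.set(key, { streamId, position: expectedPosition, type, data });
}

appendEvent("BankAccount-acct-42", 3, "MoneyDeposited", { amount: 100 });

let conflict = false;
try {
  // A second writer attempting the same position fails.
  appendEvent("BankAccount-acct-42", 3, "MoneyWithdrawn", { amount: 50 });
} catch {
  conflict = true;
}
```
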
Loading an Aggregate (Replay)
class BankAccount {
id: string;
balance: number = 0;
status: string = "UNKNOWN";
version: number = -1;
// Rebuild state by applying each event in order
static fromEvents(events: DomainEvent[]): BankAccount {
const account = new BankAccount();
for (const event of events) {
account.apply(event);
account.version = event.streamPosition;
}
return account;
}
// Each event type has a specific state transition.
// (Public, so the repository/loader can replay events into the aggregate.)
apply(event: DomainEvent): void {
switch (event.type) {
case "AccountCreated":
this.id = event.data.accountId;
this.balance = 0;
this.status = "ACTIVE";
break;
case "MoneyDeposited":
this.balance += event.data.amount;
break;
case "MoneyWithdrawn":
this.balance -= event.data.amount;
break;
case "AccountClosed":
this.status = "CLOSED";
break;
}
}
}
// Load from event store
async function loadAccount(accountId: string): Promise<BankAccount> {
const streamId = `BankAccount-${accountId}`;
// Check for snapshot first
const snapshot = await db.query(
`SELECT * FROM snapshots WHERE stream_id = $1`, [streamId]
);
let rows: any[];
let account: BankAccount;
if (snapshot.rows.length > 0) {
// Resume from snapshot
account = BankAccount.fromSnapshot(snapshot.rows[0].data);
account.version = snapshot.rows[0].stream_position;
// Load only events AFTER the snapshot
const result = await db.query(
`SELECT * FROM events
WHERE stream_id = $1 AND stream_position > $2
ORDER BY stream_position`,
[streamId, snapshot.rows[0].stream_position]
);
rows = result.rows;
} else {
// No snapshot — replay all events
const result = await db.query(
`SELECT * FROM events
WHERE stream_id = $1
ORDER BY stream_position`,
[streamId]
);
rows = result.rows;
account = new BankAccount();
}
// Map DB rows into the DomainEvent shape that apply() expects
for (const row of rows) {
account.apply({ type: row.event_type, data: row.data, streamPosition: row.stream_position });
account.version = row.stream_position;
}
return account;
}
Snapshots for Performance
Replaying thousands of events for every command is expensive. Snapshots periodically capture the aggregate state:
// Take a snapshot every N events
async function saveWithSnapshot(
account: BankAccount,
newEvents: DomainEvent[],
snapshotInterval: number = 100
): Promise<void> {
const streamId = `BankAccount-${account.id}`;
// Append new events
for (const event of newEvents) {
await appendEvent(streamId, event);
}
// Snapshot when the version crosses the interval
// (assumes account.version already reflects the newly appended events)
if (account.version % snapshotInterval === 0) {
await db.query(
`INSERT INTO snapshots (stream_id, stream_position, data)
VALUES ($1, $2, $3)
ON CONFLICT (stream_id) DO UPDATE SET
stream_position = $2, data = $3, created_at = NOW()`,
[streamId, account.version, JSON.stringify(account.toSnapshot())]
);
}
}
// Snapshot payload
{
"accountId": "acct-42",
"balance": 5320.50,
"status": "ACTIVE",
"snapshotPosition": 200, // event # this snapshot represents
"takenAt": "2026-04-15T12:00:00Z"
}
// Without snapshot: replay events 0..250 (~250 event applications)
// With snapshot: load snapshot at 200, replay events 201..250 (~50 applications)
Common snapshotting strategies:
- Every N events — simplest; snapshot at positions 100, 200, 300…
- Time-based — snapshot if aggregate hasn't been snapshotted in the last hour
- On read — when loading exceeds a threshold number of events, create a snapshot for next time
- Background process — periodically scan for aggregates that need new snapshots
CQRS + Event Sourcing Together
CQRS and Event Sourcing are independent patterns, but they are natural partners:
┌─── COMMAND SIDE ───────────────────────────────────────────────────┐
│ │
│ Command ──→ Command Handler ──→ Load Aggregate (replay events) │
│ │ │
│ Domain Logic │
│ │ │
│ New Events ──→ Event Store │
│ (append-only) │
└────────────────────────────────────────┬────────────────────────────┘
│
Events published to bus
│
┌────────────────────────────────────────▼──── QUERY SIDE ───────────┐
│ │
│ Projector A ──→ Elasticsearch (full-text search) │
│ Projector B ──→ Redis (dashboard counters) │
│ Projector C ──→ PostgreSQL (relational queries) │
│ Projector D ──→ Neo4j (graph queries) │
│ │
│ Query API reads from the appropriate read store │
└─────────────────────────────────────────────────────────────────────┘
The event store serves as both the write-side persistence and the source for read-side projections. Multiple projectors can consume the same event stream and build completely different read models — one for search, one for analytics, one for the dashboard.
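A minimal runnable sketch of that fan-out (event and store names are illustrative): one event stream feeding two projectors that build completely different read models.

```typescript
// One stream, two projectors, two differently shaped read models.
type OrderEvent = { type: string; orderId: string; amount?: number };

const stream: OrderEvent[] = [
  { type: "OrderPlaced", orderId: "ord-1", amount: 100 },
  { type: "OrderPlaced", orderId: "ord-2", amount: 250 },
  { type: "OrderCancelled", orderId: "ord-2" },
];

// Projector A: a dashboard counter (would live in Redis).
let openOrders = 0;
// Projector B: a searchable order index (would live in Elasticsearch).
const orderIndex = new Map<string, { status: string; amount: number }>();

for (const e of stream) {
  if (e.type === "OrderPlaced") {
    openOrders++;
    orderIndex.set(e.orderId, { status: "OPEN", amount: e.amount ?? 0 });
  } else if (e.type === "OrderCancelled") {
    openOrders--;
    const order = orderIndex.get(e.orderId);
    if (order) order.status = "CANCELLED";
  }
}
// Both projections were built from the same events, shaped for their consumers.
```
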
Full End-to-End Example: Bank Transfer
// 1. COMMAND: TransferMoney
{
"type": "TransferMoney",
"fromAccountId": "acct-42",
"toAccountId": "acct-99",
"amount": 500.00,
"description": "Rent payment"
}
// 2. COMMAND HANDLER
class TransferMoneyHandler {
async handle(cmd: TransferMoney): Promise<void> {
// Load both aggregates by replaying their events
const fromAccount = await this.repo.load(cmd.fromAccountId);
const toAccount = await this.repo.load(cmd.toAccountId);
// Domain validation
fromAccount.withdraw(cmd.amount, cmd.description); // throws if insufficient funds
toAccount.deposit(cmd.amount, `Transfer from ${cmd.fromAccountId}`);
// Save both — each appends events to their stream
await this.repo.save(fromAccount); // MoneyWithdrawn event
await this.repo.save(toAccount); // MoneyDeposited event
// Publish: TransferCompleted event for saga/process manager
}
}
// 3. EVENTS STORED (immutable, append-only)
// Stream: BankAccount-acct-42
{ "type": "MoneyWithdrawn", "amount": 500.00, "description": "Rent payment",
"balance_after": 4820.50, "stream_position": 203 }
// Stream: BankAccount-acct-99
{ "type": "MoneyDeposited", "amount": 500.00, "description": "Transfer from acct-42",
"balance_after": 12500.00, "stream_position": 87 }
// 4. PROJECTORS update read models asynchronously
// → Account balance dashboard (Redis)
// → Transaction history (Elasticsearch)
// → Monthly statement (PostgreSQL)
// → Fraud detection (Kafka Streams)
// 5. QUERY: Get account summary
GET /api/accounts/acct-42/summary
{
"accountId": "acct-42",
"balance": 4820.50,
"recentTransactions": [
{ "date": "2026-04-15", "desc": "Rent payment", "amount": -500.00 },
{ "date": "2026-04-14", "desc": "Payroll", "amount": 3200.00 },
...
]
}
Temporal Queries & Audit Trails
One of the most powerful benefits of Event Sourcing: you can answer "What was the state at any point in time?"
// Time-travel query: What was the balance on March 1st?
async function getBalanceAt(accountId: string, asOf: Date): Promise<number> {
const streamId = `BankAccount-${accountId}`;
// Load events up to (but not after) the given timestamp
const result = await db.query(
`SELECT * FROM events
WHERE stream_id = $1 AND created_at <= $2
ORDER BY stream_position`,
[streamId, asOf]
);
// Map DB rows into the DomainEvent shape fromEvents() expects
const events = result.rows.map(r => ({
type: r.event_type, data: r.data, streamPosition: r.stream_position
}));
const account = BankAccount.fromEvents(events);
return account.balance;
}
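The same idea works without a database — a runnable sketch (illustrative names) that filters an in-memory event list by timestamp and folds to the historical state:

```typescript
// Time-travel without a DB: filter by timestamp, then fold.
type TxEvent = { type: "Deposited" | "Withdrawn"; amount: number; at: string };

const history: TxEvent[] = [
  { type: "Deposited", amount: 1000, at: "2025-01-15T00:00:00Z" },
  { type: "Deposited", amount: 3200, at: "2025-02-01T00:00:00Z" },
  { type: "Withdrawn", amount: 500, at: "2025-02-05T00:00:00Z" },
];

function balanceAt(events: TxEvent[], asOf: string): number {
  return events
    .filter(e => e.at <= asOf) // ISO-8601 timestamps compare correctly as strings
    .reduce((b, e) => (e.type === "Deposited" ? b + e.amount : b - e.amount), 0);
}
// balanceAt(history, "2025-01-31T23:59:59Z") === 1000
// balanceAt(history, "2025-03-01T00:00:00Z") === 3700
```
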
// Audit trail: Who did what, when?
SELECT
event_type,
data->>'amount' AS amount,
data->>'description' AS description,
metadata->>'userId' AS performed_by,
metadata->>'correlationId' AS transaction_id,
created_at
FROM events
WHERE stream_id = 'BankAccount-acct-42'
ORDER BY stream_position;
-- Result:
-- AccountCreated | null | null | system | tx-001 | 2025-01-15
-- MoneyDeposited | 1000 | Initial deposit | user-5 | tx-002 | 2025-01-15
-- MoneyDeposited | 3200 | Payroll | system | tx-100 | 2025-02-01
-- MoneyWithdrawn | 500 | Rent payment | user-5 | tx-101 | 2025-02-05
-- ...every single change is recorded forever
Debugging by Replaying
When a bug produces incorrect state, you can replay the exact sequence of events to reproduce it:
// Replay to debug: "Why does account acct-42 have a negative balance?"
// (assumes loadAllEvents returns events in the DomainEvent shape apply() expects)
const events = await loadAllEvents("BankAccount-acct-42");
const account = new BankAccount();
for (const event of events) {
console.log(`Event #${event.streamPosition}: ${event.type}`);
console.log(` Before: balance=${account.balance}`);
account.apply(event);
console.log(` After: balance=${account.balance}`);
if (account.balance < 0) {
console.error(`BUG FOUND at event #${event.streamPosition}!`);
console.error(`Event data:`, event.data);
console.error(`Metadata:`, event.metadata);
break;
}
}
// Output:
// Event #201: MoneyDeposited
// Before: balance=200.00
// After: balance=3400.00
// Event #202: MoneyWithdrawn
// Before: balance=3400.00
// After: balance=2900.00
// Event #203: MoneyWithdrawn ← BUG: concurrent withdrawal not caught
// Before: balance=2900.00
// After: balance=-100.00
// BUG FOUND at event #203!
Eventual Consistency & Its Challenges
In CQRS, the read model is eventually consistent with the write model. After a command succeeds, there is a propagation delay before the read model reflects the change.
Consistency Timeline
Time ──────────────────────────────────────────────────────────────▶
T0: Client sends "Deposit $100" command
T1: Write model processes command, appends event [~5ms]
T2: Event published to message bus [~10ms]
T3: Projector receives event [~50ms]
T4: Projector updates read model [~20ms]
T5: Read model reflects new balance [total ~85ms]
Problem: If client queries at T2 (before T5), they see STALE data.
The deposit succeeded, but the balance hasn't updated yet.
Strategies for Handling Eventual Consistency
| Strategy | How It Works | Trade-off |
|---|---|---|
| Read-your-writes | After a command, route that user's next read to the write model (or wait for projection) | Adds complexity to routing |
| Optimistic UI | Client updates UI immediately on command success, before read model confirms | May need rollback if command fails |
| Polling / SSE | Client polls or subscribes to updates, refreshes when projection is ready | Slight delay visible to user |
| Version-based | Command returns version number; client includes it in query; query waits until read model catches up | Increases query latency |
| Synchronous projection | Update read model in the same transaction as the write (defeats purpose of CQRS for scaling) | Strong consistency but no independent scaling |
// Version-based read-your-writes example
// Command response includes the version
POST /api/accounts/acct-42/deposit → { "version": 204 }
// Client passes version to query
GET /api/accounts/acct-42/summary?after_version=204
// Server waits (with timeout) for read model to reach version 204
async function queryWithConsistency(accountId, minVersion, timeout = 5000) {
const start = Date.now();
while (Date.now() - start < timeout) {
const readModel = await readStore.get(`account:${accountId}`);
if (readModel && readModel.version >= minVersion) return readModel; // may not exist yet
await sleep(50); // poll every 50ms
}
throw new Error("Read model consistency timeout");
}
Challenges & Complexity
Event Versioning & Schema Evolution
Events are stored forever. But your domain evolves. How do you handle schema changes?
// Version 1 — original event
{ "type": "CustomerRegistered", "version": 1,
"data": { "name": "Alice Johnson", "email": "alice@example.com" } }
// Version 2 — name split into first/last
{ "type": "CustomerRegistered", "version": 2,
"data": { "firstName": "Bob", "lastName": "Smith", "email": "bob@example.com" } }
// Upcaster: transforms v1 events to v2 format on read
class CustomerRegisteredUpcaster {
canUpcast(event) { return event.type === "CustomerRegistered" && event.version === 1; }
upcast(event) {
const [firstName, ...rest] = event.data.name.split(" ");
return {
...event,
version: 2,
data: {
firstName,
lastName: rest.join(" ") || "",
email: event.data.email,
},
};
}
}
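A runnable, standalone version of the same upcasting logic (written as a plain function rather than a class, for brevity):

```typescript
// Upcast a v1 CustomerRegistered event to the v2 shape on read.
type EventEnvelope = { type: string; version: number; data: Record<string, any> };

function upcastCustomerRegistered(event: EventEnvelope): EventEnvelope {
  // Pass through anything that doesn't need upcasting.
  if (event.type !== "CustomerRegistered" || event.version !== 1) return event;
  const [firstName, ...rest] = (event.data.name as string).split(" ");
  return {
    ...event,
    version: 2,
    data: { firstName, lastName: rest.join(" ") || "", email: event.data.email },
  };
}

const v1: EventEnvelope = {
  type: "CustomerRegistered",
  version: 1,
  data: { name: "Alice Johnson", email: "alice@example.com" },
};

const upcasted = upcastCustomerRegistered(v1);
// The stored v1 event is untouched; the domain only ever sees v2.
```
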
Idempotency
Projectors must be idempotent — processing the same event twice must produce the same result:
// BAD: not idempotent — reprocessing doubles the balance
async onMoneyDeposited(event) {
await db.query(`UPDATE accounts SET balance = balance + $1 WHERE id = $2`,
[event.data.amount, event.data.accountId]);
}
// GOOD: idempotent — uses event position as dedup key
async onMoneyDeposited(event) {
await db.query(
`UPDATE accounts
SET balance = balance + $1, last_event_position = $2
WHERE id = $3 AND last_event_position < $2`,
[event.data.amount, event.globalPosition, event.data.accountId]
);
}
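The position-guard pattern can be shown with an in-memory read store (illustrative names): the second delivery of the same event is a no-op.

```typescript
// Idempotent projection via a last-processed-position guard.
type Account = { balance: number; lastEventPosition: number };

const accounts = new Map<string, Account>([
  ["acct-42", { balance: 0, lastEventPosition: 0 }],
]);

function onMoneyDeposited(accountId: string, amount: number, globalPosition: number): void {
  const acct = accounts.get(accountId);
  if (!acct) return;
  // Already applied this event (or a later one)? Skip the redelivery.
  if (acct.lastEventPosition >= globalPosition) return;
  acct.balance += amount;
  acct.lastEventPosition = globalPosition;
}

onMoneyDeposited("acct-42", 100, 7); // first delivery: applied
onMoneyDeposited("acct-42", 100, 7); // redelivery: ignored — balance unchanged
```
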
Other Challenges
- Learning curve: Developers used to CRUD must think in events, commands, and projections — fundamentally different mental model.
- Tooling maturity: Fewer ORMs, fewer framework options compared to CRUD.
- Event store size: Events grow forever. Requires archival strategy, compaction, or cold storage for old events.
- Rebuilding projections: If a projector has a bug, you may need to replay millions of events to fix the read model. This can take hours.
- Distributed transactions: Cross-aggregate operations require sagas or process managers, which add complexity.
When to Use CQRS & Event Sourcing
Strong Fit ✓
| Domain | Why It Fits | Real-World Example |
|---|---|---|
| Financial systems | Regulatory requirement for complete audit trail; no data can ever be deleted | Banking ledgers, trading platforms |
| Audit-heavy domains | Need to know who changed what, when, and why — compliance, healthcare | Electronic health records, insurance claims |
| Collaborative editing | Changes must be mergeable; conflict resolution via event ordering | Google Docs, Figma, Notion |
| Complex domain logic | Domain models with intricate invariants benefit from explicit state transitions | Booking systems, supply chain |
| High read/write asymmetry | 10,000× more reads than writes — scale them independently | Social media feeds, e-commerce catalogs |
Poor Fit ✗
- Simple CRUD apps — a blog, a to-do list, a basic admin panel. The overhead of separate read/write models is not justified.
- High-throughput, low-latency systems where eventual consistency is unacceptable (e.g., real-time game state sync).
- Greenfield with a small team — the learning curve and infrastructure overhead can kill velocity. Start with CRUD, evolve to CQRS when needed.
- Domains without history requirements — if you never need "what happened before," event sourcing adds cost without benefit.
Real-World Architecture
Technology Stack Options
| Component | Options | Notes |
|---|---|---|
| Event Store | EventStoreDB, Marten (PostgreSQL), custom on PostgreSQL/DynamoDB | EventStoreDB is purpose-built; PostgreSQL works well for most scales |
| Message Bus | Kafka, RabbitMQ, Amazon SNS/SQS, NATS | Kafka for high throughput + replay; RabbitMQ for simpler setups |
| Read Store (search) | Elasticsearch, OpenSearch, Algolia | Full-text search, faceted queries |
| Read Store (fast) | Redis, DynamoDB, Memcached | Sub-millisecond reads for dashboards, counters |
| Read Store (relational) | PostgreSQL, MySQL (denormalized tables) | Complex relational queries on denormalized data |
| Framework | Axon (Java), Marten (.NET), Commanded (Elixir), EventSauce (PHP) | Purpose-built CQRS/ES frameworks handle boilerplate |
Production Architecture Diagram
┌──────────────┐
│ API │
│ Gateway │
└─────┬────────┘
│
┌────────────┴────────────┐
▼ ▼
┌──────────────┐ ┌──────────────┐
│ Command API │ │ Query API │
│ (writes) │ │ (reads) │
└──────┬───────┘ └──────▲───────┘
│ │
▼ │
┌──────────────┐ ┌──────┴───────┐
│ Command │ │ Read Store │
│ Handlers │ │ (ES + Redis │
└──────┬───────┘ │ + Postgres) │
│ └──────▲───────┘
▼ │
┌──────────────┐ ┌──────┴───────┐
│ Event Store │────────▶ │ Projectors │
│ (PG/ESDB) │ Kafka │ (consumers) │
└──────────────┘ └──────────────┘
│
▼
┌──────────────┐
│ Snapshots │
└──────────────┘
Interview Checklist
- ☐ Explain CQRS — separate models for reads and writes. Commands change state, queries don't.
- ☐ Explain Event Sourcing — store events, not state. Rebuild by replaying.
- ☐ Draw the architecture — write API → command handler → event store → projector → read store → query API.
- ☐ Discuss consistency — eventual consistency between write and read sides. Strategies to handle it.
- ☐ Mention snapshots — optimization for aggregates with many events.
- ☐ When to use — financial, audit, collaborative. Not for simple CRUD.
- ☐ Challenges — complexity, event versioning, projection rebuilds, learning curve.
- ☐ Relate to other patterns — Event-Driven Architecture, DDD Aggregates, Sagas.
Summary
- CQRS separates reads and writes into independent models that can scale, evolve, and optimize independently.
- Event Sourcing stores every state change as an immutable event, providing a complete audit trail, temporal queries, and the ability to debug by replaying.
- Together, they enable powerful architectures: multiple read stores from a single event stream, each shaped for its consumer.
- Snapshots solve the performance problem of replaying thousands of events per request.
- Eventual consistency is the main trade-off — use read-your-writes, optimistic UI, or version-based strategies to handle it.
- Not for everything — the complexity is justified only for domains that genuinely need audit trails, temporal queries, or extreme read/write asymmetry.