
# Performance

This page presents benchmark results from `convergence-loadgen` against `convergence-server`. Numbers are updated as new scenarios are tested.

## Test environment

| Component | Detail |
| --- | --- |
| Machine | Apple M1 Pro, 16 GB unified memory |
| OS | macOS (ARM64) |
| Setup | Server and loadgen co-located on the same machine |

All benchmarks use release builds (`--release`).

## Scenario 1: Write throughput (no subscribers)

Pure write workload with no active subscribers. Measures the server's ingress and convergence-engine throughput in isolation from fanout costs.

Configuration:

| Parameter | Value |
| --- | --- |
| Workers | 9 (each a separate source) |
| Entity pool | 10,000 (Player schema) |
| Operation mix | 70% ASSERT / 20% PATCH / 10% RETRACT |
| Rate limit | None (unlimited) |
| Subscribers | 0 |

Results:

| Metric | Value |
| --- | --- |
| Aggregate throughput | 3.7M ops/sec |
| Per-worker average | ~411K ops/sec |
| Per-worker range | 364K–477K ops/sec |
| Total operations | 70.7M |
| Errors | 0 |

The design target is 200,000 ASSERT operations per second per partition on a single core. This benchmark exceeds that target per worker, and does so with a mixed operation workload rather than pure ASSERTs.
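
A quick arithmetic check on the figures above. The worker count, per-worker average, and design target are taken from this page; the headroom comparison is my own framing.

```rust
// Sanity-check the reported numbers against the design target.
// Values are copied from the results table above.
fn main() {
    let workers = 9u64;
    let per_worker_avg = 411_000u64; // ~411K ops/sec per worker
    let design_target = 200_000u64; // ASSERT ops/sec per partition per core

    // 9 × 411K ≈ 3.7M, matching the reported aggregate throughput.
    let aggregate = workers * per_worker_avg;
    println!("aggregate ≈ {:.1}M ops/sec", aggregate as f64 / 1e6);
    assert!((3_600_000..=3_800_000).contains(&aggregate));

    // Each worker clears the per-partition target with roughly 2× headroom.
    let headroom = per_worker_avg as f64 / design_target as f64;
    assert!(headroom > 2.0);
    println!("headroom ≈ {:.2}×", headroom);
}
```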

Caveats: Server and loadgen share CPU and memory. No subscribers means no fanout overhead. No concurrent read workload.

```sh
cargo run --release -p convergence-loadgen -- \
  --server 127.0.0.1:3727 \
  --workers 9 \
  --entities 10000 \
  --schema player
```

## Scenario 2: Fanout (with subscribers)

Not yet measured.

Adds active subscribers to measure the impact of fanout on write throughput and to capture notification latency.

| Parameter | Planned |
| --- | --- |
| Write workload | Same as above |
| Subscribers | 1, 10, 100 |
| Metrics | Write throughput degradation, notification rate |

Design target: notification latency within one coalescing window of the last ASSERT.
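
To illustrate what window-based coalescing means for the target above, here is a minimal, self-contained sketch. The server's actual coalescing implementation is not shown in this document; the `coalesce` function and its types are hypothetical.

```rust
use std::collections::BTreeMap;

/// Coalesce (entity_id, value, t_micros) updates into at most one
/// notification per entity per window. Returns (entity_id, latest_value,
/// notify_t), where notify_t is the end of the window holding the entity's
/// last update — so latency from the last ASSERT is at most one window.
fn coalesce(updates: &[(u64, i64, u64)], window_micros: u64) -> Vec<(u64, i64, u64)> {
    // Keep only the latest value per (window index, entity).
    let mut latest: BTreeMap<(u64, u64), i64> = BTreeMap::new();
    for &(id, value, t) in updates {
        latest.insert((t / window_micros, id), value);
    }
    latest
        .into_iter()
        .map(|((win, id), value)| (id, value, (win + 1) * window_micros))
        .collect()
}

fn main() {
    // Entity 7 updates three times inside one 1000µs window: subscribers
    // see a single notification with the last value, at the window boundary.
    let updates = [(7, 1, 100), (7, 2, 400), (7, 3, 900), (8, 10, 1500)];
    let out = coalesce(&updates, 1000);
    assert_eq!(out, vec![(7, 3, 1000), (8, 10, 2000)]);
    println!("{:?}", out);
}
```

In this model the last ASSERT for entity 7 lands at t=900 and the notification fires at t=1000, i.e. within one coalescing window, which is exactly the stated target.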

```sh
# 9 writers + 10 subscribers
cargo run --release -p convergence-loadgen -- \
  --server 127.0.0.1:3727 \
  --workers 9 \
  --subscribers 10 \
  --entities 10000 \
  --schema player

# 9 writers + 100 subscribers
cargo run --release -p convergence-loadgen -- \
  --server 127.0.0.1:3727 \
  --workers 9 \
  --subscribers 100 \
  --entities 10000 \
  --schema player
```

## Scenario 3: Query latency under write load

Not yet measured.

Measures QUERY response time under varying write load.

| Parameter | Planned |
| --- | --- |
| Background write rate | 0, 100K, 200K ops/sec |
| Reader | Dedicated client issuing point QUERYs |
| Metrics | p50, p99, p999 latency |

Design target: 5 microseconds at p50, 20 microseconds at p99.
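
For reference, a minimal sketch of how p50/p99/p999 can be computed from raw latency samples. This uses the nearest-rank definition; the document does not say how the loadgen aggregates latencies, so treat this as one reasonable choice, not the tool's actual method.

```rust
/// Nearest-rank percentile over a pre-sorted slice of microsecond samples:
/// index = ceil(p/100 × n) − 1, clamped to the valid range.
fn percentile(sorted_micros: &[u64], p: f64) -> u64 {
    assert!(!sorted_micros.is_empty());
    let n = sorted_micros.len() as f64;
    let rank = ((p / 100.0) * n).ceil() as usize;
    sorted_micros[rank.saturating_sub(1).min(sorted_micros.len() - 1)]
}

fn main() {
    // 1000 synthetic samples: 1..=1000 microseconds, already sorted.
    let samples: Vec<u64> = (1..=1000).collect();
    assert_eq!(percentile(&samples, 50.0), 500);
    assert_eq!(percentile(&samples, 99.0), 990);
    assert_eq!(percentile(&samples, 99.9), 999);
    println!("p50=500µs p99=990µs p999=999µs");
}
```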

## Scenario 4: Bootstrap time

Not yet measured.

Measures time for a new subscriber to receive the full initial state of a large entity set.

| Parameter | Planned |
| --- | --- |
| Entity count | 100,000 pre-populated |
| Metrics | Time to receive complete bootstrap |

Design target: under 2 seconds for 100,000 entities.
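
The implied minimum rate for this target is easy to work out. Only the 100,000-entities-in-2-seconds figure comes from this page; the per-entity wire size below is my assumption purely for scale.

```rust
// Implied minimum bootstrap throughput for the target above.
fn main() {
    let entities = 100_000u64;
    let target_secs = 2u64;
    let min_rate = entities / target_secs;
    assert_eq!(min_rate, 50_000); // entities per second
    println!("bootstrap must sustain ≥ {} entities/sec", min_rate);

    // With a hypothetical ~200-byte wire encoding per entity, that is
    // ~10 MB/s — comfortably within loopback bandwidth on this setup.
    let bytes_per_entity = 200u64;
    println!("≈ {} MB/s", min_rate * bytes_per_entity / 1_000_000);
}
```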

```sh
# Step 1: populate 100K entities
cargo run --release -p convergence-loadgen -- \
  --server 127.0.0.1:3727 \
  --workers 4 \
  --entities 100000 \
  --schema player \
  --mix 100/0/0 \
  --duration 10s \
  --quiet

# Step 2: subscribe with bootstrap and measure
cargo run --release -p convergence-loadgen -- \
  --server 127.0.0.1:3727 \
  --subscribers 1 \
  --bootstrap \
  --schema player \
  --duration 30s \
  --quiet
```

## Scenario 5: Write coalescing

Not yet measured.

Measures the ratio of incoming writes to WAL flush writes under hot-entity workloads.

| Parameter | Planned |
| --- | --- |
| Write pattern | Small hot set (10% of entities receiving 90% of writes) |
| Metrics | Incoming ASSERTs per WAL flush write |

Design target: 10:1 or better for entities updating more than 10 times per coalescing window.
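
A back-of-envelope model of why the hot-set workload should clear that ratio. The assumption here (mine, not stated on this page) is that the WAL flushes at most once per entity per coalescing window, so the ratio is incoming writes divided by distinct entities touched in a window; the per-window write volume is also hypothetical.

```rust
use std::collections::HashSet;

fn main() {
    let entities = 10_000u64;
    let hot = entities / 10; // 10% hot set
    let writes_per_window = 50_000u64; // hypothetical per-window write volume

    // 90% of writes land on the hot 10% (round-robin), 10% on the cold 90%.
    let mut touched: HashSet<u64> = HashSet::new();
    for i in 0..writes_per_window {
        let id = if i % 10 < 9 { i % hot } else { hot + i % (entities - hot) };
        touched.insert(id);
    }

    // One WAL flush per touched entity per window in this model.
    let ratio = writes_per_window as f64 / touched.len() as f64;
    println!("distinct entities touched: {}", touched.len());
    println!("coalescing ratio ≈ {:.1}:1", ratio);
    // Hot entities absorb ~45 writes each per window, so the blended
    // ratio clears the 10:1 target comfortably in this toy model.
    assert!(ratio >= 10.0);
}
```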

## Running the benchmarks

```sh
# Start the server
cargo run --release -p convergence-server

# Run the write throughput benchmark (scenario 1)
cargo run --release -p convergence-loadgen -- \
  --server 127.0.0.1:3727 \
  --workers 9 \
  --entities 10000 \
  --schema player

# Run with subscribers (scenario 2)
cargo run --release -p convergence-loadgen -- \
  --server 127.0.0.1:3727 \
  --workers 9 \
  --subscribers 10 \
  --entities 10000 \
  --schema player
```

Run `cargo run -p convergence-loadgen -- --help` for all options, including `--rate`, `--mix`, `--hot-pct`, `--duration`, `--quiet`, `--subscribers`, `--bootstrap`, and `--include-prev`.