Source Epochs

Source epochs solve the problem of phantom assertions: stale entity state that lingers on the server after a writer reconnects with a new snapshot.

Consider a game server (source 1) that asserts 10,000 player entities into ConvergeDB. The game server restarts and reconnects. In its new state, only 9,500 of those players are present. The other 500 have logged off.

Without epochs, the game server re-asserts its 9,500 players but never retracts the 500 that are gone. Those 500 entities remain alive in ConvergeDB with source 1’s bit set, even though source 1 no longer knows about them. This is a phantom assertion.

Over time, phantom assertions accumulate and corrupt the source bitset state of the database.

The epoch protocol lets the server compute the diff automatically:

  1. EpochBeginAsync(): The server snapshots the set of entities currently asserted by this source. This is the baseline.

  2. Re-assert the new snapshot: The writer sends ASSERT (or PATCH) for every entity in its new state. The server tracks which entities from the baseline have been re-asserted. This is the re-asserted set.

  3. EpochEndAsync(): The server computes stale = baseline - re-asserted and automatically synthesises a RETRACT for each stale entity.

// After reconnecting to the upstream data source
await convergenceClient.EpochAsync(async () =>
{
    await using var batch = players.Batch();
    foreach (var entity in newSnapshot)
        await batch.AssertAsync(entity);
});
// Server has now retracted any entities from the old snapshot
// that were not in the new one.
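The three protocol steps reduce to set arithmetic on the server. A minimal Python sketch of that bookkeeping (all names hypothetical; this models the behaviour described above, not the actual implementation):

```python
# Hypothetical model of the server-side epoch diff for one source.

def run_epoch(asserted_by_source: set, new_snapshot: set):
    """Return (re-asserted, stale) sets for one source's epoch."""
    baseline = set(asserted_by_source)   # EpochBeginAsync(): snapshot the baseline
    reasserted = set()
    for entity_id in new_snapshot:       # writer re-asserts its new state
        reasserted.add(entity_id)
    stale = baseline - reasserted        # EpochEndAsync(): stale = baseline - re-asserted
    return reasserted, stale

kept, stale = run_epoch({"p1", "p2", "p3"}, {"p1", "p3"})
# stale == {"p2"}: the server synthesises a RETRACT for p2
```

The writer never has to remember the old snapshot; the server owns the baseline, so the diff is correct even if the writer's previous state is unrecoverable after a restart.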

If you need more control over error handling, use the Begin/End methods directly:

await convergenceClient.EpochBeginAsync();
try
{
    await using var batch = players.Batch();
    foreach (var entity in newSnapshot)
        await batch.AssertAsync(entity);
    await convergenceClient.EpochEndAsync();
}
catch
{
    // If re-assertion fails, the epoch is abandoned.
    // It will be discarded on disconnect or next EpochBeginAsync().
    throw;
}

When EpochEndAsync() computes the stale set, it synthesises RETRACT operations for each stale entity. These retractions follow normal semantics:

  • Source bit cleared. Only this source’s bit is cleared from each stale entity’s source set.
  • Tombstoning. If no other source bits remain, the entity is tombstoned.
  • Coalescing. Stale retractions are subject to the normal coalescing window. They are batched with other pending writes.
  • Deduplication. If a stale entity was already retracted by another path (for example, through normal RETRACT calls before the epoch), the synthesised retract is a no-op.
  • Subscriber notifications. Subscribers see normal Deleted notifications for entities that become tombstoned, and no notification for entities that remain alive (because other sources still assert them).
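The bit-clearing, tombstoning, and deduplication rules can be sketched on a source bitmask. A hypothetical Python model (the real representation is not specified here):

```python
# Hypothetical model: one entity's asserting sources as a bitmask.

def retract_source(source_bits: int, source_id: int):
    """Clear one source's bit; return (new_bits, tombstoned)."""
    new_bits = source_bits & ~(1 << source_id)
    if new_bits == source_bits:
        return source_bits, False        # dedup: bit already clear, no-op
    return new_bits, new_bits == 0       # tombstone only if no bits remain

# Entity asserted by sources 1 and 2 (bits 0b110):
bits, tomb = retract_source(0b110, 1)    # source 1's bit cleared; still alive
bits, tomb = retract_source(bits, 2)     # last bit cleared; entity tombstoned
```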

Epoch retractions only affect the calling source’s bits. If source 1 runs an epoch and entity X was asserted by both source 1 and source 2, and source 1 does not re-assert entity X during the epoch:

  • Source 1’s bit is cleared from entity X.
  • Source 2’s bit remains set.
  • Entity X stays alive.
  • No subscriber notification is sent (the entity’s field data is unchanged, and it is still alive).

This is the correct behaviour: the epoch cleans up source 1’s stale state without interfering with source 2’s independent assertions.
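The entity-X scenario can be traced with a small sketch (a hypothetical model using plain sets of source IDs):

```python
# Hypothetical model of EpochEndAsync() for one entity and one calling source.

def end_epoch(sources: set, epoch_source: int, reasserted: bool):
    """Return (sources, alive, deleted_notification) after the epoch."""
    if not reasserted:
        sources = sources - {epoch_source}   # clear only the calling source
    alive = bool(sources)
    deleted_notification = not alive         # Deleted fires only on tombstone
    return sources, alive, deleted_notification

# Entity X asserted by sources 1 and 2; source 1 runs an epoch without re-asserting X:
sources, alive, notified = end_epoch({1, 2}, 1, reasserted=False)
# sources == {2}, alive is True, notified is False
```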

Calling EpochBeginAsync() followed immediately by EpochEndAsync() with no asserts in between retracts all of this source’s entities. This is equivalent to “clear everything for this source” and is useful for clean shutdown:

// Clean shutdown: retract everything this source has asserted
await convergenceClient.EpochAsync(async () => { /* no asserts */ });

If the new snapshot matches the old one exactly, no retractions are synthesised, no version bumps occur, and no subscriber notifications are sent. The epoch is a silent no-op.

If the connection is lost while an epoch is active, the epoch is discarded. The server falls back to normal liveness deadline processing: after the deadline expires, entities with only this source’s bit are tombstoned.
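The fallback behaviour amounts to a filter over this source's solely-owned entities once the deadline passes. A hypothetical sketch (names and representation invented for illustration):

```python
# Hypothetical model: liveness-deadline cleanup after a discarded epoch.
# entities maps entity id -> set of asserting source IDs.

def expire_after_deadline(entities: dict, source_id: int) -> dict:
    """Tombstone entities asserted only by the disconnected source."""
    return {eid: srcs for eid, srcs in entities.items()
            if srcs != {source_id}}

entities = {"p1": {1}, "p2": {1, 2}, "p3": {2}}
remaining = expire_after_deadline(entities, 1)
# p1 is tombstoned; p2 and p3 survive via other sources
```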

Each source can have at most one active epoch. Calling EpochBeginAsync() while an epoch is already active throws a ProtocolException with error code 50. Calling EpochEndAsync() without an active epoch throws error code 51.
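The one-epoch-per-source rule is a simple state guard. A Python sketch using the error codes from the text (class and method names are hypothetical):

```python
# Hypothetical model of the per-source epoch guard.

class ProtocolException(Exception):
    def __init__(self, code: int):
        super().__init__(f"protocol error {code}")
        self.code = code

class EpochState:
    def __init__(self):
        self.active = False

    def begin(self):
        if self.active:
            raise ProtocolException(50)  # epoch already active for this source
        self.active = True

    def end(self):
        if not self.active:
            raise ProtocolException(51)  # no active epoch to end
        self.active = False
```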

Epochs suppress event stream recording. Assertions, retractions, and patches made during an active epoch do not produce events for event-stream subscribers.

This is intentional: an epoch seeds “current” state, it does not record “what happened”. Event stream consumers building audit logs, analytics pipelines, or replay buffers should see only real business operations, not bulk seeding noise.

State notifications are not affected. Subscribers using standard subscriptions receive Created, Updated, and Deleted notifications as normal during epochs. The seeded entities are real state and subscribers need to know about them.

The same rule applies to the synthesised retractions at EpochEndAsync(): they produce state notifications (if the entity becomes tombstoned) but do not produce event stream entries. More broadly, all server-synthesised operations (epoch stale retractions and liveness deadline retractions) are excluded from event streams because they are not explicit source operations.

After the epoch completes, subsequent assertions from the source produce events normally.
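The split between the two channels can be sketched as a recorder that checks an epoch flag before writing to the event stream but always emits state notifications (a hypothetical model of the behaviour described above):

```python
# Hypothetical model: epochs suppress event-stream recording,
# but state notifications are unaffected.

class Recorder:
    def __init__(self):
        self.events = []           # event-stream entries (audit/replay consumers)
        self.notifications = []    # state notifications (standard subscriptions)
        self.epoch_active = False

    def record(self, op: str, entity_id: str):
        if not self.epoch_active:                    # suppressed during an epoch
            self.events.append((op, entity_id))
        self.notifications.append((op, entity_id))   # always delivered

r = Recorder()
r.epoch_active = True
r.record("ASSERT", "p1")   # seeded during epoch: notification only
r.epoch_active = False
r.record("ASSERT", "p2")   # after epoch: event and notification
```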

Use epochs whenever a source receives a full-state snapshot that replaces its previous state:

  • Reconnection to an upstream database. The upstream service restarts or recovers from a network partition, and the writer receives a fresh full snapshot.
  • Periodic full sync. An external system sends a complete export on a schedule.
  • Clean shutdown. An empty epoch cleanly retracts all entities before the service goes down.

Do not use epochs for incremental updates. If your writer receives individual entity creates, updates, and deletes from upstream, use normal AssertAsync and RetractAsync calls. Epochs are designed for full-snapshot replacement, not incremental changes.