Tesseras

Phase 2: Memories Survive

2026-02-14

A tessera is no longer tied to a single machine. Phase 2 delivers the replication layer: data is split into erasure-coded fragments, distributed across multiple peers, and automatically repaired when nodes go offline. A bilateral reciprocity ledger ensures fair storage exchange — no blockchain, no tokens.

What was built

tesseras-core (updated) — New replication domain types: FragmentPlan (selects fragmentation tier based on tessera size), FragmentId (tessera hash + index + shard count + checksum), FragmentEnvelope (fragment with its metadata for wire transport), FragmentationTier (Small/Medium/Large), Attestation (proof that a node holds a fragment at a given time), and ReplicateAck (acknowledgement of fragment receipt). Three new port traits define the hexagonal boundaries: DhtPort (find peers, replicate fragments, request attestations, ping), FragmentStore (store/read/delete/list/verify fragments), and ReciprocityLedger (record storage exchanges, query balances, find best peers). Maximum tessera size is 1 GB.
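A minimal sketch of what one of these hexagonal boundaries might look like, with an in-memory adapter behind it. Field and method signatures here are assumptions for illustration — the post names the types and traits but not their exact shapes:

```rust
use std::collections::HashMap;

/// Sketch of FragmentId: tessera hash + index + shard count + checksum.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct FragmentId {
    pub tessera_hash: [u8; 32],
    pub index: u16,
    pub shard_count: u16,
    pub checksum: u32,
}

/// Sketch of the FragmentStore port: how the domain persists fragments,
/// independent of any concrete backend.
pub trait FragmentStore {
    fn store(&mut self, id: &FragmentId, bytes: &[u8]) -> Result<(), String>;
    fn read(&self, id: &FragmentId) -> Result<Vec<u8>, String>;
    fn delete(&mut self, id: &FragmentId) -> Result<(), String>;
    fn list(&self) -> Vec<FragmentId>;
    fn verify(&self, id: &FragmentId) -> bool;
}

/// Hypothetical in-memory adapter, the kind of thing unit tests plug in.
#[derive(Default)]
pub struct MemFragmentStore {
    fragments: HashMap<FragmentId, Vec<u8>>,
}

impl FragmentStore for MemFragmentStore {
    fn store(&mut self, id: &FragmentId, bytes: &[u8]) -> Result<(), String> {
        self.fragments.insert(id.clone(), bytes.to_vec());
        Ok(())
    }
    fn read(&self, id: &FragmentId) -> Result<Vec<u8>, String> {
        self.fragments.get(id).cloned().ok_or_else(|| "missing".into())
    }
    fn delete(&mut self, id: &FragmentId) -> Result<(), String> {
        self.fragments.remove(id).map(|_| ()).ok_or_else(|| "missing".into())
    }
    fn list(&self) -> Vec<FragmentId> {
        self.fragments.keys().cloned().collect()
    }
    fn verify(&self, id: &FragmentId) -> bool {
        self.fragments.contains_key(id)
    }
}
```

The point of the port trait is that the domain logic only ever sees `dyn FragmentStore`; the filesystem and SQLite adapters slot in behind the same boundary.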

tesseras-crypto (updated) — The existing ReedSolomonCoder now powers fragment encoding. Data is split into data shards, parity shards are computed from them, and the original can be reconstructed from any combination of surviving shards — data or parity — as long as the number of missing shards does not exceed the parity count.
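The arithmetic behind that recovery guarantee can be sketched directly (shard counts here are illustrative — the post does not say how many shards each tier uses):

```rust
/// Each data shard holds ceil(len / data_shards) bytes; the last shard
/// is padded to the same size before parity is computed.
fn shard_size(len: usize, data_shards: usize) -> usize {
    (len + data_shards - 1) / data_shards
}

/// Reed-Solomon recovery condition: reconstruction succeeds iff at least
/// `data_shards` of the `data_shards + parity_shards` fragments survive,
/// i.e. no more than `parity_shards` fragments are missing.
fn recoverable(surviving: usize, data_shards: usize, parity_shards: usize) -> bool {
    surviving <= data_shards + parity_shards && surviving >= data_shards
}
```

With an assumed 4-data + 2-parity layout, a 1 MiB tessera yields six 256 KiB fragments; any two peers can vanish and the tessera survives, but losing a third fragment is unrecoverable.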

tesseras-storage (updated) — Two new adapters: FsFragmentStore (a filesystem-backed implementation of FragmentStore) and SqliteReciprocityLedger (a SQLite-backed implementation of ReciprocityLedger).

New migration (002_replication.sql) adds tables for fragments, fragment plans, holders, holder-fragment mappings, and reciprocity balances.
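A plausible shape for that migration, embedded as a Rust constant the way SQL migrations are often shipped (e.g. via include_str!). The table names come from the post; every column below is a guess for illustration only:

```rust
/// Hypothetical sketch of 002_replication.sql — column layout is assumed.
const MIGRATION_002: &str = "
CREATE TABLE fragments (
    tessera_hash  BLOB NOT NULL,
    idx           INTEGER NOT NULL,
    shard_count   INTEGER NOT NULL,
    checksum      INTEGER NOT NULL,
    PRIMARY KEY (tessera_hash, idx)
);
CREATE TABLE fragment_plans (
    tessera_hash  BLOB PRIMARY KEY,
    tier          TEXT NOT NULL  -- Small / Medium / Large
);
CREATE TABLE holders (
    holder_id     BLOB PRIMARY KEY,
    last_seen     INTEGER NOT NULL
);
CREATE TABLE holder_fragments (
    holder_id     BLOB NOT NULL,
    tessera_hash  BLOB NOT NULL,
    idx           INTEGER NOT NULL,
    PRIMARY KEY (holder_id, tessera_hash, idx)
);
CREATE TABLE reciprocity_balances (
    peer_id        BLOB PRIMARY KEY,
    bytes_we_hold  INTEGER NOT NULL DEFAULT 0,
    bytes_they_hold INTEGER NOT NULL DEFAULT 0
);
";
```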

tesseras-dht (updated) — Four new message variants: Replicate (send a fragment envelope), ReplicateAck (confirm receipt), AttestRequest (ask a node to prove it holds a tessera's fragments), and AttestResponse (return attestation with checksums and timestamp). The engine handles these in its message dispatch loop.
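The request/response pairing of the four variants can be sketched as a dispatch step. Only the variant names come from the post; the payload fields and the handler bodies are placeholders:

```rust
/// Sketch of the four new DHT message variants.
#[derive(Debug)]
enum Message {
    Replicate { fragment_id: u32, bytes: Vec<u8> },
    ReplicateAck { fragment_id: u32 },
    AttestRequest { tessera_hash: [u8; 32], nonce: u64 },
    AttestResponse { checksums: Vec<u32>, timestamp: u64 },
}

/// One step of the engine's dispatch loop: handle an inbound message and
/// possibly produce a reply for the peer.
fn dispatch(msg: Message) -> Option<Message> {
    match msg {
        Message::Replicate { fragment_id, bytes } => {
            // Persist the fragment via the FragmentStore port (elided),
            // then confirm receipt.
            let _ = bytes;
            Some(Message::ReplicateAck { fragment_id })
        }
        Message::AttestRequest { tessera_hash, nonce } => {
            // Checksum the locally held fragments for this tessera (elided)
            // and return them with a timestamp.
            let _ = (tessera_hash, nonce);
            Some(Message::AttestResponse { checksums: vec![], timestamp: 0 })
        }
        // Acks and attestations terminate an exchange; no reply is sent.
        Message::ReplicateAck { .. } | Message::AttestResponse { .. } => None,
    }
}
```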

tesseras-replication — The new crate, with five modules:

tesd (updated) — The daemon now opens a SQLite database (db/tesseras.db), runs migrations, creates FsFragmentStore, SqliteReciprocityLedger, and FsBlobStore instances, wraps the DHT engine in a DhtPortAdapter, builds a ReplicationService, and spawns the repair loop as a background task with graceful shutdown.
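The repair-loop-with-graceful-shutdown pattern can be sketched without the async runtime the daemon actually uses; this stdlib version, with an assumed interval and a shutdown channel, only illustrates the shape:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Spawn a background repair loop that runs one repair pass per interval
/// until a shutdown signal arrives, then exits cleanly, returning how many
/// passes it completed. Interval and return value are illustrative.
fn spawn_repair_loop(interval: Duration) -> (mpsc::Sender<()>, thread::JoinHandle<u32>) {
    let (shutdown_tx, shutdown_rx) = mpsc::channel::<()>();
    let handle = thread::spawn(move || {
        let mut passes = 0u32;
        loop {
            // Sleep one interval, but wake immediately on shutdown.
            match shutdown_rx.recv_timeout(interval) {
                Ok(()) | Err(mpsc::RecvTimeoutError::Disconnected) => break,
                Err(mpsc::RecvTimeoutError::Timeout) => {
                    // One repair pass: find under-replicated fragments and
                    // re-replicate them to new holders (elided).
                    passes += 1;
                }
            }
        }
        passes
    });
    (shutdown_tx, handle)
}
```

Dropping or signalling the sender wakes the loop mid-sleep, so shutdown never waits out a full interval — the same property the daemon's async version gets from select-style cancellation.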

Testing — 193 tests across the workspace:

Architecture decisions

What comes next

Nodes can find each other and keep each other's memories alive. Next, we give people a way to hold their memories in their hands.