Mesh Transfer
MVP Shipped
Mesh Transfer is the content-addressed block exchange system for Nexus OS. It moves CAS (Content-Addressable Storage) blocks between nodes over WebSocket, with BLAKE3 integrity verification, compression negotiation, and DAG synchronization. This is how NexFS data replicates across the mesh.
Architecture
┌─────────────────────────────────────────────────────────┐
│ Mesh Daemon │
│ WebSocket listener (IPv6, AF_INET6) │
│ Peer table (64 entries) │
├─────────────────────────────────────────────────────────┤
│ SbiTransport │
│ WebSocket binary frames → SBI codec → frame dispatch │
│ Capability negotiation (HANDSHAKE) │
├─────────────────────────────────────────────────────────┤
│ MeshTransfer │
│ BLOCK_WANT / BLOCK_PUT / DAG_SYNC handlers │
│ Wired to CasManager │
├─────────────────────────────────────────────────────────┤
│ CasManager │
│ BLAKE3-addressed get/put │
│ NexFS CAS backend (via C-FFI) │
├─────────────────────────────────────────────────────────┤
│ Compression Codec │
│ Zstd / Deflate / Passthrough │
└─────────────────────────────────────────────────────────┘
Each layer has a single responsibility:
- Mesh Daemon – Accepts connections, manages peer lifecycle, discovers peers
- SbiTransport – Frames bytes into SBI messages, dispatches by op code
- MeshTransfer – Business logic: CAS lookups, DAG walks, block verification
- CasManager – Storage backend: content-addressed get/put with BLAKE3 keys
- Compression Codec – Transparent compress/decompress for BLOCK_PUT payloads
No shared mutable state between layers. Each handler is a pure function of its inputs plus CAS state.
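The dispatch path can be sketched as a table of pure handlers keyed by op code. This is an illustrative sketch only: the op-code values and handler names are assumptions, not the actual Nexus OS API.

```python
# Illustrative op codes -- values are assumptions, not the real SBI encoding.
OP_HANDSHAKE, OP_BLOCK_WANT, OP_BLOCK_PUT, OP_DAG_SYNC = 0x01, 0x02, 0x03, 0x04

def dispatch(op, payload, cas):
    """SbiTransport hands each decoded frame to a handler keyed by op code.
    Each handler is a function of (payload, CAS state) only: no shared
    mutable state between layers."""
    handlers = {
        OP_BLOCK_WANT: lambda p: cas.get(p["hash"]),
        OP_BLOCK_PUT:  lambda p: cas.put(p["data"]),
    }
    handler = handlers.get(op)
    if handler is None:
        return ("NACK", "unknown_op")
    return ("ACK", handler(payload))
```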
Block Exchange Flow
The core exchange is simple: one node wants a block, the other has it.
Requester Provider
───────── ────────
│ │
│── HANDSHAKE(caps, features) ────────→│
│←─ HANDSHAKE(caps, features) ─────────│
│ [negotiate compression, verify] │
│ │
│── BLOCK_WANT(blake3_hash, pri) ─────→│
│ │
│ [Provider: CasManager.get(hash)] │
│ [Provider: compress(data, algo)] │
│ │
│←─ BLOCK_PUT(hash, data, comp) ───────│
│ │
│ [Requester: decompress(data)] │
│ [Requester: verify BLAKE3] │
│ [Requester: CasManager.put(data)] │
│ │
│── ACK(seq) ─────────────────────────→│
│ │
Verification
Every received block is verified before storage:
- Decompress the payload (using comp_algo from the BLOCK_PUT spine)
- Compute BLAKE3(decompressed_data)
- Compare against block_hash from the BLOCK_PUT spine
- If mismatch: send NACK("hash_mismatch"), discard the data
- If match: CasManager.put(data) stores the block
No block enters CAS without passing BLAKE3 verification. Content integrity is non-negotiable.
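The receive-side verification path above can be sketched as follows. Python's standard library has no BLAKE3, so blake2b with a 32-byte digest stands in here; the real system computes BLAKE3 via C FFI. The COMP_* constants are illustrative assumptions.

```python
import hashlib
import zlib

COMP_NONE, COMP_DEFLATE = 0x00, 0x02  # illustrative values

def verify_and_store(block_hash, payload, comp_algo, cas_put):
    # 1. Decompress according to the comp_algo field from the BLOCK_PUT spine
    data = zlib.decompress(payload) if comp_algo == COMP_DEFLATE else payload
    # 2. Hash the decompressed bytes (blake2b as a stand-in for BLAKE3-256)
    digest = hashlib.blake2b(data, digest_size=32).digest()
    # 3. Compare against the hash carried in the spine
    if digest != block_hash:
        return ("NACK", "hash_mismatch")  # discard; never store unverified data
    # 4. Only verified content enters CAS
    cas_put(digest, data)
    return ("ACK", digest)
```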
DAG Synchronization
DAG sync determines which blocks a peer is missing without transferring the blocks themselves. It uses Merkle tree comparison:
Node A Node B
────── ──────
│ │
│── DAG_SYNC(root_hash, depth, ───────→│
│ local_node_hashes) │
│ │
│ [Node B: compare against local] │
│ [Node B: compute missing set] │
│ │
│←─ DAG_SYNC(root_hash, depth, ────────│
│ missing_node_hashes) │
│ │
│── BLOCK_PUT(cid_1, data) ───────────→│
│── BLOCK_PUT(cid_2, data) ───────────→│
│── BLOCK_PUT(cid_n, data) ───────────→│
│←─ ACK(seq) ──────────────────────────│
│ │
The depth field bounds traversal – a peer does not walk deeper than depth levels into the DAG. This prevents unbounded computation when syncing large trees. The typical sync pattern:
- Exchange root hashes to check if trees diverge
- If roots differ, exchange node hashes at increasing depth
- Once the missing set is identified, transfer blocks via BLOCK_PUT
- Repeat until roots converge
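The missing-set computation on the responding side can be sketched as a depth-bounded walk. The DAG representation here (a dict mapping a node hash to its child hashes) is an illustrative assumption about an API the document does not specify.

```python
def missing_nodes(root, children, remote_hashes, depth):
    """Walk the local DAG breadth-first, bounded by `depth`, and collect
    node hashes absent from the remote peer's hash set (DAG_SYNC step)."""
    missing, frontier, seen = [], [(root, 0)], set()
    while frontier:
        node, d = frontier.pop(0)
        if node in seen or d > depth:
            continue  # depth bound prevents unbounded traversal of large trees
        seen.add(node)
        if node not in remote_hashes:
            missing.append(node)
        for child in children.get(node, []):
            frontier.append((child, d + 1))
    return missing
```

The returned set drives the subsequent BLOCK_PUT transfers; repeating the exchange at increasing depth converges the two roots.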
Compression Negotiation
Compression is negotiated once during HANDSHAKE and applied per-block in BLOCK_PUT:
| Step | Detail |
|---|---|
| 1. HANDSHAKE | Both peers declare capabilities (HAS_ZSTD, HAS_DEFLATE) |
| 2. Intersection | Sender computes local.caps & remote.caps |
| 3. Selection | Sender picks best mutual algorithm (Zstd > Deflate > None) |
| 4. Per-block | Each BLOCK_PUT carries comp_algo + comp_level in its spine |
| 5. Receiver | Reads comp_algo, decompresses, then verifies BLAKE3 |
The sender may choose different algorithms for different blocks – a small block might skip compression entirely (comp_algo = 0x00) while a large block uses Zstd at level 3. The comp_algo field in each BLOCK_PUT is authoritative.
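The negotiation steps can be sketched as a capability intersection plus a per-block choice. The bit values and the small-block threshold are illustrative assumptions, not the actual HANDSHAKE encoding.

```python
# Illustrative capability bits and comp_algo values -- assumptions only.
HAS_ZSTD, HAS_DEFLATE = 0x01, 0x02
COMP_NONE, COMP_DEFLATE, COMP_ZSTD = 0x00, 0x01, 0x02

def negotiate(local_caps, remote_caps):
    """Steps 2-3: intersect capabilities, pick the best mutual algorithm
    (Zstd > Deflate > None)."""
    mutual = local_caps & remote_caps
    if mutual & HAS_ZSTD:
        return COMP_ZSTD
    if mutual & HAS_DEFLATE:
        return COMP_DEFLATE
    return COMP_NONE

def pick_for_block(negotiated, data, threshold=256):
    """Step 4: the sender may skip compression for tiny blocks; whatever
    comp_algo it writes into the BLOCK_PUT spine is authoritative."""
    return COMP_NONE if len(data) < threshold else negotiated
```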
Codec Details
| Algorithm | Default Level | Use Case |
|---|---|---|
| Zstd | 3 | General purpose – good ratio, low latency |
| Deflate | 6 | Graf interop where Zstd is unavailable |
| None | – | Already-compressed content, tiny blocks |
BLAKE3 Content Addressing
All content in the mesh is addressed by BLAKE3-256 – 32-byte digests computed via C FFI to the reference BLAKE3 library:
| Property | Value |
|---|---|
| Algorithm | BLAKE3 (C library, not BLAKE2b) |
| Digest size | 32 bytes (256 bits) |
| Performance | ~3x faster than SHA-256, parallelizable |
| Use | Block CIDs, DAG node hashes, schema fingerprints |
CLD Schema Fingerprints
Every SBI message carries a schema fingerprint in its preamble – a BLAKE3 hash of the canonical CLD (Container Layout Descriptor). This binds each wire message to its exact struct layout. If a peer upgrades its schema, the fingerprint changes, and the remote peer can detect the mismatch before misinterpreting fields.
fingerprint = BLAKE3(canonical_cld_bytes)

This is schema evolution without silent corruption. A receiver that sees an unknown fingerprint sends NACK("unknown_schema") instead of parsing garbage.
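The check on receive can be sketched as a set lookup against known fingerprints. As in the earlier sketch, blake2b stands in for BLAKE3 (the real system hashes the canonical CLD bytes via the BLAKE3 C library), and the example CLD bytes are invented.

```python
import hashlib

def fingerprint(canonical_cld_bytes):
    """Hash of the canonical CLD; blake2b-256 as a stand-in for BLAKE3."""
    return hashlib.blake2b(canonical_cld_bytes, digest_size=32).digest()

# Fingerprints of every CLD this node can parse (contents illustrative).
KNOWN_SCHEMAS = {fingerprint(b"BLOCK_PUT v1 layout")}

def check_preamble(fp):
    # Unknown fingerprint: refuse to parse rather than misinterpret fields.
    return ("ACK", None) if fp in KNOWN_SCHEMAS else ("NACK", "unknown_schema")
```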
Peer Discovery
Yggdrasil
The mesh daemon discovers peers via Yggdrasil – an encrypted IPv6 overlay network. On boot, the daemon queries yggdrasilctl for the local node's peers and their IPv6 addresses:
yggdrasilctl -json getPeers

Discovered peers are added to the peer table and contacted via WebSocket. Yggdrasil provides the encrypted transport; the mesh daemon provides the block exchange protocol on top.
Static Peers
Peers can also be configured statically in the node's configuration. Static peers are contacted on startup and maintained across restarts.
Mesh Daemon
The mesh daemon is the application-layer process that runs the mesh transfer protocol.
Transport
- WebSocket binary frames over TCP (IPv6 preferred, IPv4 fallback)
- Each WebSocket message carries one complete SBI frame
- Binary mode only – text frames are rejected
Peer Table
The daemon maintains a 64-entry peer table:
| Field | Size | Purpose |
|---|---|---|
| peer_id | 32 bytes | BLAKE3 CellID from HANDSHAKE |
| address | variable | IPv6 (or IPv4) socket address |
| capabilities | 4 bytes | Negotiated capability intersection |
| state | 1 byte | Connected, handshaking, disconnected |
| last_seen | 8 bytes | Timestamp for liveness tracking |
Peers that go silent are marked disconnected after a timeout. Their slot is reusable but not eagerly reclaimed – a peer that reconnects within the window resumes without a full handshake.
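The lazy-reclamation policy can be sketched as follows: stale peers are marked disconnected after a timeout but keep their slot, and a slot is reclaimed only when the table is full. Field names follow the table above; the timeout value and dict representation are illustrative assumptions.

```python
import time

MAX_PEERS, TIMEOUT_S = 64, 30.0  # table size per the doc; timeout illustrative

class PeerTable:
    def __init__(self):
        self.slots = {}  # peer_id (32-byte CellID) -> entry

    def touch(self, peer_id, address, caps):
        """Record a live peer, reclaiming a disconnected slot only if full."""
        if peer_id not in self.slots and len(self.slots) >= MAX_PEERS:
            stale = next((k for k, e in self.slots.items()
                          if e["state"] == "disconnected"), None)
            if stale is None:
                raise RuntimeError("peer table full")
            del self.slots[stale]
        self.slots[peer_id] = {"address": address, "capabilities": caps,
                               "state": "connected",
                               "last_seen": time.monotonic()}

    def prune(self, now=None):
        """Mark silent peers disconnected; do not eagerly free their slots."""
        now = time.monotonic() if now is None else now
        for entry in self.slots.values():
            if now - entry["last_seen"] > TIMEOUT_S:
                entry["state"] = "disconnected"
```

Keeping the entry in place is what lets a peer that reconnects within the window resume without a full handshake.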
IPv6 Support
The daemon binds to AF_INET6 by default with IPV6_V6ONLY=0, accepting both IPv6 and IPv4-mapped connections. IPv6 is the preferred address family for mesh peers – especially over Yggdrasil, where all addresses are IPv6.
Lifecycle
1. Boot → derive CellID from MAC (FNV-1a)
2. Bind WebSocket listener on configured port
3. Query yggdrasilctl for Yggdrasil peers
4. Connect to known peers (static + discovered)
5. HANDSHAKE with each peer
6. Enter event loop: accept connections, dispatch SBI frames, handle CAS requests
7. Periodic: re-query Yggdrasil peers, prune stale entries, retry disconnected peers
Relationship to Other Components
| Component | Relationship |
|---|---|
| UTCP-SBI | Wire protocol – message formats, envelope, op codes |
| UTCP | Sovereign transport – CellID addressing, state machine, ION rings |
| LWF | Encryption layer – LWF v3 encrypts mesh traffic when carried over Libertaria wire |
| Gateway | Routing – CellID table enables cross-cell block exchange |
| NexFS CAS | Storage backend – blocks are stored/retrieved via nexfs_cas_get()/nexfs_cas_put() C-FFI |
| Yggdrasil | Peer discovery and encrypted IPv6 overlay |