Mesh Transfer

MVP Shipped

Mesh Transfer is the content-addressed block exchange system for Nexus OS. It moves CAS (Content-Addressable Storage) blocks between nodes over WebSocket, with BLAKE3 integrity verification, compression negotiation, and DAG synchronization. This is how NexFS data replicates across the mesh.

Architecture

┌─────────────────────────────────────────────────────────┐
│  Mesh Daemon                                            │
│  WebSocket listener (IPv6, AF_INET6)                    │
│  Peer table (64 entries)                                │
├─────────────────────────────────────────────────────────┤
│  SbiTransport                                           │
│  WebSocket binary frames → SBI codec → frame dispatch   │
│  Capability negotiation (HANDSHAKE)                     │
├─────────────────────────────────────────────────────────┤
│  MeshTransfer                                           │
│  BLOCK_WANT / BLOCK_PUT / DAG_SYNC handlers             │
│  Wired to CasManager                                    │
├─────────────────────────────────────────────────────────┤
│  CasManager                                             │
│  BLAKE3-addressed get/put                               │
│  NexFS CAS backend (via C-FFI)                          │
├─────────────────────────────────────────────────────────┤
│  Compression Codec                                      │
│  Zstd / Deflate / Passthrough                           │
└─────────────────────────────────────────────────────────┘

Each layer has a single responsibility:

  • Mesh Daemon – Accepts connections, manages peer lifecycle, discovers peers
  • SbiTransport – Frames bytes into SBI messages, dispatches by op code
  • MeshTransfer – Business logic: CAS lookups, DAG walks, block verification
  • CasManager – Storage backend: content-addressed get/put with BLAKE3 keys
  • Compression Codec – Transparent compress/decompress for BLOCK_PUT payloads

No shared mutable state between layers. Each handler is a pure function of its inputs plus CAS state.
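
To make that shape concrete, here is a minimal sketch of a BLOCK_WANT handler written as a pure function of the decoded frame plus a CAS handle. All names below (BlockWant, OutFrame, Cas, the "not_found" reason string) are illustrative stand-ins, not the actual Nexus OS types:

// Illustrative handler shape: input frame + CAS state in, reply frames out.
struct BlockWant {
    block_hash: [u8; 32],
}

enum OutFrame {
    BlockPut { hash: [u8; 32], data: Vec<u8> },
    Nack { reason: &'static str },
}

trait Cas {
    fn get(&self, hash: &[u8; 32]) -> Option<Vec<u8>>;
}

// No shared mutable state: the handler reads the frame, consults CAS,
// and returns whatever should be sent back.
fn handle_block_want(frame: &BlockWant, cas: &dyn Cas) -> Vec<OutFrame> {
    match cas.get(&frame.block_hash) {
        Some(data) => vec![OutFrame::BlockPut { hash: frame.block_hash, data }],
        None => vec![OutFrame::Nack { reason: "not_found" }], // reason string is an assumption
    }
}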

Block Exchange Flow

The core exchange is simple: one node wants a block, the other has it.

Requester                              Provider
─────────                              ────────
    │                                      │
    │── HANDSHAKE(caps, features) ────────→│
    │←─ HANDSHAKE(caps, features) ─────────│
    │   [negotiate compression, verify]    │
    │                                      │
    │── BLOCK_WANT(blake3_hash, pri) ─────→│
    │                                      │
    │   [Provider: CasManager.get(hash)]   │
    │   [Provider: compress(data, algo)]   │
    │                                      │
    │←─ BLOCK_PUT(hash, data, comp) ───────│
    │                                      │
    │   [Requester: decompress(data)]      │
    │   [Requester: verify BLAKE3]         │
    │   [Requester: CasManager.put(data)]  │
    │                                      │
    │── ACK(seq) ─────────────────────────→│
    │                                      │
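
The frames in this diagram map naturally onto a tagged enum. Field names follow the diagram; field widths other than the 32-byte hashes and the 4-byte capabilities (see the peer table below) are assumptions, and the real wire layouts are defined by the SBI CLD schemas:

// Sketch of the frame vocabulary used in the exchange above.
enum Frame {
    Handshake { caps: u32, features: u32 },
    BlockWant { block_hash: [u8; 32], priority: u8 },
    BlockPut  { block_hash: [u8; 32], comp_algo: u8, comp_level: u8, data: Vec<u8> },
    Ack       { seq: u64 },
    Nack      { reason: String },
}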

Verification

Every received block is verified before storage:

  1. Decompress the payload (using comp_algo from the BLOCK_PUT spine)
  2. Compute BLAKE3(decompressed_data)
  3. Compare against block_hash from the BLOCK_PUT spine
  4. If mismatch: send NACK("hash_mismatch"), discard data
  5. If match: CasManager.put(data) stores the block

No block enters CAS without passing BLAKE3 verification. Content integrity is non-negotiable.
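
Steps 1 through 5 condense into a single verify-then-store function. The sketch below uses the Rust blake3 crate for illustration (the production path hashes through the C FFI); the Cas trait and the decompress stub are placeholders:

trait Cas {
    fn put(&mut self, data: &[u8]);
}

// Placeholder: real codec dispatch lives in the compression layer.
fn decompress(comp_algo: u8, payload: &[u8]) -> Result<Vec<u8>, &'static str> {
    match comp_algo {
        0x00 => Ok(payload.to_vec()), // passthrough
        _ => Err("todo: zstd / deflate"),
    }
}

fn verify_and_store(
    block_hash: &[u8; 32],
    comp_algo: u8,
    payload: &[u8],
    cas: &mut impl Cas,
) -> Result<(), &'static str> {
    let data = decompress(comp_algo, payload)?;   // 1. decompress
    let digest = blake3::hash(&data);             // 2. BLAKE3(decompressed_data)
    if digest.as_bytes() != block_hash {          // 3. compare against block_hash
        return Err("hash_mismatch");              // 4. caller sends NACK, discards
    }
    cas.put(&data);                               // 5. store the block
    Ok(())
}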

DAG Synchronization

DAG sync determines which blocks a peer is missing without transferring the blocks themselves. It uses Merkle tree comparison:

Node A                                 Node B
──────                                 ──────
    │                                      │
    │── DAG_SYNC(root_hash, depth, ───────→│
    │           local_node_hashes)         │
    │                                      │
    │   [Node B: compare against local]    │
    │   [Node B: compute missing set]      │
    │                                      │
    │←─ DAG_SYNC(root_hash, depth, ────────│
    │           missing_node_hashes)       │
    │                                      │
    │── BLOCK_PUT(cid_1, data) ───────────→│
    │── BLOCK_PUT(cid_2, data) ───────────→│
    │── BLOCK_PUT(cid_n, data) ───────────→│
    │←─ ACK(seq) ──────────────────────────│
    │                                      │

The depth field bounds traversal – a peer does not walk deeper than depth levels into the DAG. This prevents unbounded computation when syncing large trees. The typical sync pattern:

  1. Exchange root hashes to check if trees diverge
  2. If roots differ, exchange node hashes at increasing depth
  3. Once the missing set is identified, transfer blocks via BLOCK_PUT
  4. Repeat until roots converge
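
A depth-bounded walk for step 2 might look like the following sketch. DagStore and its children accessor are hypothetical, and a real implementation would also deduplicate already-visited nodes:

// Hypothetical store interface: child hashes if the node is local, else None.
trait DagStore {
    fn children(&self, hash: &[u8; 32]) -> Option<Vec<[u8; 32]>>;
}

// Collect hashes missing locally, descending at most `depth` levels.
fn missing_below(roots: &[[u8; 32]], depth: u8, store: &impl DagStore) -> Vec<[u8; 32]> {
    let mut missing = Vec::new();
    let mut frontier: Vec<[u8; 32]> = roots.to_vec();
    for _ in 0..depth {
        let mut next = Vec::new();
        for hash in frontier {
            match store.children(&hash) {
                Some(children) => next.extend(children), // present: descend
                None => missing.push(hash),              // absent: request it
            }
        }
        frontier = next;
    }
    missing
}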

Compression Negotiation

Compression is negotiated once during HANDSHAKE and applied per-block in BLOCK_PUT:

Step              Detail
────              ──────
1. HANDSHAKE      Both peers declare capabilities (HAS_ZSTD, HAS_DEFLATE)
2. Intersection   Sender computes local.caps & remote.caps
3. Selection      Sender picks best mutual algorithm (Zstd > Deflate > None)
4. Per-block      Each BLOCK_PUT carries comp_algo + comp_level in its spine
5. Receiver       Reads comp_algo, decompresses, then verifies BLAKE3

The sender may choose different algorithms for different blocks – a small block might skip compression entirely (comp_algo = 0x00) while a large block uses Zstd at level 3. The comp_algo field in each BLOCK_PUT is authoritative.
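
Steps 2 and 3 reduce to a bitwise AND plus a preference order. In the sketch below, the bit positions of the capability flags and the non-zero comp_algo byte values are assumptions (the doc only fixes 0x00 as "no compression"); the real flags are defined by the HANDSHAKE schema:

// Capability flags (bit positions are assumptions).
const HAS_ZSTD: u32 = 1 << 0;
const HAS_DEFLATE: u32 = 1 << 1;

// Pick the best mutual algorithm: Zstd > Deflate > None.
fn pick_comp_algo(local_caps: u32, remote_caps: u32) -> u8 {
    let mutual = local_caps & remote_caps; // step 2: intersection
    if mutual & HAS_ZSTD != 0 {
        0x01 // Zstd (byte value is an assumption)
    } else if mutual & HAS_DEFLATE != 0 {
        0x02 // Deflate (byte value is an assumption)
    } else {
        0x00 // None / passthrough, per the comp_algo = 0x00 convention above
    }
}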

Codec Details

Algorithm   Default Level   Use Case
─────────   ─────────────   ────────
Zstd        3               General purpose – good ratio, low latency
Deflate     6               Graf interop where Zstd is unavailable
None        –               Already-compressed content, tiny blocks

BLAKE3 Content Addressing

All content in the mesh is addressed by BLAKE3-256 – 32-byte digests computed via C FFI to the reference BLAKE3 library:

Property      Value
────────      ─────
Algorithm     BLAKE3 (C library, not BLAKE2b)
Digest size   32 bytes (256 bits)
Performance   ~3x faster than SHA-256, parallelizable
Use           Block CIDs, DAG node hashes, schema fingerprints
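
For illustration, computing a block CID with the Rust blake3 crate as a stand-in for the C-FFI binding; both call paths produce identical 32-byte digests:

// 32-byte BLAKE3-256 digest as the block's content address.
fn block_cid(data: &[u8]) -> [u8; 32] {
    *blake3::hash(data).as_bytes()
}

fn main() {
    let cid = block_cid(b"example block");
    println!("cid = {}", blake3::Hash::from(cid).to_hex());
}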

CLD Schema Fingerprints

Every SBI message carries a schema fingerprint in its preamble – a BLAKE3 hash of the canonical CLD (Container Layout Descriptor). This binds each wire message to its exact struct layout. If a peer upgrades its schema, the fingerprint changes, and the remote peer can detect the mismatch before misinterpreting fields.

fingerprint = BLAKE3(canonical_cld_bytes)

This is schema evolution without silent corruption. A receiver that sees an unknown fingerprint sends NACK("unknown_schema") instead of parsing garbage.
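
A sketch of both sides of that check, again using the blake3 crate for illustration; how the known-schema set is populated is a placeholder:

use std::collections::HashSet;

// Sender and receiver both fingerprint the canonical CLD bytes.
fn schema_fingerprint(canonical_cld_bytes: &[u8]) -> [u8; 32] {
    *blake3::hash(canonical_cld_bytes).as_bytes()
}

// Receiver: accept only preambles whose fingerprint names a known schema.
fn check_preamble(known: &HashSet<[u8; 32]>, fp: &[u8; 32]) -> Result<(), &'static str> {
    if known.contains(fp) {
        Ok(())
    } else {
        Err("unknown_schema") // caller replies NACK("unknown_schema")
    }
}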

Peer Discovery

Yggdrasil

The mesh daemon discovers peers via Yggdrasil – an encrypted IPv6 overlay network. On boot, the daemon queries yggdrasilctl for the local node's peers and their IPv6 addresses:

yggdrasilctl -json getPeers

Discovered peers are added to the peer table and contacted via WebSocket. Yggdrasil provides the encrypted transport; the mesh daemon provides the block exchange protocol on top.
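
A discovery sketch that shells out to yggdrasilctl, assuming the serde_json crate. Note that the JSON shape assumed here (a "peers" array whose entries carry an "address" field) varies across Yggdrasil versions, so the field access is an assumption to adjust per build:

use std::process::Command;
use serde_json::Value;

// Query yggdrasilctl for peer IPv6 addresses.
fn discover_yggdrasil_peers() -> Result<Vec<String>, Box<dyn std::error::Error>> {
    let out = Command::new("yggdrasilctl")
        .args(["-json", "getPeers"])
        .output()?;
    let v: Value = serde_json::from_slice(&out.stdout)?;
    let mut addrs = Vec::new();
    if let Some(peers) = v.get("peers").and_then(Value::as_array) {
        for peer in peers {
            if let Some(addr) = peer.get("address").and_then(Value::as_str) {
                addrs.push(addr.to_string());
            }
        }
    }
    Ok(addrs)
}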

Static Peers

Peers can also be configured statically in the node's configuration. Static peers are contacted on startup and maintained across restarts.

Mesh Daemon

The mesh daemon is the application-layer process that runs the mesh transfer protocol.

Transport

  • WebSocket binary frames over TCP (IPv6 preferred, IPv4 fallback)
  • Each WebSocket message carries one complete SBI frame
  • Binary mode only – text frames are rejected
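
For illustration, a receive loop with the tungstenite crate (0.20+ API; the exact payload type differs slightly between versions, so treat the conversion as a sketch):

use std::net::TcpStream;
use tungstenite::{Message, WebSocket};

// Read the next SBI frame: binary messages only, text is rejected.
fn next_frame(ws: &mut WebSocket<TcpStream>) -> Result<Vec<u8>, String> {
    loop {
        match ws.read().map_err(|e| e.to_string())? {
            Message::Binary(frame) => return Ok(frame.into()), // one complete SBI frame
            Message::Text(_) => return Err("binary mode only: text frame rejected".into()),
            _ => continue, // ping/pong/close are handled at the socket layer
        }
    }
}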

Peer Table

The daemon maintains a 64-entry peer table:

Field          Size       Purpose
─────          ────       ───────
peer_id        32 bytes   BLAKE3 CellID from HANDSHAKE
address        variable   IPv6 (or IPv4) socket address
capabilities   4 bytes    Negotiated capability intersection
state          1 byte     Connected, handshaking, disconnected
last_seen      8 bytes    Timestamp for liveness tracking

Peers that go silent are marked disconnected after a timeout. Their slot is reusable but not eagerly reclaimed – a peer that reconnects within the window resumes without a full handshake.
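
In struct form, one entry might look like the sketch below; field names and widths follow the table above, with std's SocketAddr standing in for the variable-length address:

use std::net::SocketAddr;

#[derive(Clone, Copy)]
enum PeerState { Connected, Handshaking, Disconnected } // 1-byte state tag

#[derive(Clone, Copy)]
struct PeerEntry {
    peer_id: [u8; 32],   // BLAKE3 CellID from HANDSHAKE
    address: SocketAddr, // IPv6 (or IPv4) socket address
    capabilities: u32,   // negotiated capability intersection
    state: PeerState,
    last_seen: u64,      // timestamp for liveness tracking
}

// Fixed-size table: 64 slots, empty slots are None.
type PeerTable = [Option<PeerEntry>; 64];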

IPv6 Support

The daemon binds to AF_INET6 by default with IPV6_V6ONLY=0, accepting both IPv6 and IPv4-mapped connections. IPv6 is the preferred address family for mesh peers – especially over Yggdrasil, where all addresses are IPv6.
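
A minimal dual-stack bind with the socket2 crate, assuming a TCP listener underneath the WebSocket layer (the backlog of 128 is an arbitrary choice):

use socket2::{Domain, Protocol, Socket, Type};
use std::net::{Ipv6Addr, SocketAddr, TcpListener};

// Bind AF_INET6 with IPV6_V6ONLY=0 so one listener accepts
// both IPv6 and IPv4-mapped connections.
fn bind_dual_stack(port: u16) -> std::io::Result<TcpListener> {
    let socket = Socket::new(Domain::IPV6, Type::STREAM, Some(Protocol::TCP))?;
    socket.set_only_v6(false)?; // IPV6_V6ONLY = 0
    let addr = SocketAddr::from((Ipv6Addr::UNSPECIFIED, port));
    socket.bind(&addr.into())?;
    socket.listen(128)?;
    Ok(socket.into())
}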

Lifecycle

  1. Boot → derive CellID from MAC (FNV-1a)
  2. Bind WebSocket listener on configured port
  3. Query yggdrasilctl for Yggdrasil peers
  4. Connect to known peers (static + discovered)
  5. HANDSHAKE with each peer
  6. Enter event loop: accept connections, dispatch SBI frames, handle CAS requests
  7. Periodic: re-query Yggdrasil peers, prune stale entries, retry disconnected peers

Relationship to Other Components

Component   Relationship
─────────   ────────────
UTCP-SBI    Wire protocol – message formats, envelope, op codes
UTCP        Sovereign transport – CellID addressing, state machine, ION rings
LWF         Encryption layer – LWF v3 encrypts mesh traffic when carried over Libertaria wire
Gateway     Routing – CellID table enables cross-cell block exchange
NexFS CAS   Storage backend – blocks are stored/retrieved via nexfs_cas_get()/nexfs_cas_put() C-FFI
Yggdrasil   Peer discovery and encrypted IPv6 overlay