
Gateway

Kernel Routing — Production

The Gateway subsystem provides CellID-based packet routing at the kernel level. It enables Nexus nodes to forward traffic between cells – acting as mesh routers that bridge UTCP and LWF traffic across network boundaries.

CellID Address Resolution Table

The kernel maintains a 64-entry static table that maps CellIDs to Ethernet MAC addresses and LWF routing hints:

CellID Table (64 entries, kernel-safe):

┌────────────────┬────────────┬──────────┬───────┐
│ cell_id (16B)  │ mac (6B)   │ hint (2B)│ flags │
├────────────────┼────────────┼──────────┼───────┤
│ 0xA3F1...      │ 02:00:...  │ 0x0001   │ valid │
│ 0xB7E2...      │ 52:54:...  │ 0x0003   │ valid │
│ ...            │ ...        │ ...      │ ...   │
│ (empty)        │ 00:00:...  │ 0x0000   │ free  │
└────────────────┴────────────┴──────────┴───────┘
Field      Size      Purpose
cell_id    16 bytes  CellID (FNV-1a of MAC at boot; SipHash-128 when HAL crypto is wired)
mac_addr   6 bytes   Destination Ethernet MAC for this CellID
lwf_hint   2 bytes   LWF routing hint (used by RELAY_FORWARD)
flags      1 byte    Entry state: free, valid, stale, relay
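The layout above implies a 25-byte entry. A minimal C sketch – the struct and flag names, and the packed representation, are assumptions for illustration, not a confirmed kernel ABI:

```c
#include <stdint.h>

/* Entry states -- names taken from the flags column above,
 * numeric values assumed. */
enum cell_flags {
    CELL_FREE  = 0,
    CELL_VALID = 1,
    CELL_STALE = 2,
    CELL_RELAY = 3,
};

/* One CellID table entry: 16 + 6 + 2 + 1 = 25 bytes, packed. */
struct cell_entry {
    uint8_t  cell_id[16];  /* CellID (FNV-1a of MAC at boot)     */
    uint8_t  mac_addr[6];  /* destination Ethernet MAC           */
    uint16_t lwf_hint;     /* LWF routing hint (RELAY_FORWARD)   */
    uint8_t  flags;        /* free / valid / stale / relay       */
} __attribute__((packed));

/* Static and kernel-safe: 64 entries, no dynamic allocation. */
static struct cell_entry cell_table[64];
```

Packing matters here: without `__attribute__((packed))` the compiler would pad the entry past 25 bytes, breaking the size arithmetic used by the linear scan.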

Population

The table is populated through three mechanisms:

  1. Boot – The local node's own CellID is derived from its MAC via FNV-1a and registered as entry 0
  2. UTCP handshake – When a UTCP connection reaches ESTABLISHED, the remote CellID is registered with the source MAC from the Ethernet frame header
  3. LWF source hints – Incoming LWF frames with valid src_cellid and src_hint auto-register if the CellID is not already in the table
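Mechanism 1 can be sketched as follows. The 128-bit FNV-1a constants are the standard published parameters; the exact derivation (which bytes are hashed, output byte order) is an assumption:

```c
#include <stdint.h>
#include <stddef.h>

typedef unsigned __int128 u128;  /* GCC/Clang extension */

/* Standard 128-bit FNV-1a parameters (prime = 2^88 + 2^8 + 0x3b). */
#define FNV128_PRIME  ((((u128)0x0000000001000000ULL) << 64) | 0x000000000000013BULL)
#define FNV128_OFFSET ((((u128)0x6c62272e07bb0142ULL) << 64) | 0x62b821756295c58dULL)

/* Derive a 16-byte CellID from a 6-byte MAC via FNV-1a.
 * Big-endian serialization of the hash is an assumption. */
static void cellid_from_mac(const uint8_t mac[6], uint8_t out[16])
{
    u128 h = FNV128_OFFSET;
    for (size_t i = 0; i < 6; i++) {
        h ^= mac[i];          /* FNV-1a: xor the byte, then multiply */
        h *= FNV128_PRIME;
    }
    for (int i = 15; i >= 0; i--) {
        out[i] = (uint8_t)(h & 0xff);
        h >>= 8;
    }
}
```

The derivation is deterministic, so a node's CellID is stable across boots as long as its MAC is.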

No dynamic allocation. No resizing. The 64-entry cap is deliberate – a single node routes within its Chapter. Federation-scale routing uses DAG-based overlay protocols, not flat kernel tables.

Lookup

CellID resolution is a linear scan – 64 entries of 25 bytes each. At this size, a linear scan completes in fewer cycles than a hash-table lookup (no hashing overhead, cache-line friendly). If the table ever grows beyond 64 entries, the lookup switches to a hash map. That future is not today.
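The scan itself is a few lines – a sketch, with the entry layout from the table above and names assumed:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

struct cell_entry {
    uint8_t  cell_id[16];
    uint8_t  mac_addr[6];
    uint16_t lwf_hint;
    uint8_t  flags;          /* 0 = free, nonzero = in use (assumed) */
};

static struct cell_entry cell_table[64];

/* Linear scan: 64 x 25 bytes fits in a handful of cache lines,
 * so this beats a hash lookup at this size. */
static struct cell_entry *cellid_lookup(const uint8_t cell_id[16])
{
    for (size_t i = 0; i < 64; i++) {
        if (cell_table[i].flags != 0 &&
            memcmp(cell_table[i].cell_id, cell_id, 16) == 0)
            return &cell_table[i];
    }
    return NULL;  /* unknown CellID */
}
```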

LWF RELAY_FORWARD

The RELAY_FORWARD service provides kernel-level frame forwarding for LWF traffic. When a node receives an LWF frame with the RELAY flag (0x02) set and a dst_cellid that is not its own:

1. NetSwitch delivers LWF frame to LWF adapter
2. LWF adapter reads dst_cellid from header
3. dst_cellid != local_cellid AND RELAY flag is set
4. Look up dst_cellid in CellID table
5. If found:
   a. Rewrite Ethernet destination MAC from table entry
   b. Update hop hint in LWF header (increment)
   c. Place frame on VirtIO TX ring
6. If not found:
   a. Drop frame
   b. Increment unknown_relay counter
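Steps 3–6 can be sketched as a single decision function. The header field names, flag encoding, and the `virtio_tx_enqueue` hook are assumptions; the stub here only records the rewritten MAC so the logic is testable:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdbool.h>

#define LWF_FLAG_RELAY 0x02

/* Minimal view of the 88-byte LWF header: only the fields the
 * relay path touches. Offsets and names are assumptions. */
struct lwf_hdr {
    uint8_t  dst_cellid[16];
    uint16_t dst_hint;       /* advisory hop hint */
    uint8_t  flags;
};

struct cell_entry {
    uint8_t  cell_id[16];
    uint8_t  mac_addr[6];
    uint16_t lwf_hint;
    uint8_t  flags;          /* 0 = free, nonzero = in use */
};

static struct cell_entry cell_table[64];
static uint64_t unknown_relay;    /* step 6b drop counter */
static uint8_t  last_tx_mac[6];   /* stub: records the rewritten MAC */

static struct cell_entry *cellid_lookup(const uint8_t id[16])
{
    for (size_t i = 0; i < 64; i++)
        if (cell_table[i].flags && !memcmp(cell_table[i].cell_id, id, 16))
            return &cell_table[i];
    return NULL;
}

/* Stub for placing the frame on the VirtIO TX ring (step 5c). */
static void virtio_tx_enqueue(const uint8_t mac[6], struct lwf_hdr *f)
{
    (void)f;
    memcpy(last_tx_mac, mac, 6);
}

/* Steps 3-6: returns true if forwarded, false if dropped/ignored. */
static bool lwf_relay_forward(struct lwf_hdr *f, const uint8_t local_cellid[16])
{
    if (!(f->flags & LWF_FLAG_RELAY))                return false; /* step 3 */
    if (!memcmp(f->dst_cellid, local_cellid, 16))    return false; /* ours   */

    struct cell_entry *e = cellid_lookup(f->dst_cellid);           /* step 4 */
    if (!e) {
        unknown_relay++;                                           /* step 6 */
        return false;
    }
    f->dst_hint++;                      /* step 5b: advisory increment */
    virtio_tx_enqueue(e->mac_addr, f);  /* steps 5a + 5c              */
    return true;
}
```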

What the Kernel Does NOT Do

  • No decryption – The kernel forwards encrypted payloads as opaque bytes
  • No signature verification – That is the receiver's responsibility
  • No payload inspection – The kernel reads the 88-byte LWF header and nothing else
  • No TTL or hop limit – The hop hint is advisory, not enforced (the sender sets bounds)

The kernel is a packet forwarder. It reads addresses and moves bytes. Everything else happens in userland.

Hop Hints

The dst_hint field in the LWF header carries routing metadata that intermediate nodes can use to make forwarding decisions. A relay node increments the hint on each forward – this is advisory, allowing the eventual receiver to estimate path length. It is not a TTL; frames are not dropped based on hint value.

UTCP-to-LWF Bridge

A UTCP node that wants to reach a CellID on a different network segment can route through a gateway node that speaks both protocols:

┌──────────┐    UTCP     ┌──────────┐     LWF      ┌──────────┐
│  Node A  ├────────────→│ Gateway  ├──────────────→│  Node B  │
│ (UTCP)   │  0x88B5     │ (both)   │  0x4C57      │ (LWF)    │
└──────────┘             └──────────┘               └──────────┘

The gateway node:

  1. Receives a UTCP frame addressed to a CellID in its table
  2. Looks up the CellID – finds it is reachable via LWF (relay flag in table)
  3. Wraps the UTCP payload in an LWF frame (sets RELAY flag, populates dst_cellid)
  4. Forwards via LWF RELAY_FORWARD

This bridge is transparent to both Node A and Node B. Node A sends UTCP; Node B receives LWF. The gateway handles the translation at the kernel level.
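Step 3 of the bridge – wrapping the UTCP payload in an LWF frame – might look like this. The 88-byte header size comes from the section above; the field offsets within it are assumptions, and all header bytes not shown are left zeroed:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define LWF_FLAG_RELAY 0x02
#define LWF_HDR_LEN    88   /* fixed LWF header size */

/* Encapsulate a UTCP payload in an LWF frame. Returns the total
 * frame length, or 0 if it does not fit in the output buffer.
 * Offsets (dst_cellid at 0, flags at 16) are assumptions. */
static size_t lwf_encapsulate(uint8_t *out, size_t cap,
                              const uint8_t dst_cellid[16],
                              const uint8_t *payload, size_t len)
{
    if (cap < LWF_HDR_LEN + len)
        return 0;                        /* frame does not fit */

    memset(out, 0, LWF_HDR_LEN);         /* zero the full header  */
    memcpy(out, dst_cellid, 16);         /* populate dst_cellid   */
    out[16] = LWF_FLAG_RELAY;            /* set the RELAY flag    */
    memcpy(out + LWF_HDR_LEN, payload, len);
    return LWF_HDR_LEN + len;
}
```

The payload is copied as opaque bytes – consistent with the kernel's no-inspection rule, the bridge never looks past the header it writes.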

Chapter Mesh Router

A Chapter is a local cluster of Nexus nodes (up to 64 in the current addressing scheme). The gateway subsystem enables any Chapter member to act as a mesh router:

Chapter Mesh (example topology):

     ┌──── Node 1 ────┐
     │                │
  Node 2 ─── Gateway ─── Node 3
     │                │
     └──── Node 4 ────┘

In this topology, the Gateway node has all four CellIDs in its table and can forward traffic between any pair. Nodes 1–4 only need to know the Gateway's CellID – they do not need direct connectivity to each other.

Multi-Hop

For Chapters that span multiple network segments, multiple gateway nodes can chain:

Node A → Gateway 1 → Gateway 2 → Node B

Each gateway performs a single CellID lookup and forward. The LWF hop hint increments at each hop, giving the receiver visibility into path length.

Federation Scale

The 64-entry CellID table is sized for Chapter-scale routing. Federation-scale (hundreds or thousands of nodes) requires overlay routing – DAG-based routing tables exchanged via DAG_SYNC, stored in NexFS, and consulted by the mesh daemon in userland. The kernel table handles the local hop; the overlay handles the global path.

Security Considerations

  • No kernel crypto – The gateway forwards encrypted frames without decryption. A compromised gateway cannot read payload content.
  • CellID spoofing – A node that fabricates a CellID can inject frames, but the LWF Ed25519 signature catches forgeries: the kernel does not verify signatures, the receiver in userland does.
  • Table exhaustion – The 64-entry cap means an attacker can fill the table with bogus CellIDs. Stale entry cleanup (30-second timeout, matching UTCP PCB cleanup) limits exposure. Defending against persistent attackers requires rate limiting at the VirtIO layer.
  • Relay amplification – A node could set the RELAY flag on frames to force a gateway to forward traffic. The gateway does not enforce TTL, but the LWF hop hint and receiver-side signature verification bound the blast radius.
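The stale-entry cleanup mentioned above might look like the sweep below. The two-phase demotion (valid → stale → free) and the `last_seen_ms` bookkeeping field are assumptions – the timestamp is not part of the 25-byte wire layout:

```c
#include <stdint.h>
#include <stddef.h>

#define STALE_TIMEOUT_MS 30000   /* 30 s, matching UTCP PCB cleanup */

enum { CELL_FREE = 0, CELL_VALID = 1, CELL_STALE = 2 };

struct cell_entry {
    uint8_t  cell_id[16];
    uint8_t  mac_addr[6];
    uint16_t lwf_hint;
    uint8_t  flags;
    uint64_t last_seen_ms;  /* assumed bookkeeping, not in the 25-byte entry */
};

static struct cell_entry cell_table[64];

/* Periodic sweep: demote idle entries one step per expired pass.
 * Entry 0 (the local node's own CellID) is never touched. */
static void cellid_table_sweep(uint64_t now_ms)
{
    for (size_t i = 1; i < 64; i++) {
        struct cell_entry *e = &cell_table[i];
        if (e->flags == CELL_FREE)
            continue;
        if (now_ms - e->last_seen_ms < STALE_TIMEOUT_MS)
            continue;
        /* valid -> stale on the first expired sweep, stale -> free next. */
        e->flags = (e->flags == CELL_VALID) ? CELL_STALE : CELL_FREE;
    }
}
```

The two-phase demotion gives a recently quiet peer one sweep interval to reappear before its slot is reclaimed, which bounds how fast an attacker can churn the table.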

Relationship to Other Components

Component       Relationship
UTCP            CellID table is shared – UTCP handshakes populate it
LWF             RELAY_FORWARD uses LWF v3 framing and the RELAY flag
Mesh Transfer   Block exchange across cells routes through gateways
NetSwitch       Gateway routing is an extension of NetSwitch L2 demux
VirtIO          Forwarded frames are re-queued to the VirtIO TX ring