UTCP — Unikernel Transport Control Protocol

Production — Kernel Native

UTCP is the Nexus sovereign transport protocol. It replaces TCP/IP for Nexus-to-Nexus communication with an identity-centric, NACK-based protocol that is immune to entire classes of network attacks. UTCP packets route to dedicated ION rings and are processed by a complete kernel-native state machine.

Why Not TCP?

TCP was designed in the 1970s for a network of trusted academic institutions. It has fundamental problems:

| TCP Problem | UTCP Solution |
| --- | --- |
| IP addresses change (DHCP, NAT) | Nodes identified by CellID (FNV-1a of MAC; SipHash-128 when HAL crypto is wired) |
| Open ports are scannable | No ports – NetSwitch drops unknown CellIDs |
| ACK-heavy (every packet acknowledged) | NACK-based (only missing packets reported) |
| Connection state in kernel | All state in userland fiber + 16-entry kernel PCB table |
| DNS dependency for name resolution | CellID is the identity – no DNS needed |

How UTCP Works

Identity-Centric Addressing

Every Nexus node has a CellID – a 128-bit identity derived from its MAC address via FNV-1a hash at boot time. This is the bootstrap identity; once HAL crypto is fully wired, CellID derivation upgrades to SipHash-128 of the Ed25519 public key. The upgrade is transparent to the protocol – CellID width and semantics do not change.

sender_cellid → receiver_cellid

If a node moves to a different network, gets a new IP, or goes through NAT – the CellID stays the same. The connection continues.

CellID Table

The kernel maintains a 64-entry static CellID table that maps CellIDs to their corresponding MAC addresses and LWF routing hints:

| Field | Size | Purpose |
| --- | --- | --- |
| cell_id | 16 bytes | CellID (FNV-1a of MAC) |
| mac_addr | 6 bytes | Ethernet MAC address |
| lwf_hint | 2 bytes | LWF routing hint for gateway forwarding |
| flags | 1 byte | Entry state (valid, stale, relay) |

The table is populated automatically:

  • At boot, the local CellID is derived from the node's own MAC
  • On UTCP handshake, the remote CellID is registered with the source MAC
  • LWF RELAY_FORWARD frames auto-register source hints

No heap allocation. No dynamic resizing. 64 entries is the hard cap – sufficient for a Chapter mesh. Federation-scale routing uses DAG-based overlay routing, not kernel tables.

EtherType Fork

The NetSwitch uses the Ethernet frame's EtherType to distinguish traffic:

  • 0x0800 (IPv4) / 0x86DD (IPv6) → Routed to LwIP in the Membrane
  • 0x88B5 (UTCP) → Routed to the UTCP ION ring (chan_utcp_rx)
  • 0x4C57 (LWF) → Routed to the LWF handler fiber

Legacy TCP/IP and sovereign UTCP coexist on the same wire. No tunneling. No encapsulation overhead.

NACK-Based Reliability

TCP acknowledges every packet. This wastes bandwidth when the network is reliable (which it is, most of the time).

UTCP inverts this: the receiver only sends a NACK (negative acknowledgment) when it detects a missing sequence number. On a healthy network, no control messages flow. On a lossy network, only the missing packets are retransmitted.

DDoS Immunity

The NetSwitch drops packets addressed to unknown CellIDs at L2. There is no "listening port" to discover. You cannot port-scan a UTCP node because:

  1. There are no ports
  2. The CellID must be known before communication can begin
  3. Unknown CellIDs are dropped before they reach any userland code

State Machine

UTCP implements a complete connection state machine in the kernel:

    ┌──────────┐
    │  CLOSED  │
    └────┬─────┘
         │ connect()
         v
    ┌──────────┐                    ┌──────────┐
    │ SYN_SENT ├───── SYN ─────────→│ SYN_RCVD │
    └────┬─────┘                    └────┬─────┘
         │                               │
         │←───── SYN-ACK ────────────────┘
         │──────── ACK ──────────────────→
         v
    ┌──────────────┐
    │ ESTABLISHED  │  ←── DATA flows here
    └──────┬───────┘
           │ close()
           v
    ┌──────────┐
    │ FIN_WAIT │──── FIN ────────────→ CLOSED
    └──────────┘

State transitions:

| Current State | Event | Next State | Action |
| --- | --- | --- | --- |
| CLOSED | connect() | SYN_SENT | Send SYN, start 30s timer |
| SYN_SENT | recv SYN-ACK | ESTABLISHED | Send ACK, register CellID |
| CLOSED | recv SYN | SYN_RCVD | Send SYN-ACK |
| SYN_RCVD | recv ACK | ESTABLISHED | Connection ready |
| ESTABLISHED | recv DATA | ESTABLISHED | Deliver to ION ring |
| ESTABLISHED | close() | FIN_WAIT | Send FIN |
| ESTABLISHED | recv FIN | CLOSED | Send ACK, clean up PCB |
| FIN_WAIT | recv ACK | CLOSED | Clean up PCB |
| Any | 30s timeout | CLOSED | Clean up PCB |

The 16-entry PCB (Protocol Control Block) table tracks active connections. A 30-second timeout sweeps stale entries – if a connection does not complete the handshake or goes silent, the PCB slot is reclaimed.

UTCP ION Rings

UTCP packets route to dedicated ION rings – separate from the per-process proc_rx/proc_tx rings used by the Membrane:

| ION Ring | Direction | Purpose |
| --- | --- | --- |
| chan_utcp_rx | Inbound | NetSwitch delivers UTCP frames here |
| chan_utcp_tx | Outbound | UTCP fiber sends frames to VirtIO |

TX Path

  1. Application calls SYS_UTCP_SEND with payload + destination CellID
  2. Kernel composes UTCP header (src/dst CellID, seq, flags, payload_len)
  3. Frame is placed on chan_utcp_tx
  4. VirtIO driver transmits the Ethernet frame

RX Path

  1. VirtIO delivers frame to NetSwitch
  2. NetSwitch reads EtherType 0x88B5, places frame on chan_utcp_rx
  3. UTCP fiber reads frame, processes state machine
  4. DATA payloads are delivered to the destination application's ION ring

Syscalls

| Syscall | Number | Purpose |
| --- | --- | --- |
| SYS_UTCP_RECV | 0x702 | Read a UTCP payload from the receive ring |
| SYS_UTCP_SEND | 0x703 | Send a UTCP payload to a destination CellID |

Both syscalls are gated by PLEDGE_INET and require valid CSpace capabilities for the UTCP ION rings.

Packet Format

UTCP uses a minimal header:

| Field | Size | Purpose |
| --- | --- | --- |
| src_cellid | 16 bytes | Sender identity |
| dst_cellid | 16 bytes | Receiver identity |
| seq | 4 bytes | Sequence number |
| flags | 2 bytes | Control flags (SYN, SYN-ACK, ACK, FIN, DATA, NACK) |
| payload_len | 2 bytes | Payload length |
| payload | variable | Application data |

Total overhead: 40 bytes per packet. For comparison, combined TCP+IP headers run 40–60 bytes.

PCB — Protocol Control Block

Each active UTCP connection occupies one slot in the 16-entry PCB table:

| Field | Size | Purpose |
| --- | --- | --- |
| state | 1 byte | Connection state (CLOSED, SYN_SENT, ESTABLISHED, etc.) |
| remote_cellid | 16 bytes | Peer CellID |
| local_seq | 4 bytes | Next sequence number to send |
| remote_seq | 4 bytes | Next expected sequence number |
| last_active | 8 bytes | Timestamp for timeout cleanup |
| dkh_trust | 8 bytes | Reserved – Distributed Key Hierarchy trust level |

The dkh_trust field is reserved for the future Distributed Key Hierarchy integration. When DKH is wired, UTCP connections will carry cryptographic trust attestations that bind CellID to a verified position in the key hierarchy. Until then, the field is zeroed.

Integration with NexFS

UTCP provides the transport layer for NexFS mesh storage (SPEC-704):

  • BLOCK_WANT / BLOCK_PUT messages flow over UTCP
  • DAG_SYNC uses UTCP for peer-to-peer DAG head synchronization
  • CellID-based addressing means storage peers can migrate between networks without losing sync state

Tensor Extensions

For AI workloads, UTCP includes tensor extensions (SPEC-702) that provide soft-RDMA capabilities:

  • Direct memory registration for large tensor transfers
  • Zero-copy scatter/gather for distributed training
  • Priority scheduling for gradient synchronization traffic

These extensions are only active when the nexfs_cluster or nexfs_federation build flags are enabled.