# Rumpk vs. RumKV
The two names differ by one letter. The systems differ by everything else.
## The Short Version

| | Rumpk | RumKV |
|---|---|---|
| What it is | The kernel | The hypervisor |
| Privilege level | S-Mode (ARM: EL1, x86: Ring 0) | M-Mode (ARM: EL2, x86: Ring -1) |
| Size | 280 KB | < 5 KB active code |
| Language | Nim (logic) + Zig (HAL) | Zig |
| Concurrency | Fibers, ION Rings, scheduler | None — stateless after boot |
| Runs when | Always — event-driven ISR | Only during traps/faults/hypercalls |
| Security model | Capabilities, Pledge, Kinetic Economy | Stage-2 page tables, interrupt routing |
| Required | Yes — it is the OS | No — optional isolation layer |
## What Rumpk Does
Rumpk is the operating system. It owns:
- Scheduling — Harmonic 4-spectrum model (Photon / Matter / Gravity / Void)
- Memory — Cellular partitioning, PMP/MPU enforcement
- IPC — ION Ring lock-free buffers between fibers
- VFS — TAR, LittleFS, NexFS mount points
- Networking — NetSwitch L2 demux + LwIP in userland Membrane
- Drivers — NPL fibers with automatic crash recovery (Blink model)
- Security — Capability algebra (7 verbs), Pledge/Unveil, ProvChain audit
When there's nothing to do, Rumpk executes WFI and the CPU sleeps. When a hardware interrupt fires, the scheduler wakes, processes the event batch, and returns to silence. This is the Silence Doctrine: a healthy kernel is a quiet kernel.
### Key Design Choices
- Single address space — no process isolation overhead, no TLB flushes
- 12 frozen syscalls — minimal attack surface, stable ABI forever
- Fibers, not threads — cooperative scheduling, no race conditions by construction
- Event-driven, not tick-driven — no 1ms timer interrupt wasting power
## What RumKV Does
RumKV is a spatial partitioner. Think of it as a bouncer that assigns rooms and then disappears.
At boot, RumKV:
- Sets up Stage-2 page tables (memory isolation between cells)
- Configures virtual interrupt routing
- Assigns physical CPU cores to cells (static, no time-sharing)
- Loads Rumpk into the first cell
- Executes `hvc_return` and vanishes
After boot, RumKV has no run loop. It exists only as a trap handler — awakened when:
- A cell tries to access memory outside its partition → fault → cell terminated
- A cell issues a hypercall (`hvc #0x4E58`) → typically to tighten its own pledges
- A hardware interrupt needs routing to the correct cell
RumKV does not schedule, allocate memory, manage filesystems, or run any logic. It is stateless by design.
## The Dual-Pledge Model
This is the novel part. Security is enforced at both levels simultaneously:
### Layer 1: Hard Pledges (RumKV)
Enforced by hardware. If a cell pledges "compute only", RumKV unmaps all network controllers and disk controllers from that cell's Stage-2 page tables.
Result: Even if Rumpk is compromised inside that cell, it physically cannot address the network card. The hardware enforces the boundary, not software.
- Mechanism: Stage-2 page tables + SMMU/IOMMU
- Enforcement: Immediate cell termination on violation
- Direction: One-way ratchet — cells can only tighten pledges, never loosen
### Layer 2: Soft Pledges (Rumpk)
Enforced by kernel logic. Capability algebra, Pledge/Unveil, and the Kinetic Economy all operate at this level.
- Mechanism: Capability tokens, energy budgets, pledge declarations
- Enforcement: Fiber termination, resource throttling
- Direction: Same one-way ratchet — fibers can only narrow permissions
### Why Both?
| Threat | Single Layer | Dual-Pledge |
|---|---|---|
| Compromised kernel | Full system access | Confined to cell's hardware partition |
| Compromised fiber | May escalate within kernel | Capped by both kernel AND hardware |
| Rogue driver | Could access all devices | Only devices mapped to its cell |
| Fork bomb | CPU exhaustion | Kinetic Economy + core partition limits |
Defense-in-depth. The kernel can be wrong. The hardware cannot.
## When Is RumKV Optional?
RumKV is not required for single-tenant deployments:
| Profile | RumKV Present? | Reason |
|---|---|---|
| Nexus Tiny (MCU) | No | No MMU, no hypervisor possible |
| Nexus Micro (embedded) | No | Single cell, Rumpk runs bare-metal |
| Nexus Unikernel | No | Single application, Rumpk is the OS |
| Nexus Core (workstation) | Yes | Multi-cell isolation for desktop security |
| Nexus Fleet (cluster) | Yes | Per-node cell isolation, tenant separation |
| DragonBox (enterprise) | Yes | Multi-guest with NetBSD verified |
When RumKV is absent, Rumpk runs at the highest privilege level and provides all isolation through software (capabilities + PMP). When RumKV is present, Rumpk doesn't know — the SysTable ABI is identical either way.
## The Boot Sequence

```
┌─────────────────────────────────────────┐
│ nexus-boot (Limine fork, <300 LOC)      │
│ Loads hypervisor + kernel into RAM      │
├─────────────────────────────────────────┤
│ RumKV (EL2/Ring-1/M-Mode)               │
│ Sets up Stage-2 tables, assigns cores   │
│ Then: hvc_return → vanishes             │
├─────────────────────────────────────────┤
│ Rumpk (EL1/Ring 0/S-Mode)               │
│ Scheduler, VFS, networking, drivers     │
│ Sees clean hardware interface           │
├─────────────────────────────────────────┤
│ NPL/NPK Applications (EL0/Ring 3)       │
│ Sandboxed fibers with capabilities      │
└─────────────────────────────────────────┘
```

## Naming
| Name | Origin | Pronunciation |
|---|---|---|
| Rumpk | "Rump kernel" — the essential core | RUMP-k |
| RumKV | Rumpk + KVM inspiration (but not KVM) | RUM-kv |
RumKV was inspired by KVM's approach (minimal host-side code, hardware does the work) but shares no code, no architecture, and no design philosophy with KVM. KVM is a Linux module. RumKV is a standalone Type-1 hypervisor with no OS underneath it.
## Verified Guests
RumKV has been verified with:
- Rumpk — the standard deployment (transparent integration)
- NetBSD 10.1 — ARM64 guest, boots and runs full userland
Future guest support is planned for OpenBSD and DragonflyBSD as part of the multi-distribution strategy (NexBox, OpenBox, DragonBox).
## What RumKV Is Not
- Not a VM manager — no live migration, no oversubscription, no management API
- Not an emulator — no x86-on-ARM, no device emulation
- Not a network virtualizer — no virtual switches between cells
- Not KVM — no Linux dependency, no dynamic scheduling, no host OS
RumKV does one thing: spatial partitioning with hardware-enforced pledges. Everything else is Rumpk's job.