NexFS vs. The Field
Comprehensive Comparison
NexFS is a clean-sheet filesystem that covers a range no single competitor matches – from a microcontroller with 8.5 KB of RAM to a 352 TB Homenode to a planetwide sovereign mesh. Since v4's u64 address widening, a single NexFS volume addresses 64 ZB – exceeding ext4, XFS, Btrfs, and HAMMER2 in raw addressable capacity while remaining the only filesystem that runs on bare metal without an OS.
This page is a thorough, honest comparison across every axis that matters.
Lineage & Design Philosophy
| Filesystem | Lineage | Primary Target | Design Philosophy |
|---|---|---|---|
| NexFS | Clean-sheet (2024), Zig | MCU → Homenode → sovereign mesh | Content-addressed, flash-native, zero-trust, zero-dependency, u64 scale |
| ext4 | ext2→ext3→ext4 (2008), C | General-purpose block devices | Conservative evolution; backward-compatible; "good enough" |
| Btrfs | Oracle (2009), C | Linux desktop/server | CoW B-tree; feature-rich; Linux-native ZFS alternative |
| F2FS | Samsung (2012), C | FTL-based flash (eMMC, SSD) | Log-structured; FTL-aware; mobile-optimised |
| XFS | SGI (1993→Linux 2001), C | Large files, high throughput | Allocation groups; parallel I/O; metadata journaling |
| HAMMER2 | DragonFlyBSD (2017), C | DragonFlyBSD clusters | CoW B-tree; content-hash dedup; multi-volume clustering |
| bcachefs | Kent Overstreet (2015→mainline 2024), C | Linux next-gen | bcache lineage; CoW B-tree; "XFS + Btrfs done right" |
| ZFS | Sun (2005→OpenZFS), C | Enterprise storage | Pooled storage; end-to-end checksums; industrial-grade CoW |
Storage Model
| Feature | NexFS | ext4 | Btrfs | F2FS | XFS | HAMMER2 | bcachefs | ZFS |
|---|---|---|---|---|---|---|---|---|
| Allocation | BAM + wear-leveled blocks | Bitmap + extents | CoW B-tree | Log-structured segments | B+tree + allocation groups | CoW B-tree + freemap | CoW B-tree + buckets | Slab + metaslab |
| Copy-on-Write | Yes (deep clone) | No | Yes (native) | No (log-structured, not CoW) | No (reflink only, since Linux 5.1) | Yes (native) | Yes (native) | Yes (native) |
| Block Size | 512B–64KB (configurable) | 1–4KB | 4KB–64KB (nodesize) | 4KB | 512B–64KB | 64KB default | 512B–64KB | 128KB default (recordsize) |
| Extent-based | Yes (inline + overflow) | Yes (since ext4) | Yes (via B-tree) | Yes (multi-level index) | Yes (B+tree extents) | Yes (via B-tree) | Yes (via B-tree) | Yes (indirect block pointer tree) |
| Flash-Aware | Native – raw NAND/NOR/SPI | No – relies on FTL | No – relies on FTL | FTL-aware – respects erase boundaries | No | No | No | No |
Verdict: NexFS and F2FS are the only two designed with flash in mind – but they solve fundamentally different problems. F2FS works with the FTL; NexFS replaces it. On block devices, NexFS's BAM allocator and CoW model work identically – flash-awareness is additive, not exclusive.
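NexFS's BAM internals aren't documented on this page, but the general technique a flash-native allocator relies on – a block allocation map that tracks per-block erase counts and prefers the least-worn free block – can be sketched as follows. All names here are hypothetical illustrations, not the actual NexFS API.

```python
# Hypothetical sketch of a wear-leveling block allocator in the spirit of a
# block allocation map (BAM): each block carries a free/used bit plus an
# erase counter, and allocation prefers the least-worn free block.
class WearLevelAllocator:
    def __init__(self, num_blocks: int):
        self.free = [True] * num_blocks       # BAM free/used bits
        self.erase_count = [0] * num_blocks   # per-block wear counters

    def alloc(self) -> int:
        # Pick the free block with the lowest erase count (least worn).
        candidates = [i for i, f in enumerate(self.free) if f]
        if not candidates:
            raise RuntimeError("volume full")
        block = min(candidates, key=lambda i: self.erase_count[i])
        self.free[block] = False
        return block

    def release(self, block: int) -> None:
        # Freeing flash implies an erase cycle, so bump the wear counter.
        self.erase_count[block] += 1
        self.free[block] = True
```

Repeated alloc/release cycles then spread erases evenly across the device instead of hammering one hot block – the job an FTL normally does, pulled into the filesystem.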
Integrity & Checksumming
| Feature | NexFS | ext4 | Btrfs | F2FS | XFS | HAMMER2 | bcachefs | ZFS |
|---|---|---|---|---|---|---|---|---|
| Data Checksums | BLAKE3-256 (CAS inherent) | No (metadata only) | CRC32C (default) | No (metadata CRC32 only) | CRC32C (metadata v5) | SHA256/XXH64 | CRC32C/XXH64/POLY1305 | Fletcher-4 / SHA-256 |
| Metadata Checksums | BLAKE3-256/128 or XXH3-64 | CRC32C (since 2012) | CRC32C | CRC32 | CRC32C | SHA256/XXH64 | CRC32C/XXH64 | Fletcher-4 / SHA-256 |
| Self-Healing | Dual + scattered SB fallback (up to 10 replicas) | No | Yes (with RAID) | No | No | Yes (with mirrors) | Yes (with replication) | Yes (with redundancy) |
| Scrub Support | Yes (full-volume sweep) | No | Yes | No | Partial (xfs_scrub, experimental) | Yes | Yes | Yes |
| End-to-End Verification | Yes – every read verified | No | Partial (CoW path only) | No | No | Yes | Yes | Yes |
Verdict: NexFS, ZFS, bcachefs, and HAMMER2 are the only ones with true end-to-end data integrity. ext4 and F2FS are shockingly bare here – silent data corruption goes undetected. NexFS uses BLAKE3 – cryptographically strong, not just an error-detecting CRC.
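"Every read verified" falls out of content addressing for free: the chunk's address *is* its hash, so a read just re-hashes the bytes and compares. A minimal sketch of that read path, using BLAKE2b as a stand-in for BLAKE3 (BLAKE3 is not in the Python standard library) and a plain dict as the store:

```python
import hashlib

# End-to-end verified read path: every chunk is addressed by the hash of its
# content (its CID), and every read re-hashes the bytes and compares against
# the CID before returning them.

def cid(data: bytes) -> bytes:
    return hashlib.blake2b(data, digest_size=32).digest()

store: dict[bytes, bytes] = {}

def put(data: bytes) -> bytes:
    c = cid(data)
    store[c] = data
    return c

def get(c: bytes) -> bytes:
    data = store[c]
    if cid(data) != c:                  # verification on every read
        raise IOError("corruption detected: chunk hash mismatch")
    return data
```

Flip a single bit in the stored bytes and the next `get` raises instead of silently returning bad data – the property the "End-to-End Verification" row is about.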
Deduplication & Content Addressing
| Feature | NexFS | ext4 | Btrfs | F2FS | XFS | HAMMER2 | bcachefs | ZFS |
|---|---|---|---|---|---|---|---|---|
| Dedup Type | Inline CAS (write-time) | None | Offline (slow, RAM-hungry) | None | None (reflink only) | Inline (content-hash) | Background (planned) | Inline or offline (RAM-hungry) |
| Content-Addressed | Yes – CID = BLAKE3(chunk) | No | No | No | No | Yes – content-hash | No | No (dedup uses DDT, not CAS) |
| Content-Defined Chunking | FastCDC (16–64–256 KB) | No | No | No | No | 64KB fixed | No | Fixed recordsize |
| Dedup Efficiency | ~90–95% for similar data | N/A | ~50–80% (post-process) | N/A | N/A | Good (fixed chunk) | TBD | Good but RAM-expensive (~320B/block) |
Verdict: NexFS and HAMMER2 are the only two with native content-addressed inline dedup. ZFS dedup exists but is infamously RAM-hungry (5+ GB per TB). Btrfs dedup is a painful afterthought. NexFS's FastCDC variable chunking is strictly superior to HAMMER2's fixed 64KB chunks for delta efficiency. On a 352 TB Homenode, inline CAS dedup at write-time means storage efficiency scales without operational overhead.
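Why variable chunking beats fixed 64 KB chunks: boundaries are chosen by content, so an insert near the start of a file shifts only nearby boundaries and later chunks keep their CIDs. The sketch below is a plain gear-hash cutter in the spirit of FastCDC – not the full FastCDC algorithm (no normalized chunking) – with min/max sizes scaled down from the 16–64–256 KB figures above so it runs quickly. The gear table is seeded deterministically so boundaries are reproducible.

```python
import random

MIN_SIZE, MAX_SIZE = 16, 256   # scaled-down stand-ins for 16 KB / 256 KB
MASK = 0x3F                    # average chunk size around 64 bytes

_rng = random.Random(0)
GEAR = [_rng.getrandbits(64) for _ in range(256)]  # per-byte random values

def chunk(data: bytes) -> list[bytes]:
    """Cut data at content-defined boundaries (simplified gear-hash CDC)."""
    chunks, start, h = [], 0, 0
    for i, byte in enumerate(data):
        h = ((h << 1) + GEAR[byte]) & 0xFFFFFFFFFFFFFFFF
        length = i - start + 1
        if length < MIN_SIZE:
            continue                       # never cut below the minimum
        if (h & MASK) == 0 or length >= MAX_SIZE:
            chunks.append(data[start : i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])        # trailing remainder
    return chunks
```

Because a cut depends only on the last few bytes hashed, identical content appearing at different file offsets still chunks the same way – which is what makes dedup ratios like the ~90–95% figure achievable on similar data.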
Snapshots & Versioning
| Feature | NexFS | ext4 | Btrfs | F2FS | XFS | HAMMER2 | bcachefs | ZFS |
|---|---|---|---|---|---|---|---|---|
| Snapshots | TimeWarp – Merkle DAG, 128B cost | No (LVM needed) | Yes (CoW subvolumes) | No (checkpoint only) | No (LVM needed) | Yes (CoW PFS) | Yes (CoW snapshots) | Yes (CoW datasets) |
| Snapshot Cost | ~128 bytes (DAG node) | N/A | Cheap (CoW) | N/A | N/A | Cheap (CoW) | Cheap (CoW) | Cheap (CoW) |
| Atomic Rollback | Yes – single Root Register write | No | Yes (send/receive) | No | No | Yes | Yes | Yes (rollback) |
| Snapshot Diff | Native DAG diff | N/A | btrfs send --no-data | N/A | N/A | hammer2 diff | Planned | zfs diff |
| Chained History | Yes – snapshot→parent CID chain | N/A | Flat (independent) | N/A | N/A | Yes | Flat | Flat |
Verdict: All CoW filesystems do snapshots cheaply. NexFS's Merkle DAG approach gives it cryptographic provenance and chain-of-custody that the others lack – a snapshot isn't just a point-in-time; it's a verifiable commitment.
Compression
| Feature | NexFS | ext4 | Btrfs | F2FS | XFS | HAMMER2 | bcachefs | ZFS |
|---|---|---|---|---|---|---|---|---|
| Algorithms | ZSTD (1–22) + RLE | None | ZLIB, LZO, ZSTD | LZ4, ZSTD (Android 12+) | None | LZ4, ZLIB | LZ4, GZIP, ZSTD | LZ4, GZIP, ZSTD, LZ4HC |
| Granularity | Per-DAG-node + per-CAS-chunk | N/A | Per-file / per-subvolume | Per-file | N/A | Per-PFS | Per-file / per-subvolume | Per-dataset |
| Level Selection | Per-node and per-bucket | N/A | Per-subvolume | N/A | N/A | N/A | N/A | Per-dataset |
| Integrity | Double checksum (XXH3 + BLAKE3) | N/A | CRC32C on CoW | N/A | N/A | SHA256 | CRC32C | Fletcher-4 |
Verdict: NexFS v1.1.0 closes the compression gap. ZSTD levels 1–22 via vendored libzstd, with four granularity modes (per-bucket, per-DAG-node, per-CAS-chunk, per-DAG+chunk). Per-node level selection means ZSTD:2 on hot system paths and ZSTD:18 on cold archives – on the same volume. No other filesystem offers per-file compression-level control at the storage layer. Double-checksum integrity – an XXH3-64 checksum computed during the streaming compress gates each frame before decompression, and BLAKE3 verifies the content afterwards – catches bit flips before ZSTD ever touches the data. RLE is retained for the Core profile, where ZSTD's frame overhead is counterproductive on 512-byte blocks and radiation-induced bit flips can destroy entire compressed frames.
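The double-checksum ordering matters: the fast checksum guards the *compressed* frame so a flipped bit fails cheaply before the decompressor runs, and the strong hash verifies the *plaintext* after. A sketch of that pipeline, with stdlib stand-ins throughout – zlib for ZSTD, crc32 for XXH3-64, BLAKE2b for BLAKE3 (none of the real three ship in the Python standard library):

```python
import hashlib
import zlib

# Double-checksum compressed-chunk pipeline: fast checksum gates the frame
# before decompression; strong content hash verifies the plaintext after.

def compress_chunk(data: bytes, level: int = 6) -> dict:
    frame = zlib.compress(data, level)
    return {
        "frame": frame,
        "frame_crc": zlib.crc32(frame),     # fast pre-decompression gate
        "content_hash": hashlib.blake2b(data, digest_size=32).digest(),
    }

def decompress_chunk(rec: dict) -> bytes:
    if zlib.crc32(rec["frame"]) != rec["frame_crc"]:
        raise IOError("frame checksum mismatch (caught before decompress)")
    data = zlib.decompress(rec["frame"])
    if hashlib.blake2b(data, digest_size=32).digest() != rec["content_hash"]:
        raise IOError("content hash mismatch after decompress")
    return data
```

Per-node level selection is then just passing a different `level` per chunk – e.g. 2 for hot paths, 18 for cold archives – without changing the read path at all.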
Encryption
| Feature | NexFS | ext4 | Btrfs | F2FS | XFS | HAMMER2 | bcachefs | ZFS |
|---|---|---|---|---|---|---|---|---|
| At-Rest | Planned (XChaCha20-Poly1305) | fscrypt (per-file) | No (dm-crypt only) | fscrypt (per-file) | No (dm-crypt only) | No | ChaCha20/AES-256 (native) | Native (since OpenZFS 2.0) |
| Per-File | Planned | Yes (fscrypt) | No | Yes (fscrypt) | No | No | Yes | No (per-dataset) |
| Transit | Monolith-derived keys (mesh) | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
Verdict: bcachefs and ZFS ship native encryption; ext4 and F2FS leverage fscrypt. NexFS's transit encryption for mesh chunks is unique – no other filesystem encrypts data in flight as a filesystem concern. At-rest encryption is planned but not yet shipped.
Scalability & Limits
| Feature | NexFS | ext4 | Btrfs | F2FS | XFS | HAMMER2 | bcachefs | ZFS |
|---|---|---|---|---|---|---|---|---|
| Max Volume | 64 ZB (u64 @ 4KB) | 1 EiB | 16 EiB | 16 TB | 8 EiB | 1 EiB | 1 EiB+ | 256 ZiB (theoretical) |
| Max File | 64 ZB (u64 block_count) | 16 TiB | 16 EiB | 3.94 TiB | 8 EiB | 1 EiB | 1 EiB+ | 16 EiB |
| Max Files | 4 billion per volume (CAS CIDs extend across the mesh) | 4 billion | 2^64 | ~3.7M per section | 2^64 | 2^64 | 2^64 | 2^48 |
| RAID | Bucket parity (planned) | No (mdadm) | RAID 0/1/10/5/6 | No | No (mdadm) | Mirroring | RAID 0/1/10/5/6 | RAID-Z1/Z2/Z3, mirror |
The Homenode Reality Check:
A single Homenode runs 4×8 TB NVMe + 10×32 TB SAS = 352 TB. A Chapter of 1,000 Homenodes aggregates 352 PB. Planetwide at millions of nodes over a 15-year horizon reaches exabytes to zettabytes – and storage density doubles roughly every 2–3 years.
NexFS v4's u64 BlockAddr at 4 KB blocks gives 64 ZB per volume. That exceeds ext4 (1 EiB), Btrfs (16 EiB), XFS (8 EiB), HAMMER2 (1 EiB), and bcachefs (1 EiB+). Only ZFS's theoretical 256 ZiB ceiling is higher – but ZFS's practical deployments rarely exceed petabyte scale due to RAM demands (~1 GB base + ARC).
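The 64 ZB figure is direct arithmetic from the address width (strictly 64 ZiB in binary units: 2^64 block addresses × 2^12 bytes = 2^76 bytes):

```python
# Capacity implied by a u64 block address at 4 KiB blocks.
blocks = 2**64          # u64 BlockAddr
block_size = 4 * 1024   # 4 KiB per block
capacity = blocks * block_size
ZiB = 2**70
print(capacity // ZiB)  # → 64 (i.e. 64 ZiB per volume)
```

The same arithmetic explains the competitors' ceilings: ext4's 1 EiB and XFS's 8 EiB fall out of narrower effective address widths at their block sizes.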
Verdict: NexFS rivals or exceeds every filesystem on this list for single-volume addressable capacity. ZFS leads on multi-device RAID pooling; NexFS leads on range (MCU to Homenode), mesh aggregation, and overhead efficiency. F2FS at 16 TB max volume can't even hold a single modern NVMe drive.
Network & Distribution
| Feature | NexFS | ext4 | Btrfs | F2FS | XFS | HAMMER2 | bcachefs | ZFS |
|---|---|---|---|---|---|---|---|---|
| Native Replication | UTCP protocol – BLOCK_WANT/PUT, DAG_SYNC | No | btrfs send/receive (manual) | No | No | Cluster sync (multi-master) | Planned | zfs send/receive (manual) |
| Mesh / P2P | Yes – gossip, peer discovery, credits | No | No | No | No | LAN clustering | No | No |
| Delta Sync | CAS chunk diff (~5% for updates) | No | Incremental send | No | No | Yes (content-hash) | No | Incremental send |
| Incentive Layer | Kinetic Credits economy | No | No | No | No | No | No | No |
Verdict: This is where NexFS is in a category of its own. No other filesystem has native peer-to-peer mesh distribution with an economic incentive layer. HAMMER2 has clustering; ZFS/Btrfs have send/receive. None have content-defined delta sync built into the storage layer. A Chapter of 1,000 Homenodes at 352 PB aggregate storage replicates and deduplicates at the filesystem level – no external tooling required.
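The ~5% delta-sync figure is a direct consequence of content addressing: each side advertises the chunk CIDs it holds, and only chunks the peer lacks travel over the wire. The BLOCK_WANT/BLOCK_PUT message names come from the table above; everything else in this sketch (the dict store, `plan_sync`, BLAKE2b instead of BLAKE3) is invented for illustration:

```python
import hashlib

# CAS delta sync as a set difference over chunk CIDs: a peer's BLOCK_WANT
# is answered with BLOCK_PUT only for CIDs the peer does not already hold.

def cid(chunk: bytes) -> bytes:
    return hashlib.blake2b(chunk, digest_size=32).digest()

def plan_sync(local: dict[bytes, bytes], remote_cids: set[bytes]) -> list[bytes]:
    """Return the CIDs the remote peer is missing (what we'd BLOCK_PUT)."""
    return [c for c in local if c not in remote_cids]

old = [b"chunk-a", b"chunk-b", b"chunk-c"]
new = [b"chunk-a", b"chunk-B", b"chunk-c"]   # one chunk changed

local = {cid(c): c for c in new}              # our current version
remote = {cid(c) for c in old}                # peer holds the old version
to_send = plan_sync(local, remote)            # only the changed chunk
```

Combined with content-defined chunking, a small edit perturbs only a few CIDs, so `to_send` stays small regardless of file size – that mechanism at Chapter scale is what makes 352 PB of aggregate storage replicate without external tooling.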
Resource Footprint
| Feature | NexFS | ext4 | Btrfs | F2FS | XFS | HAMMER2 | bcachefs | ZFS |
|---|---|---|---|---|---|---|---|---|
| Min RAM | ~8.5 KB | ~8 MB | ~64 MB | ~32 MB | ~16 MB | ~64 MB | ~64 MB | 768 MB+ (1 GB recommended) |
| On-Disk Overhead | Superblock 256B, BAM ~0.4%/TB | Superblock 1 KB | Superblock 64 KB | Superblock 4 KB+ | Superblock 512B+ | Superblock varies | Superblock varies | Uberblock 128×1 KB |
| Binary Size | 40–200 KB (profile) | Kernel module (~300 KB) | Kernel module (~800 KB) | Kernel module (~400 KB) | Kernel module (~600 KB) | Kernel module (~500 KB) | Kernel module (~1 MB) | Kernel module (~2 MB+) |
| Allocator | None – caller-provided buffers | Kernel slab | Kernel slab | Kernel slab | Kernel slab | Kernel slab | Kernel slab | ARC + slab (RAM-hungry) |
Verdict: NexFS is three orders of magnitude lighter than anything else at minimum spec – 8.5 KB RAM, zero allocator, 40 KB binary – yet addresses 64 ZB. No other filesystem on this list spans that range. ZFS at the other extreme needs 768 MB+ just to mount; NexFS scales from MCU to Homenode without changing a line of code.
Maturity & Ecosystem
| Feature | NexFS | ext4 | Btrfs | F2FS | XFS | HAMMER2 | bcachefs | ZFS |
|---|---|---|---|---|---|---|---|---|
| Age | 2024 (format v5, v1.3.0) | 2008 (18 years) | 2009 (17 years) | 2012 (14 years) | 1993 (33 years) | 2017 (9 years) | 2015/2024 mainline | 2005 (21 years) |
| Kernel Support | Userland (C-FFI) | Linux native | Linux native | Linux native | Linux, IRIX | DragonFlyBSD only | Linux 6.7+ | Linux (DKMS), FreeBSD, illumos |
| POSIX | No (intentional) | Full | Full | Full | Full | Full | Full | Full |
| Production Use | Pre-production (375+ tests) | Billions of devices | SUSE, Facebook | Samsung, Google (Android) | RHEL default, SGI | DragonFlyBSD default | Early adopters | Enterprise (Oracle, Proxmox, TrueNAS) |
Verdict: This is the honest gap. NexFS is pre-production with zero fleet deployments. ext4 has 18 years of battle scars. ZFS has enterprise pedigree. Architectural superiority doesn't substitute for operational miles. No POSIX compliance is intentional – NexFS exposes a graph API, not a tree API – but it means no drop-in replacement for existing stacks.
Summary Matrix – Where Each Wins
| Dimension | Winner | Runner-up |
|---|---|---|
| Data Integrity | ZFS | NexFS, bcachefs |
| Superblock Resilience | NexFS (scattered replicas) | ZFS (uberblock ring) |
| Content Addressing | NexFS | HAMMER2 |
| Deduplication | NexFS (inline CDC) | HAMMER2 |
| Snapshots | Tie: ZFS, Btrfs, NexFS | bcachefs |
| Compression | NexFS (ZSTD + per-node levels + double checksum) | ZFS, bcachefs |
| Encryption | bcachefs | ZFS |
| Raw Flash | NexFS | F2FS (FTL-aware) |
| Network / Mesh | NexFS (only contender) | HAMMER2 (LAN cluster) |
| Resource Efficiency | NexFS (8.5 KB RAM) | F2FS |
| Volume Capacity | ZFS (256 ZiB theoretical) | NexFS (64 ZB) |
| Operational Range | NexFS (MCU → Homenode → mesh) | — |
| Throughput (large files) | XFS | ZFS |
| Maturity / Trust | ext4, ZFS | XFS |
| RAID | ZFS (RAID-Z) | Btrfs, bcachefs |
| General Purpose | ext4 | XFS |
The Honest Take
NexFS v4 addresses 64 ZB per volume – more than ext4, XFS, Btrfs, HAMMER2, or bcachefs. It runs on 8.5 KB RAM where those filesystems can't even load. It replicates across a mesh where those filesystems need external tools. The operational range – MCU to 352 TB Homenode to 352 PB Chapter to planetwide mesh – is unmatched by any single filesystem.
Where NexFS leads outright:
- Flash-native without FTL – the only filesystem that operates directly on raw NAND/NOR/SPI
- Content-addressed mesh distribution – no other filesystem is its own BitTorrent with an incentive economy
- Operational range – 8.5 KB RAM / 40 KB binary to 64 ZB volumes, one codebase
- Cryptographic provenance – Merkle DAG snapshots are verifiable commitments, not CoW pointers
- Zero-trust by construction – every byte hash-verified, every chunk content-addressed
- Future-proof addressing – u64 BlockAddr at 4 KB blocks covers 64 ZB, enough for 15+ years of storage density growth
Where NexFS has honest gaps:
- Maturity – pre-production with 375+ tests and runtime feature flags (v1.3.0), but zero fleet deployments. ext4 has 18 years; ZFS has enterprise pedigree. v1.3.0's tune2fs-style runtime tuning and scattered superblock replicas close the ops-readiness gap
- Compression – closed in v1.1.0: ZSTD 1–22 with per-node level selection and double-checksum integrity
- At-rest encryption – planned (XChaCha20-Poly1305) but not shipped. bcachefs and ZFS have it now
- RAID – single-volume only. Bucket parity planned, but ZFS RAID-Z is battle-tested
- POSIX – intentionally absent. Graph API, not tree API. No drop-in replacement for existing stacks
- Tooling – built-in scrub and runtime nexfs_tune(), but no equivalent of the full e2fsprogs or zdb suite
NexFS is honest about what it is: a v4 filesystem with genuine innovation across a range no single competitor covers. The architecture is there. The scale is there. The miles are next.