
NexFS vs. The Field

Comprehensive Comparison

NexFS is a clean-sheet filesystem that covers a range no single competitor matches – from a microcontroller with 8.5 KB of RAM to a 352 TB Homenode to a planetwide sovereign mesh. Since v4's u64 address widening, a single NexFS volume addresses 64 ZB – exceeding ext4, XFS, Btrfs, and HAMMER2 in raw addressable capacity – while remaining the only filesystem here that runs on bare metal without an OS.

This page is a thorough, honest comparison across every axis that matters.

Lineage & Design Philosophy

| Filesystem | Lineage | Primary Target | Design Philosophy |
|---|---|---|---|
| NexFS | Clean-sheet (2024), Zig | MCU → Homenode → sovereign mesh | Content-addressed, flash-native, zero-trust, zero-dependency, u64 scale |
| ext4 | ext2→ext3→ext4 (2008), C | General-purpose block devices | Conservative evolution; backward-compatible; "good enough" |
| Btrfs | Oracle (2009), C | Linux desktop/server | CoW B-tree; feature-rich; Linux-native ZFS alternative |
| F2FS | Samsung (2012), C | FTL-based flash (eMMC, SSD) | Log-structured; FTL-aware; mobile-optimised |
| XFS | SGI (1993→Linux 2001), C | Large files, high throughput | Allocation groups; parallel I/O; metadata journaling |
| HAMMER2 | DragonFlyBSD (2017), C | DragonFlyBSD clusters | CoW B-tree; content-hash dedup; multi-volume clustering |
| bcachefs | Kent Overstreet (2015→mainline 2024), C | Linux next-gen | bcache lineage; CoW B-tree; "XFS + Btrfs done right" |
| ZFS | Sun (2005→OpenZFS), C | Enterprise storage | Pooled storage; end-to-end checksums; industrial-grade CoW |

Storage Model

| | NexFS | ext4 | Btrfs | F2FS | XFS | HAMMER2 | bcachefs | ZFS |
|---|---|---|---|---|---|---|---|---|
| Allocation | BAM + wear-leveled blocks | Bitmap + extents | CoW B-tree | Log-structured segments | B+tree + allocation groups | CoW B-tree + freemap | CoW B-tree + buckets | Slab + metaslab |
| Copy-on-Write | Yes (deep clone) | No | Yes (native) | No (log-structured, not CoW) | No (reflink only, XFS 5.1+) | Yes (native) | Yes (native) | Yes (native) |
| Block Size | 512B–64KB (configurable) | 1–4KB | 4KB–64KB (nodesize) | 4KB | 512B–64KB | 64KB default | 512B–64KB | 128KB default (recordsize) |
| Extent-based | Yes (inline + overflow) | Yes (since ext4) | Yes (via B-tree) | Yes (multi-level index) | Yes (B+tree extents) | Yes (via B-tree) | Yes (via B-tree) | No (indirect block pointer tree) |
| Flash-Aware | Native – raw NAND/NOR/SPI | No – relies on FTL | No – relies on FTL | FTL-aware – respects erase boundaries | No | No | No | No |

Verdict: NexFS and F2FS are the only two designed with flash in mind – but they solve fundamentally different problems. F2FS works with the FTL; NexFS replaces it. On block devices, NexFS's BAM allocator and CoW model work identically – flash-awareness is additive, not exclusive.
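To make the FTL distinction concrete, here is a minimal Python sketch of one job a flash-native filesystem takes on itself that an FTL normally hides: tracking per-erase-block wear and steering writes to the least-worn free block. The types and policy below are illustrative only, not NexFS internals.

```python
from dataclasses import dataclass

@dataclass
class EraseBlock:
    index: int
    erase_count: int = 0   # how many times this physical block has been erased
    free: bool = True

def pick_block(blocks: list[EraseBlock]) -> EraseBlock:
    """Wear-leveling policy: write to the free block erased the fewest times."""
    candidates = [b for b in blocks if b.free]
    if not candidates:
        raise RuntimeError("no free erase blocks: garbage collection needed")
    return min(candidates, key=lambda b: b.erase_count)

# A worn device: block 1 has seen the least wear, so it is chosen next.
blocks = [EraseBlock(0, erase_count=12), EraseBlock(1, erase_count=3),
          EraseBlock(2, erase_count=7)]
print(pick_block(blocks).index)  # -> 1
```

An FTL performs this same bookkeeping invisibly behind logical block addresses; doing it in the filesystem is what lets NexFS run on raw NAND/NOR/SPI with no translation layer underneath.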


Integrity & Checksumming

| | NexFS | ext4 | Btrfs | F2FS | XFS | HAMMER2 | bcachefs | ZFS |
|---|---|---|---|---|---|---|---|---|
| Data Checksums | BLAKE3-256 (CAS inherent) | No (metadata only) | CRC32C (default) | No (metadata CRC32 only) | No (metadata-only CRC32C, v5) | SHA256/XXH64 | CRC32C/XXH64/Poly1305 | Fletcher-4 / SHA-256 |
| Metadata Checksums | BLAKE3-256/128 or XXH3-64 | CRC32C (since 2012) | CRC32C | CRC32 | CRC32C | SHA256/XXH64 | CRC32C/XXH64 | Fletcher-4 / SHA-256 |
| Self-Healing | Dual + scattered SB fallback (up to 10 replicas) | No | Yes (with RAID) | No | No | Yes (with mirrors) | Yes (with replication) | Yes (with redundancy) |
| Scrub Support | Yes (full-volume sweep) | No | Yes | No | Yes (xfs_scrub, 5.x+) | Yes | Yes | Yes |
| End-to-End Verification | Yes – every read verified | No | Partial (CoW path only) | No | No | Yes | Yes | Yes |

Verdict: NexFS, ZFS, bcachefs, and HAMMER2 are the only ones with true end-to-end data integrity. ext4 and F2FS are shockingly bare here – silent corruption is undetectable. NexFS uses BLAKE3 – cryptographically strong, not just error-detection CRC.
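The "CAS inherent" checksum model can be sketched in a few lines: because a chunk's ID is the hash of its bytes, every read can be re-hashed and compared with no extra bookkeeping. This sketch uses `hashlib.blake2b` as a stand-in for BLAKE3 (not in the Python standard library); the store and function names are invented for illustration.

```python
import hashlib

def cid(chunk: bytes) -> bytes:
    """Content ID: the hash of the chunk *is* its address."""
    return hashlib.blake2b(chunk, digest_size=32).digest()

store: dict[bytes, bytes] = {}

def put(chunk: bytes) -> bytes:
    key = cid(chunk)
    store[key] = chunk
    return key

def get(key: bytes) -> bytes:
    chunk = store[key]
    if cid(chunk) != key:            # every read is verified against its address
        raise IOError("corruption detected: hash mismatch")
    return chunk

key = put(b"hello nexfs")
assert get(key) == b"hello nexfs"

store[key] = b"hellp nexfs"          # simulate a silent bit flip on media
try:
    get(key)
except IOError as err:
    print(err)                       # corruption is caught, never returned
```

This is why silent corruption is structurally impossible to return to a caller in a content-addressed store, whereas a filesystem without data checksums simply hands back whatever the disk produced.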


Deduplication & Content Addressing

| | NexFS | ext4 | Btrfs | F2FS | XFS | HAMMER2 | bcachefs | ZFS |
|---|---|---|---|---|---|---|---|---|
| Dedup Type | Inline CAS (write-time) | None | Offline (slow, RAM-hungry) | None | None (reflink only) | Inline (content-hash) | Background (planned) | Inline or offline (RAM-hungry) |
| Content-Addressed | Yes – CID = BLAKE3(chunk) | No | No | No | No | Yes – content-hash | No | No (dedup uses DDT, not CAS) |
| Content-Defined Chunking | FastCDC (16–64–256 KB) | No | No | No | No | No (64KB fixed) | No | No (fixed recordsize) |
| Dedup Efficiency | ~90–95% for similar data | N/A | ~50–80% (post-process) | N/A | N/A | Good (fixed chunk) | TBD | Good but RAM-expensive (~320B/block) |

Verdict: NexFS and HAMMER2 are the only two with native content-addressed inline dedup. ZFS dedup exists but is infamously RAM-hungry (5+ GB per TB). Btrfs dedup is a painful afterthought. NexFS's FastCDC variable chunking is strictly superior to HAMMER2's fixed 64KB chunks for delta efficiency. On a 352 TB Homenode, inline CAS dedup at write-time means storage efficiency scales without operational overhead.
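A toy content-defined chunker shows why variable chunking beats fixed 64KB blocks for delta efficiency: boundaries follow content, so an insertion early in a file only reshapes the chunks it touches. This is a simplified gear-hash sketch in the spirit of FastCDC – the real algorithm adds normalized chunking and tuned masks – and the size constants here are illustrative, not the 16–64–256 KB profile from the table.

```python
import hashlib

MIN, AVG_MASK, MAX = 2048, (1 << 13) - 1, 65536   # ~8 KB average chunks

# Deterministic 64-bit "gear" table, one entry per byte value.
GEAR = [int.from_bytes(hashlib.sha256(bytes([i])).digest()[:8], "big")
        for i in range(256)]

def chunks(data: bytes):
    """Yield chunks whose boundaries depend on content, not on offsets."""
    start, h = 0, 0
    for i, byte in enumerate(data):
        h = ((h << 1) + GEAR[byte]) & 0xFFFFFFFFFFFFFFFF
        length = i - start + 1
        # Cut when the rolling hash hits the mask (past MIN), or at MAX.
        if (length >= MIN and (h & AVG_MASK) == 0) or length >= MAX:
            yield data[start:i + 1]
            start, h = i + 1, 0
    if start < len(data):
        yield data[start:]                         # trailing partial chunk

data = bytes(range(256)) * 512                     # 128 KB of sample data
parts = list(chunks(data))
assert b"".join(parts) == data                     # chunking is lossless
```

With fixed-size chunks, one inserted byte shifts every subsequent boundary and defeats dedup for the rest of the file; content-defined boundaries resynchronize within a chunk or two.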


Snapshots & Versioning

| | NexFS | ext4 | Btrfs | F2FS | XFS | HAMMER2 | bcachefs | ZFS |
|---|---|---|---|---|---|---|---|---|
| Snapshots | TimeWarp – Merkle DAG, 128B cost | No (LVM needed) | Yes (CoW subvolumes) | No (checkpoint only) | No (LVM needed) | Yes (CoW PFS) | Yes (CoW snapshots) | Yes (CoW datasets) |
| Snapshot Cost | ~128 bytes (DAG node) | N/A | Cheap (CoW) | N/A | N/A | Cheap (CoW) | Cheap (CoW) | Cheap (CoW) |
| Atomic Rollback | Yes – single Root Register write | No | Yes (subvolume set-default) | No | No | Yes | Yes | Yes (rollback) |
| Snapshot Diff | Native DAG diff | N/A | btrfs send --no-data | N/A | N/A | hammer2 diff | Planned | zfs diff |
| Chained History | Yes – snapshot→parent CID chain | N/A | Flat (independent) | N/A | N/A | Yes | Flat | Flat |

Verdict: All CoW filesystems do snapshots cheaply. NexFS's Merkle DAG approach gives it cryptographic provenance and chain-of-custody that the others lack – a snapshot isn't just a point-in-time; it's a verifiable commitment.


Compression

| | NexFS | ext4 | Btrfs | F2FS | XFS | HAMMER2 | bcachefs | ZFS |
|---|---|---|---|---|---|---|---|---|
| Algorithms | ZSTD (1–22) + RLE | None | ZLIB, LZO, ZSTD | LZ4, ZSTD (Android 12+) | None | LZ4, ZLIB | LZ4, GZIP, ZSTD | LZJB, LZ4, GZIP, ZSTD |
| Granularity | Per-DAG-node + per-CAS-chunk | N/A | Per-file / per-subvolume | Per-file | N/A | Per-PFS | Per-file / per-subvolume | Per-dataset |
| Level Selection | Per-node and per-bucket | N/A | Per-subvolume | N/A | N/A | N/A | N/A | Per-dataset |
| Integrity | Double checksum (XXH3 + BLAKE3) | N/A | CRC32C on CoW | N/A | N/A | SHA256 | CRC32C | Fletcher-4 |

Verdict: NexFS v1.1.0 closes the compression gap. ZSTD levels 1–22 via vendored libzstd, with four granularity modes (per-bucket, per-DAG-node, per-CAS-chunk, per-DAG+chunk). Per-node level selection means ZSTD:2 on hot system paths and ZSTD:18 on cold archives – on the same volume. No other filesystem offers per-file compression level control at the storage layer. Double-checksum integrity (an XXH3-64 gate computed during streaming compression and checked before decompression, plus a BLAKE3 verify of the decompressed data) catches bit flips before ZSTD ever touches the frame. RLE is retained for the Core profile, where ZSTD's frame overhead is counterproductive on 512-byte blocks and radiation-induced bit flips can destroy entire compressed frames.
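The double-checksum pipeline can be sketched with stand-in primitives: `zlib` for ZSTD, CRC32 for XXH3-64, and `blake2b` for BLAKE3. The point is the ordering – a cheap checksum gates the compressed frame before the codec runs, and a strong hash verifies the recovered plaintext afterwards. Record layout and function names are invented for illustration.

```python
import hashlib
import zlib

def compress_chunk(plain: bytes) -> dict:
    comp = zlib.compress(plain, level=6)
    return {
        "data": comp,
        "fast": zlib.crc32(comp),                          # pre-decompression gate
        "strong": hashlib.blake2b(plain, digest_size=32).digest(),
    }

def decompress_chunk(rec: dict) -> bytes:
    if zlib.crc32(rec["data"]) != rec["fast"]:
        raise IOError("frame corrupt: refusing to decompress")
    plain = zlib.decompress(rec["data"])
    if hashlib.blake2b(plain, digest_size=32).digest() != rec["strong"]:
        raise IOError("plaintext mismatch after decompression")
    return plain

rec = compress_chunk(b"cold archive data " * 100)
assert decompress_chunk(rec) == b"cold archive data " * 100

rec["data"] = b"\x00" + rec["data"][1:]                    # simulate a bit flip
try:
    decompress_chunk(rec)
except IOError as err:
    print(err)   # caught before the codec ever sees the damaged frame
```

Gating before decompression matters because compressed formats amplify damage: one flipped bit can corrupt or crash the decode of an entire frame, so the codec should never be fed bytes that fail the fast check.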


Encryption

| | NexFS | ext4 | Btrfs | F2FS | XFS | HAMMER2 | bcachefs | ZFS |
|---|---|---|---|---|---|---|---|---|
| At-Rest | Planned (XChaCha20-Poly1305) | fscrypt (per-file) | No (dm-crypt only) | fscrypt (per-file) | No (dm-crypt only) | No | ChaCha20/AES-256 (native) | Native (since OpenZFS 2.0) |
| Per-File | Planned | Yes (fscrypt) | No | Yes (fscrypt) | No | No | Yes | No (per-dataset) |
| Transit | Monolith-derived keys (mesh) | N/A | N/A | N/A | N/A | N/A | N/A | N/A |

Verdict: bcachefs and ZFS have native encryption. ext4/F2FS leverage fscrypt. NexFS's transit encryption for mesh chunks is unique – no other filesystem encrypts data in flight as a filesystem concern. At-rest is planned.


Scalability & Limits

| | NexFS | ext4 | Btrfs | F2FS | XFS | HAMMER2 | bcachefs | ZFS |
|---|---|---|---|---|---|---|---|---|
| Max Volume | 64 ZB (u64 @ 4KB) | 1 EiB | 16 EiB | 16 TB | 8 EiB | 1 EiB | 1 EiB+ | 256 ZiB (theoretical) |
| Max File | 64 ZB (u64 block_count) | 16 TiB | 16 EiB | 3.94 TiB | 8 EiB | 1 EiB | 1 EiB+ | 16 EiB |
| Max Files | 4B per volume (CAS CIDs across mesh) | 4 billion | 2^64 | ~3.7M/section | 2^64 | 2^64 | 2^64 | 2^48 |
| RAID | Bucket parity (planned) | No (mdadm) | RAID 0/1/10/5/6 | No | No (mdadm) | Mirroring | RAID 0/1/10/5/6 | RAID-Z1/Z2/Z3, mirror |

The Homenode Reality Check:

A single Homenode runs 4×8 TB NVMe + 10×32 TB SAS = 352 TB. A Chapter of 1,000 Homenodes aggregates 352 PB. Planetwide at millions of nodes over a 15-year horizon reaches exabytes to zettabytes – and storage density doubles roughly every 2–3 years.

NexFS v4's u64 BlockAddr at 4 KB blocks gives 64 ZB per volume. That exceeds ext4 (1 EiB), Btrfs (16 EiB), XFS (8 EiB), HAMMER2 (1 EiB), and bcachefs (1 EiB+). Only ZFS's theoretical 256 ZiB ceiling is higher – but ZFS's practical deployments rarely exceed petabyte scale due to RAM demands (~1 GB base + ARC).
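The figures above check out arithmetically – with the caveat that "64 ZB" is the binary quantity: 2^64 blocks of 4 KiB is 2^76 bytes, which is exactly 64 ZiB (about 75.6 decimal ZB).

```python
BLOCK = 4096                     # 4 KiB blocks
addressable = 2**64 * BLOCK      # u64 BlockAddr -> addressable bytes

ZiB = 2**70
assert addressable == 2**76
print(addressable // ZiB)        # -> 64  (64 ZiB per volume)

# Homenode and Chapter aggregates quoted in the text:
homenode_tb = 4 * 8 + 10 * 32    # 4x8 TB NVMe + 10x32 TB SAS
assert homenode_tb == 352        # 352 TB per Homenode
assert 1_000 * homenode_tb == 352_000   # 352 PB per 1,000-node Chapter
```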

Verdict: NexFS rivals or exceeds every filesystem on this list for single-volume addressable capacity. ZFS leads on multi-device RAID pooling; NexFS leads on range (MCU to Homenode), mesh aggregation, and overhead efficiency. F2FS at 16 TB max volume can't even hold a single modern NVMe drive.


Network & Distribution

| | NexFS | ext4 | Btrfs | F2FS | XFS | HAMMER2 | bcachefs | ZFS |
|---|---|---|---|---|---|---|---|---|
| Native Replication | UTCP protocol – BLOCK_WANT/PUT, DAG_SYNC | No | btrfs send/receive (manual) | No | No | Cluster sync (multi-master) | Planned | zfs send/receive (manual) |
| Mesh / P2P | Yes – gossip, peer discovery, credits | No | No | No | No | LAN clustering | No | No |
| Delta Sync | CAS chunk diff (~5% for updates) | No | Incremental send | No | No | Yes (content-hash) | No | Incremental send |
| Incentive Layer | Kinetic Credits economy | No | No | No | No | No | No | No |

Verdict: This is where NexFS is in a category of its own. No other filesystem has native peer-to-peer mesh distribution with an economic incentive layer. HAMMER2 has clustering; ZFS/Btrfs have send/receive. None have content-defined delta sync built into the storage layer. A Chapter of 1,000 Homenodes at 352 PB aggregate storage replicates and deduplicates at the filesystem level – no external tooling required.
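The delta-sync mechanics reduce to set arithmetic over chunk CIDs, sketched below. The BLOCK_WANT/BLOCK_PUT names come from the table above; everything else (wire format, whole-chunk inputs instead of CDC output) is simplified for illustration, with `hashlib.blake2b` standing in for BLAKE3.

```python
import hashlib

def cid(chunk: bytes) -> str:
    return hashlib.blake2b(chunk, digest_size=32).hexdigest()

# A peer holds the old version; a file is updated elsewhere in the mesh.
old = [b"header", b"chapter one", b"chapter two"]
new = [b"header", b"chapter one (edited)", b"chapter two"]

have = {cid(c) for c in old}                    # peer advertises what it has
want = [c for c in new if cid(c) not in have]   # BLOCK_WANT: only the delta
# BLOCK_PUT then ships just those chunks over the wire.

assert want == [b"chapter one (edited)"]
print(len(want), "of", len(new), "chunks transferred")  # -> 1 of 3
```

Because chunk identity is content-derived, this diff requires no per-peer state or generation numbers – any two replicas can compute the minimal transfer set from CID sets alone.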


Resource Footprint

| | NexFS | ext4 | Btrfs | F2FS | XFS | HAMMER2 | bcachefs | ZFS |
|---|---|---|---|---|---|---|---|---|
| Min RAM | ~8.5 KB | ~8 MB | ~64 MB | ~32 MB | ~16 MB | ~64 MB | ~64 MB | 768 MB+ (1 GB recommended) |
| On-Disk Overhead | Superblock 256B, BAM ~0.4%/TB | Superblock 1 KB | Superblock 64 KB | Superblock 4 KB+ | Superblock 512B+ | Superblock varies | Superblock varies | Uberblock ring 128×1 KB |
| Binary Size | 40–200 KB (profile-dependent) | Kernel module (~300 KB) | Kernel module (~800 KB) | Kernel module (~400 KB) | Kernel module (~600 KB) | Kernel module (~500 KB) | Kernel module (~1 MB) | Kernel module (~2 MB+) |
| Allocator | None – caller-provided buffers | Kernel slab | Kernel slab | Kernel slab | Kernel slab | Kernel slab | Kernel slab | ARC + slab (RAM-hungry) |

Verdict: NexFS is three orders of magnitude lighter than anything else at minimum spec – 8.5 KB RAM, zero allocator, 40 KB binary – yet addresses 64 ZB. No other filesystem on this list spans that range. ZFS at the other extreme needs 768 MB+ just to mount; NexFS scales from MCU to Homenode without changing a line of code.


Maturity & Ecosystem

| | NexFS | ext4 | Btrfs | F2FS | XFS | HAMMER2 | bcachefs | ZFS |
|---|---|---|---|---|---|---|---|---|
| Age | 2024 (format v5, v1.3.0) | 2008 (18 years) | 2009 (17 years) | 2012 (14 years) | 1993 (33 years) | 2017 (9 years) | 2015 (mainline 2024) | 2005 (21 years) |
| Kernel Support | Userland (C-FFI) | Linux native | Linux native | Linux native | Linux, IRIX | DragonFlyBSD only | Linux 6.7+ | Linux (DKMS), FreeBSD, illumos |
| POSIX | No (intentional) | Full | Full | Full | Full | Full | Full | Full |
| Production Use | Pre-production (375+ tests) | Billions of devices | SUSE, Facebook | Samsung, Google (Android) | RHEL default, SGI | DragonFlyBSD default | Early adopters | Enterprise (Oracle, Proxmox, TrueNAS) |

Verdict: This is the honest gap. NexFS is pre-production with zero fleet deployments. ext4 has 18 years of battle scars. ZFS has enterprise pedigree. Architectural superiority doesn't substitute for operational miles. No POSIX compliance is intentional – NexFS exposes a graph API, not a tree API – but it means no drop-in replacement for existing stacks.


Summary Matrix – Where Each Wins

| Dimension | Winner | Runner-up |
|---|---|---|
| Data Integrity | ZFS | NexFS, bcachefs |
| Superblock Resilience | NexFS (scattered replicas) | ZFS (uberblock ring) |
| Content Addressing | NexFS | HAMMER2 |
| Deduplication | NexFS (inline CDC) | HAMMER2 |
| Snapshots | Tie: ZFS, Btrfs, NexFS | bcachefs |
| Compression | NexFS (ZSTD + per-node levels + double checksum) | ZFS, bcachefs |
| Encryption | bcachefs | ZFS |
| Raw Flash | NexFS | F2FS (FTL-aware) |
| Network / Mesh | NexFS (only contender) | HAMMER2 (LAN cluster) |
| Resource Efficiency | NexFS (8.5 KB RAM) | F2FS |
| Volume Capacity | ZFS (256 ZiB theoretical) | NexFS (64 ZB) |
| Operational Range | NexFS (MCU → Homenode → mesh) | — |
| Throughput (large files) | XFS | ZFS |
| Maturity / Trust | ext4, ZFS | XFS |
| RAID | ZFS (RAID-Z) | Btrfs, bcachefs |
| General Purpose | ext4 | XFS |

The Honest Take

NexFS v4 addresses 64 ZB per volume – more than ext4, XFS, Btrfs, HAMMER2, or bcachefs. It runs on 8.5 KB RAM where those filesystems can't even load. It replicates across a mesh where those filesystems need external tools. The operational range – MCU to 352 TB Homenode to 352 PB Chapter to planetwide mesh – is unmatched by any single filesystem.

Where NexFS leads outright:

  1. Flash-native without FTL – the only filesystem that operates directly on raw NAND/NOR/SPI
  2. Content-addressed mesh distribution – no other filesystem is its own BitTorrent with an incentive economy
  3. Operational range – 8.5 KB RAM / 40 KB binary to 64 ZB volumes, one codebase
  4. Cryptographic provenance – Merkle DAG snapshots are verifiable commitments, not CoW pointers
  5. Zero-trust by construction – every byte hash-verified, every chunk content-addressed
  6. Future-proof addressing – u64 BlockAddr at 4 KB blocks covers 64 ZB, enough for 15+ years of storage density growth

Where NexFS has honest gaps:

  1. Maturity – pre-production with 375+ tests and runtime feature flags (v1.3.0), but zero fleet deployments. ext4 has 18 years; ZFS has enterprise pedigree. v1.3.0's tune2fs-style runtime tuning and scattered superblock replicas close the ops-readiness gap
  2. Compression – closed in v1.1.0. ZSTD 1–22 with per-node level selection and double-checksum integrity
  3. At-rest encryption – planned (XChaCha20-Poly1305) but not shipped. bcachefs and ZFS have it now
  4. RAID – single-volume only. Bucket parity planned, but ZFS RAID-Z is battle-tested
  5. POSIX – intentionally absent. Graph API, not tree API. No drop-in replacement for existing stacks
  6. Tooling – built-in scrub and runtime nexfs_tune(), but no equivalent of the full e2fsprogs or zdb suites

NexFS is honest about what it is: a v4 filesystem with genuine innovation across a range no single competitor covers. The architecture is there. The scale is there. The miles are next.