An immutable appliance. ZFS-native. GitOps-driven. Deploy your entire NAS stack from a state.yaml and version-control everything.
Traditional NAS operating systems rely on manual configuration (colloquially known as "ClickOps") and brittle package managers. When you click a button in their UI, a backend script modifies an underlying system file and updates a local database. This state is fragile. If the boot drive dies, your configuration dies with it. If a package manager fails mid-update, the system is left in an unbootable, indeterminate state. DPlaneOS was built to fix this by bringing hyperscale engineering (immutability, GitOps, and mathematical consensus) to edge environments and homelabs, packaged in a UI that anyone can use.
Traditional package managers (apt, dpkg) leave remnant files, and major version upgrades frequently fail or cause kernel panics. DPlaneOS takes a different path: the entire configuration lives in a single state.yaml file. The truth lives in the repository, not the runtime; if the OS dies, your cluster redeploys itself from Git. There is no "Save" button that immediately runs bash scripts in the background. Instead, the UI modifies the structured state.yaml, and the daemon constantly diffs this desired state against the physical reality of the Linux kernel, ZFS, and Docker, forcing the system to match your declaration.
DPlaneOS enforces a strict declarative model. Every ZFS pool, SMB share, user permission, and Docker Compose stack is written to a single YAML file. The UI is just a friendly steering wheel for Git.
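As a sketch of what such a declarative file can look like (the field names below are illustrative, not the actual DPlaneOS schema):

```yaml
# Hypothetical state.yaml sketch -- field names are illustrative only.
pools:
  - name: tank
    guid: "9034210593762041853"   # pools tracked by GUID, not device name
    topology: raidz2
shares:
  - name: media
    dataset: tank/media
    protocol: smb
    time_machine: true
stacks:
  - name: jellyfin
    compose: stacks/jellyfin/docker-compose.yml
users:
  - name: alice
    roles: [storage-admin]
```

Because the whole file is plain YAML, it diffs cleanly in Git and every change to the NAS becomes a reviewable commit.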
The reconciler guarantees idempotency: applying the exact same state twice produces zero operations. Formal checks of this property are baked directly into the CI/CD pipeline that builds the binary.
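The idempotency guarantee falls out of the diff-then-apply structure. A minimal sketch (names are illustrative, not the DPlaneOS internals):

```python
# Minimal sketch of an idempotent reconciler: it diffs desired state
# against observed state and emits only the operations needed to converge.

def diff(desired: dict, actual: dict) -> list:
    """Return the operations needed to make `actual` match `desired`."""
    ops = []
    for key, want in desired.items():
        have = actual.get(key)
        if have is None:
            ops.append(("create", key, want))
        elif have != want:
            ops.append(("update", key, want))
    for key in actual:
        if key not in desired:
            ops.append(("destroy", key))
    return ops

def apply(ops: list, actual: dict) -> dict:
    """Apply operations, returning the new observed state."""
    state = dict(actual)
    for op in ops:
        if op[0] == "destroy":
            state.pop(op[1], None)
        else:
            state[op[1]] = op[2]
    return state

desired = {"tank/media": {"quota": "2T"}, "tank/backups": {"quota": "8T"}}
actual = {"tank/media": {"quota": "1T"}}

first = diff(desired, actual)      # one update, one create
converged = apply(first, actual)
second = diff(desired, converged)  # [] -- applying the same state again is a no-op
```

Once the observed state matches the declaration, the diff is empty, so a second apply performs zero operations by construction.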
Pools and disks are tracked and imported via immutable GUIDs, not fragile device names like /dev/sda. The engine performs ambiguity detection and automatically blocks reconciliation if a degenerate state is found (for example, zero or multiple candidates matching a declared identifier).
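The shape of that check can be sketched as follows (error and field names are hypothetical):

```python
# Sketch of GUID-based pool tracking with ambiguity detection: if a scan
# finds zero or multiple candidates for a declared GUID, reconciliation
# is refused rather than guessed at. Illustrative only.

class AmbiguousStateError(Exception):
    pass

def resolve_pool(declared_guid: str, scanned: list) -> dict:
    matches = [p for p in scanned if p["guid"] == declared_guid]
    if len(matches) != 1:
        # Degenerate state: block reconciliation instead of picking one.
        raise AmbiguousStateError(
            f"expected exactly 1 pool with guid {declared_guid}, "
            f"found {len(matches)}")
    return matches[0]

scan = [
    {"guid": "111", "name": "tank", "dev": "/dev/disk/by-id/ata-A"},
    {"guid": "222", "name": "tank", "dev": "/dev/disk/by-id/ata-B"},  # same name!
]
pool = resolve_pool("111", scan)   # unambiguous by GUID despite duplicate names

try:
    resolve_pool("999", scan)      # no such pool: reconciliation is blocked
    blocked = False
except AmbiguousStateError:
    blocked = True
```

Note that both scanned pools share the name "tank"; matching by GUID resolves them cleanly where a name-based lookup would be ambiguous.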
DPlaneOS is built on a strict 3-tier architecture. It explicitly decouples the mutable control plane from the immutable operating system, ensuring OS rollbacks never destroy your configuration data.
The top layer is fully dynamic. It houses the Go Daemon, the React UI, the GitOps state.yaml, and the Patroni/etcd consensus ring. When you interact with DPlaneOS via the API or Web UI, you are mutating the state within this layer. It is the brain that instructs the layers below.
The middle layer is the operating system itself. DPlaneOS utilizes NixOS to build a mathematically reproducible environment. The Go Daemon passes a pure JSON file down to this layer. NixOS evaluates the JSON, builds a completely new, read-only system generation, and swaps to it. If the build fails, or the network drops, the system watchdog instantly reverts to the previous generation.
The bottom layer is where your actual data lives. This includes your ZFS storage pools, Docker container volumes, and the PostgreSQL database that holds the cryptographic audit logs. Because this layer is strictly decoupled from Tier 2, an OS rollback or a failed NixOS build will never touch or corrupt your persistent storage.
Understand the failover logic and consensus mechanisms that protect your data.
Many systems generate Nix expressions by string templating, where a single malformed value can corrupt the .nix file, bricking the rebuild process. DPlaneOS instead hands NixOS pure JSON, parsed with builtins.fromJSON. There is zero dynamic Nix syntax generated at runtime, completely eliminating the risk of syntax injection crashes.
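The principle is easy to demonstrate in miniature: a value spliced into a template can break the surrounding syntax, while a structured serializer escapes it as data. A sketch (the hostile value is invented for illustration):

```python
# Why structured serialization beats string templating: a hostile or
# merely unusual value cannot break the syntax of the document it is
# embedded in. The real bridge hands JSON to Nix's builtins.fromJSON.
import json

hostname = 'nas"; rm -rf / #'   # a value that would wreck a naive template

# Naive templating: the value is spliced straight into syntax it can corrupt.
templated = f'{{ hostname = "{hostname}"; }}'

# Structured handoff: json.dumps escapes everything, and the consumer
# parses the result as data, never as code.
payload = json.dumps({"hostname": hostname})
round_tripped = json.loads(payload)["hostname"]
```

The templated string is now syntactically broken, while the JSON payload round-trips the awkward value without loss.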
etcd provides a strict 3-node distributed key-value store (often utilizing a lightweight third witness node like a Raspberry Pi 5 to maintain odd-number quorum). Patroni sits on top of etcd to manage highly available PostgreSQL leader election, ensuring the internal DPlaneOS database is always consistent across the nodes. Finally, Keepalived handles the floating Virtual IP, routing your web traffic to whichever node currently holds the primary Patroni lock.
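The reason the lightweight witness node matters is plain quorum arithmetic: a Raft-style cluster needs a strict majority, so an even-sized cluster tolerates no more failures than the next smaller odd one. A sketch of the math:

```python
# Quorum arithmetic behind the 3-node recommendation: consensus needs a
# strict majority of nodes, so adding a cheap third witness doubles
# nothing but changes fault tolerance from 0 to 1.

def majority(n: int) -> int:
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    return n - majority(n)

# 2 nodes: majority is 2, so losing either node halts writes.
# 3 nodes (2 servers + a Pi witness): majority is 2, one failure is survivable.
two_node = tolerated_failures(2)
three_node = tolerated_failures(3)
```

This is why a Raspberry Pi witness is enough: it only has to vote, not serve data.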
During a split-brain event, each node applies cryptographically random jitter (via crypto/rand) to delay its kill commands, preventing both nodes from fencing each other simultaneously.
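A sketch of that randomized fencing delay, using Python's `secrets` module as a stand-in for Go's crypto/rand (the function name and jitter bound are illustrative):

```python
# Sketch of randomized fencing delay: each node waits a cryptographically
# random interval before issuing its kill command, so two healthy nodes
# in a split-brain are very unlikely to shoot each other at the same instant.
import secrets

def fencing_delay_ms(max_jitter_ms: int = 500) -> int:
    # secrets draws from the OS CSPRNG, the Python analog of crypto/rand.
    return secrets.randbelow(max_jitter_ms + 1)

delay = fencing_delay_ms()
```

The node that draws the shorter delay fences first; the loser is powered off before its own kill command fires.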
DPlaneOS assumes the network is compromised. It relies on a minimal attack surface, strict allowlisting, and cryptographic auditing to protect your data at the kernel and API levels.
DPlaneOS eliminates shell injection vectors entirely. There is no bash -c used anywhere in the core engine. All backend system calls are passed as isolated arguments to discrete exec.Command processes, strictly verified against a centralized regex allowlist (whitelist.go).
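The pattern can be sketched in a few lines: arguments travel as a vector that no shell ever interprets, and each one is vetted against a regex first (the pattern below is illustrative, not the contents of whitelist.go):

```python
# Sketch of shell-free command execution: argv is passed as a list (never
# `bash -c`), and every argument is checked against a regex allowlist
# before the process is spawned.
import re
import subprocess

SAFE_ARG = re.compile(r"^[A-Za-z0-9_./:@=-]+$")

def run_checked(argv: list) -> str:
    for arg in argv:
        if not SAFE_ARG.fullmatch(arg):
            raise ValueError(f"argument rejected by allowlist: {arg!r}")
    # Each element is a discrete argv entry; nothing is interpreted by a
    # shell, so metacharacters in a rejected string could never execute anyway.
    return subprocess.run(argv, capture_output=True, text=True,
                          check=True).stdout

out = run_checked(["echo", "tank/media"])        # clean arguments pass

try:
    run_checked(["sh", "-c", "echo pwned; rm -rf /"])  # injection attempt
    rejected = False
except ValueError:
    rejected = True
```

The injection attempt fails the allowlist before any process is created; even if it had not, the payload would have been a single inert argv string, not shell input.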
To guarantee physical truth, the compliance engine features a kernel-level forensic probe. It extracts the live firewall state directly from the Linux kernel using nft -j. If a shadow port is manually opened via SSH, the daemon immediately detects the drift from the declarative intent.
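Conceptually, drift detection is a set difference between declared ports and the ports found in the kernel's live ruleset. A sketch, using a trimmed, hand-made stand-in for the JSON that `nft -j` emits:

```python
# Sketch of firewall drift detection: parse a live-ruleset JSON document
# and diff the open TCP ports against the declared set. The document
# below is a simplified hand-made stand-in, not verbatim nft output.
import json

declared_ports = {22, 443}

nft_json = json.loads("""
{"nftables": [
  {"rule": {"expr": [{"match": {"op": "==",
     "left": {"payload": {"protocol": "tcp", "field": "dport"}},
     "right": 22}}]}},
  {"rule": {"expr": [{"match": {"op": "==",
     "left": {"payload": {"protocol": "tcp", "field": "dport"}},
     "right": 443}}]}},
  {"rule": {"expr": [{"match": {"op": "==",
     "left": {"payload": {"protocol": "tcp", "field": "dport"}},
     "right": 8080}}]}}
]}""")

def live_dports(doc: dict) -> set:
    ports = set()
    for item in doc.get("nftables", []):
        for expr in item.get("rule", {}).get("expr", []):
            match = expr.get("match", {})
            payload = match.get("left", {}).get("payload", {})
            if payload.get("field") == "dport":
                ports.add(match["right"])
    return ports

drift = live_dports(nft_json) - declared_ports   # the shadow port
```

Here port 8080, opened out-of-band, shows up immediately as drift from the declared intent.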
A strict Role-Based Access Control engine sits at the API routing layer enforcing 34 granular permissions. The network layer enforces a strict CIDR-based allowlist for trusted proxies, instantly neutralizing header spoofing attacks.
Every single state-mutating API action is logged into PostgreSQL and cryptographically chained using an HMAC-SHA256 hash. These tamper-evident logs ensure you can mathematically prove the historical integrity of your infrastructure to an auditor.
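The chaining construction is simple: each entry's HMAC covers the previous entry's tag plus the current payload, so altering any historical row invalidates every tag after it. A sketch (the key and log fields are illustrative):

```python
# Sketch of a tamper-evident HMAC-SHA256 audit chain. Each tag binds the
# previous tag and the current entry, so history cannot be edited silently.
import hashlib
import hmac
import json

KEY = b"audit-demo-key"  # in practice a per-install secret

def chain(entries: list) -> list:
    tags, prev = [], b"genesis"
    for entry in entries:
        msg = prev + json.dumps(entry, sort_keys=True).encode()
        tag = hmac.new(KEY, msg, hashlib.sha256).hexdigest()
        tags.append(tag)
        prev = tag.encode()
    return tags

log = [
    {"actor": "alice", "action": "pool.create", "target": "tank"},
    {"actor": "alice", "action": "share.create", "target": "tank/media"},
]
tags = chain(log)

# An attacker rewrites history: the first entry's actor is changed.
tampered = [dict(log[0], actor="mallory"), log[1]]
retagged = chain(tampered)
```

Because each tag feeds into the next, the edit changes not only its own tag but every subsequent one, so an auditor replaying the chain detects the tampering at the first mismatch.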
Destructive operations are physically gated. DPlaneOS enforces strict /dev/disk/by-id/ pathing, refusing to operate on ephemeral /dev/sdX labels, and real-time ZFS pool membership checks block accidental wiping of disks that belong to a live pool.
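Both gates can be sketched as a pre-flight guard (the regex and device names are illustrative):

```python
# Sketch of destructive-operation gating: refuse ephemeral /dev/sdX-style
# names outright, and refuse any disk that is a member of a live pool.
import re

EPHEMERAL = re.compile(r"^/dev/(sd[a-z]+|nvme\d+n\d+)$")

def guard_wipe(dev: str, pool_members: set) -> None:
    if EPHEMERAL.match(dev):
        raise ValueError(
            f"refusing ephemeral device name {dev}; use /dev/disk/by-id/")
    if dev in pool_members:
        raise ValueError(f"refusing wipe: {dev} is a live pool member")

def blocked_by(fn, *args) -> bool:
    try:
        fn(*args)
        return False
    except ValueError:
        return True

members = {"/dev/disk/by-id/ata-WDC_WD80-SER1"}
guard_wipe("/dev/disk/by-id/ata-WDC_WD80-SPARE", members)  # allowed

ephemeral_blocked = blocked_by(guard_wipe, "/dev/sda", set())
member_blocked = blocked_by(
    guard_wipe, "/dev/disk/by-id/ata-WDC_WD80-SER1", members)
```

The by-id requirement matters because /dev/sdX assignments can change across reboots, so a wipe aimed at yesterday's /dev/sdb may land on today's data disk.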
Authentication natively supports Active Directory and OpenLDAP binds alongside local bcrypt-hashed accounts. The platform enforces TOTP 2FA (RFC 6238), 32-byte randomized session tokens, and SHA-256 hashed API tokens with strictly scoped operational boundaries.
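For reference, RFC 6238 TOTP is small enough to sketch in full: HMAC over the 30-second time counter, dynamically truncated to six digits. The snippet below verifies against the RFC's published test vector (it uses HMAC-SHA1, the variant used by common authenticator apps; this is a sketch, not the DPlaneOS implementation):

```python
# Sketch of RFC 6238 TOTP: HMAC-SHA1 over the big-endian time-step
# counter, dynamic truncation (RFC 4226), modulo 10^digits.
import base64
import hashlib
import hmac
import struct

def totp(secret_b32: str, at: int, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", at // step)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0]
            & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" (base32 below)
# at Unix time 59 yields 94287082; the 6-digit code is the last 6 digits.
code = totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59)
```

A verifier typically also accepts the adjacent time step on either side to tolerate clock skew.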
Built for engineers who want production-grade infrastructure without the massive hardware overhead or complexity tax.
Pools, datasets, RAIDZ, mirrors, encryption, quotas, SMART, and scrub scheduling. Asynchronous replication over 10G DAC, protected by STONITH.
Container management, Compose stacks, one-click template library, ZFS-clone ephemeral sandboxes, and atomic updates with automatic rollback.
SMB with Time Machine support (vfs_fruit), NFS exports, and iSCSI block targets are all configured via UI and reconciled against ZFS datasets.
Interface configuration, bonding, VLANs, routing, and DNS. Written safely to the JSON-to-Nix bridge on the appliance.
Local users, LDAP/AD with JIT provisioning and group-to-role mapping. TOTP 2FA, API tokens, and RBAC with 34 granular permissions.
Full PTY terminal in the browser via xterm.js. Authenticated by your web session. Power users never need to leave the UI for escape hatch operations.
DPlaneOS is a complete Hyper-Converged Appliance. Boot the ISO to own the entire stack, or run the daemon on your existing Debian/Ubuntu server.
| Feature | DPlaneOS | TrueNAS SCALE | OpenMediaVault |
|---|---|---|---|
| Install footprint | 1.9 GB ISO | 2.5 GB | 1.2 GB |
| Core runtime | Go (Compiled) | Python, Middleware | PHP, nginx, Salt |
| Declarative GitOps | ✓ | ✗ | ✗ |
| Active HA Cluster | ✓ | Enterprise only | ✗ |
| NixOS native | ✓ | ✗ | ✗ |
| Container runtime | Docker Compose | Kubernetes | Docker (plugin) |
| HMAC audit chain | ✓ | ✗ | ✗ |
| License | AGPLv3 | BSL / mixed | AGPLv3 |
DPlaneOS ships as a fully immutable, bootable NixOS appliance. Flash it to a USB drive, boot your server, and follow the TUI installer.
Grab the 1.9 GB bootable appliance image for x86_64 architectures.
Use Rufus or BalenaEtcher to flash the image to a USB drive, then boot the appliance and follow the interactive TUI.
Navigate to your server's IP, log in, and start declaring your infrastructure.