```html DPlaneOS: Infrastructure Control Plane
Infrastructure Control Plane
v7.5.2: Ironclad HA

Your infrastructure,
declared in Git.

An immutable appliance. ZFS-native. GitOps-driven. Deploy your entire NAS stack from a state.yaml and version-control everything.

1.9GB Immutable ISO · Zero-Touch HA · 35+ UI Pages · 250+ API Routes

We got tired of babysitting fragile servers.

Traditional NAS operating systems rely on manual configuration (colloquially known as "ClickOps") and brittle package managers. When you click a button in their UI, a backend script modifies an underlying system file and updates a local database. This state is fragile. If the boot drive dies, your configuration dies with it. If a package manager fails mid-update, the system is left in an unbootable, indeterminate state. DPlaneOS was built to fix this by bringing hyperscale engineering (immutability, GitOps, and mathematical consensus) to edge environments and homelabs, packaged in a UI that anyone can use.


The Old Way

  • Configuration Drift: Manual UI clicks mean the state of your server exists only in the server's local database. Disaster recovery requires completely rebuilding the OS by hand.
  • Brittle Updates: Traditional package managers (apt, dpkg) leave remnant files. Major version upgrades frequently fail or cause kernel panics.
  • Split-Brain HA: Basic high availability setups without an external Quorum or out-of-band STONITH fencing inevitably lead to total ZFS pool destruction when the network partitions.

The DPlaneOS Way

  • 100% Declarative: Your UI interactions write to a state.yaml file. The truth lives in the repository, not the runtime. If the OS dies, your cluster redeploys itself from Git.
  • A/B Immutable Core: Built on NixOS. Updates are atomic and install to an inactive partition. If the new configuration fails to boot, the system rolls back instantly.
  • Ironclad HA: Distributed etcd consensus, randomized STONITH jitter, and physical PDU execution guarantee safe, mathematically sound automated failovers.

A Kubernetes-style reconcile loop for your NAS.

There is no "Save" button that immediately runs bash scripts in the background. Instead, the UI modifies a structured state.yaml. The daemon constantly diffs this desired state against the physical reality of the Linux kernel, ZFS, and Docker, forcing the system to match your declaration.
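The diff step at the heart of that loop can be sketched in a few lines of Go. This is a minimal illustration, not the actual DPlaneOS API: the `State` type and `diff` function here are hypothetical, reduced to a single resource kind (ZFS datasets) to show how comparing desired against actual yields only the operations needed to converge.

```go
package main

import "fmt"

// State is an illustrative slice of the declared configuration:
// the set of ZFS datasets that should exist.
type State struct {
	Datasets map[string]bool
}

// diff computes which datasets must be created or destroyed so that
// actual converges on desired. Applying the same desired state twice
// yields an empty plan: the loop is idempotent.
func diff(desired, actual State) (create, destroy []string) {
	for name := range desired.Datasets {
		if !actual.Datasets[name] {
			create = append(create, name)
		}
	}
	for name := range actual.Datasets {
		if !desired.Datasets[name] {
			destroy = append(destroy, name)
		}
	}
	return create, destroy
}

func main() {
	desired := State{Datasets: map[string]bool{"tank/media": true, "tank/backups": true}}
	actual := State{Datasets: map[string]bool{"tank/media": true, "tank/old": true}}
	create, destroy := diff(desired, actual)
	fmt.Println(create, destroy) // one dataset to create, one to destroy
}
```

Because the plan is computed from the difference, a second apply against a converged system produces an empty plan and performs zero operations.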


The Declarative Standard

DPlaneOS enforces a strict declarative model. Every ZFS pool, SMB share, user permission, and Docker Compose stack is written to a single YAML file. The UI is just a friendly steering wheel for Git.
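To make that concrete, here is an illustrative state.yaml. The field names and disk IDs below are hypothetical, chosen to show the shape of the idea rather than the actual DPlaneOS schema: one version-controlled file describing a pool, a share, and a Compose stack.

```yaml
# state.yaml — single source of truth, committed to Git
# (illustrative schema; actual field names may differ)
pools:
  - name: tank
    topology: raidz2
    disks:
      - /dev/disk/by-id/ata-EXAMPLE-SERIAL-A   # hypothetical disk IDs
      - /dev/disk/by-id/ata-EXAMPLE-SERIAL-B
shares:
  smb:
    - name: media
      dataset: tank/media
      time_machine: false
stacks:
  - name: jellyfin
    compose: stacks/jellyfin/docker-compose.yml
```

Edit this file by hand, through Git, or through the UI; the reconciler treats all three identically.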


Mathematically Idempotent

The reconciler guarantees idempotency: applying the exact same state twice produces zero operations. Formal checks of this property are baked directly into the CI/CD pipeline.


GUID-Safe Execution

Pools and disks are tracked and imported via immutable GUIDs, not fragile device names like /dev/sda. If discovery finds an ambiguous state (for example, two pools claiming the same name), the engine automatically blocks reconciliation rather than guessing.
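The ambiguity check itself is simple to sketch. The `Pool` type and `ambiguousNames` function below are illustrative, not DPlaneOS internals: they show why reconciliation must halt when one human-assigned name maps to more than one GUID.

```go
package main

import "fmt"

// Pool identity as ZFS reports it: an immutable numeric GUID plus a
// human-assigned name. Names can collide (e.g. a replaced disk still
// carrying an old "tank" label); GUIDs cannot.
type Pool struct {
	GUID uint64
	Name string
}

// ambiguousNames returns every pool name claimed by more than one GUID.
// A non-empty result is a degenerate state: reconciliation must halt
// rather than guess which "tank" the operator meant.
func ambiguousNames(pools []Pool) []string {
	byName := map[string][]uint64{}
	for _, p := range pools {
		byName[p.Name] = append(byName[p.Name], p.GUID)
	}
	var dup []string
	for name, guids := range byName {
		if len(guids) > 1 {
			dup = append(dup, name)
		}
	}
	return dup
}

func main() {
	pools := []Pool{{GUID: 111, Name: "tank"}, {GUID: 222, Name: "tank"}, {GUID: 333, Name: "backup"}}
	fmt.Println(ambiguousNames(pools)) // [tank]
}
```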

Separation of concerns.

DPlaneOS is built on a strict 3-tier architecture. It explicitly decouples the mutable control plane from the immutable operating system, ensuring OS rollbacks never destroy your configuration data.


Tier 1: The Control Plane (Mutable)

The top layer is fully dynamic. It houses the Go Daemon, the React UI, the GitOps state.yaml, and the Patroni/etcd consensus ring. When you interact with DPlaneOS via the API or Web UI, you are mutating the state within this layer. It is the brain that instructs the layers below.

Go Binary · React SPA · Git Repository · etcd + Patroni

Tier 2: The Immutable Base (Stateless)

The middle layer is the operating system itself. DPlaneOS utilizes NixOS to build a mathematically reproducible environment. The Go Daemon passes a pure JSON file down to this layer. NixOS evaluates the JSON, builds a completely new, read-only system generation, and swaps to it. If the build fails, or the network drops, the system watchdog instantly reverts to the previous generation.

NixOS · JSON-to-Nix Bridge · systemd Watchdog

Tier 3: The Persistence Layer (Stateful)

The bottom layer is where your actual data lives. This includes your ZFS storage pools, Docker container volumes, and the PostgreSQL database that holds the cryptographic audit logs. Because this layer is strictly decoupled from Tier 2, an OS rollback or a failed NixOS build will never touch or corrupt your persistent storage.

OpenZFS · PostgreSQL (Data) · Docker Volumes

How the engine actually behaves.

Understand the failover logic and consensus mechanisms that protect your data.

The JSON-to-Nix Bridge (Zero Injection Risk)
Generating Nix language syntax dynamically from a Go daemon is inherently dangerous: a misplaced quote in a user-provided SMB share name could yield an un-parseable .nix file, bricking the rebuild process.

DPlaneOS solves this by cleanly separating intent from evaluation. To apply network and system changes, the daemon writes a pure, highly-structured JSON state file. NixOS then reads this JSON natively at evaluation time using builtins.fromJSON. There is zero dynamic Nix syntax generated at runtime, completely eliminating the risk of syntax injection crashes.
etcd Consensus & Keepalived
In a clustered Active-Standby environment, a single local SQLite database cannot safely hold cluster state. DPlaneOS instead relies on battle-tested industry standards for distributed consensus.

etcd provides the distributed key-value store, deployed as a three-node cluster (often with a lightweight witness such as a Raspberry Pi 5 as the third member, preserving an odd-numbered quorum). Patroni sits on top of etcd to manage highly available PostgreSQL leader election, ensuring the internal DPlaneOS database stays consistent across nodes. Finally, Keepalived manages the floating Virtual IP, routing your web traffic to whichever node currently holds the primary Patroni lock.
Ironclad HA (STONITH Jitter)
Imagine a network partition: Node A and Node B lose connection to each other. Without STONITH (Shoot The Other Node In The Head), both nodes might assume the other is dead and attempt to import the ZFS pool. This is called "Split-Brain", and it instantly destroys your data.

DPlaneOS prevents this mathematically. First, it queries the HTTP Quorum Witnesses to determine which node is actually isolated. The winning node then issues an out-of-band IPMI or Network PDU command to physically cut the power to the isolated node. To prevent mutual destruction (where both nodes try to kill each other at the exact same millisecond), DPlaneOS utilizes cryptographic random jitter (crypto/rand) to delay the kill commands.
Zombie Node Reconciliation
When a fenced node finally reboots, it has stale data on its drives. If it blindly rejoins the cluster, it could attempt to serve outdated files or corrupt the database.

DPlaneOS handles this via TXG Boot Checks. On startup, before initializing the heartbeat loop, the daemon compares its local ZFS pool Transaction Group (TXG) against the active leader's. If the local pool is stale, the node forcibly enters a read-only Subordinate Mode, initiates a full ZFS catch-up over SSH, and only lifts the read-only lock once it is fully synced with the leader.
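The fail-safe decision rule can be stated in a few lines. This sketch is illustrative (the real daemon's policy may differ, e.g. in how it treats a node that is somehow ahead of the leader), but the safety principle is: only an exact TXG match is allowed to serve writes.

```go
package main

import "fmt"

// Mode is the boot decision for a rejoining node.
type Mode int

const (
	Active      Mode = iota // TXG matches the leader: safe to serve writes
	Subordinate             // stale or diverged: mount read-only and resync
)

// bootMode applies the TXG boot check. Behind means stale data; ahead
// means diverged history. Both are unsafe, so anything but an exact
// match comes up read-only.
func bootMode(localTXG, leaderTXG uint64) Mode {
	if localTXG != leaderTXG {
		return Subordinate
	}
	return Active
}

func main() {
	// A freshly rebooted fenced node is behind the leader: read-only.
	fmt.Println(bootMode(1041, 1077) == Subordinate)
}
```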

Hostile by design.

DPlaneOS assumes the network is compromised. It relies on a minimal attack surface, strict allowlisting, and cryptographic auditing to protect your data at the kernel and API levels.


Zero-Shell Execution

DPlaneOS eliminates shell injection vectors entirely. There is no bash -c used anywhere in the core engine. All backend system calls are passed as isolated arguments to discrete exec.Command processes, strictly verified against a centralized regex allowlist (whitelist.go).
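The pattern is easy to demonstrate. The allowlist entries and function names below are illustrative, not the contents of the real whitelist.go: the point is that arguments go to exec.Command as discrete argv entries, so shell metacharacters are inert, and anything outside the allowlist is refused before execution.

```go
package main

import (
	"fmt"
	"os/exec"
	"regexp"
)

// allowlist mirrors the whitelist.go idea: each permitted binary maps
// to a regex over its permitted subcommands. (Entries are illustrative.)
var allowlist = map[string]*regexp.Regexp{
	"zpool": regexp.MustCompile(`^(status|list|scrub)$`),
	"zfs":   regexp.MustCompile(`^(list|get)$`),
}

// permitted is the pure check, kept separate so it can be tested alone.
func permitted(bin, sub string) bool {
	re, ok := allowlist[bin]
	return ok && re.MatchString(sub)
}

// run executes an allowlisted command. Arguments are passed as discrete
// argv entries to exec.Command; no shell ever parses them, so
// metacharacters in args are plain data.
func run(bin, sub string, args ...string) ([]byte, error) {
	if !permitted(bin, sub) {
		return nil, fmt.Errorf("blocked by allowlist: %s %s", bin, sub)
	}
	return exec.Command(bin, append([]string{sub}, args...)...).CombinedOutput()
}

func main() {
	if _, err := run("bash", "-c", "rm -rf /"); err != nil {
		fmt.Println(err) // blocked by allowlist: bash -c
	}
}
```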


Forensic Firewall Probe

To guarantee physical truth, the compliance engine features a kernel-level forensic probe. It extracts the live firewall state directly from the Linux kernel using nft -j. If a shadow port is manually opened via SSH, the daemon immediately detects the drift from the declarative intent.
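Once the kernel state is in hand, drift detection reduces to a set difference. The sketch below is illustrative: in practice the observed set would be parsed out of the JSON ruleset that `nft -j list ruleset` emits, which is elided here.

```go
package main

import "fmt"

// shadowPorts returns ports open in the kernel that are absent from the
// declared state — i.e. drift someone opened by hand over SSH. Both
// inputs are simplified; a real probe parses the nft JSON ruleset.
func shadowPorts(declared, observed []int) []int {
	want := map[int]bool{}
	for _, p := range declared {
		want[p] = true
	}
	var shadow []int
	for _, p := range observed {
		if !want[p] {
			shadow = append(shadow, p)
		}
	}
	return shadow
}

func main() {
	declared := []int{22, 443}
	observed := []int{22, 443, 8080} // 8080 was opened manually
	fmt.Println(shadowPorts(declared, observed)) // [8080]
}
```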


Fail-Closed RBAC Middleware

A strict Role-Based Access Control engine sits at the API routing layer enforcing 34 granular permissions. The network layer enforces a strict CIDR-based allowlist for trusted proxies, instantly neutralizing header spoofing attacks.


Cryptographic HMAC Audit Chain

Every single state-mutating API action is logged into PostgreSQL and cryptographically chained using an HMAC-SHA256 hash. These tamper-evident logs ensure you can mathematically prove the historical integrity of your infrastructure to an auditor.
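A self-contained sketch of the chaining scheme (key handling and row schema are simplified; function names are illustrative): each entry's MAC covers both the previous MAC and the new payload, so editing any historical row invalidates every MAC after it.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// chain links an audit entry to its predecessor: the MAC covers the
// previous MAC plus the new payload.
func chain(key []byte, prevMAC, payload string) string {
	m := hmac.New(sha256.New, key)
	m.Write([]byte(prevMAC))
	m.Write([]byte(payload))
	return hex.EncodeToString(m.Sum(nil))
}

// verify recomputes the chain over the stored payloads and compares it
// to the stored MACs in constant time.
func verify(key []byte, payloads, macs []string) bool {
	prev := ""
	for i, p := range payloads {
		want := chain(key, prev, p)
		if !hmac.Equal([]byte(want), []byte(macs[i])) {
			return false
		}
		prev = want
	}
	return true
}

func main() {
	key := []byte("audit-key")
	payloads := []string{`{"action":"pool.create"}`, `{"action":"share.delete"}`}
	var macs []string
	prev := ""
	for _, p := range payloads {
		prev = chain(key, prev, p)
		macs = append(macs, prev)
	}
	fmt.Println(verify(key, payloads, macs)) // true

	payloads[0] = `{"action":"pool.destroy"}` // tamper with history
	fmt.Println(verify(key, payloads, macs))  // false
}
```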


Storage-Level Safety Locks

Destructive operations are physically gated. DPlaneOS enforces strict /dev/disk/by-id/ pathing, refusing to operate on ephemeral /dev/sdX labels. Real-time ZFS pool membership checks physically block accidental disk wiping.


Identity & Session Engine

Authentication natively supports Active Directory and OpenLDAP binds alongside local bcrypt-hashed accounts. The platform enforces TOTP 2FA (RFC 6238), 32-byte randomized session tokens, and SHA-256 hashed API tokens with strictly scoped operational boundaries.

Everything you need. Nothing you don't.

Built for engineers who want production-grade infrastructure without the massive hardware overhead or complexity tax.


ZFS: First Class

Pools, datasets, RAIDZ, mirrors, encryption, quotas, SMART, and scrub scheduling. Asynchronous replication over 10G DAC, protected by STONITH.


Docker & Compose

Container management, Compose stacks, one-click template library, ZFS-clone ephemeral sandboxes, and atomic updates with automatic rollback.


Shares & Exports

SMB with Time Machine support (vfs_fruit), NFS exports, and iSCSI block targets are all configured via UI and reconciled against ZFS datasets.


Declarative Network

Interface configuration, bonding, VLANs, routing, and DNS. Written safely to the JSON-to-Nix bridge on the appliance.


Enterprise Identity

Local users, LDAP/AD with JIT provisioning and group-to-role mapping. TOTP 2FA, API tokens, and RBAC with 34 granular permissions.


Embedded Terminal

Full PTY terminal in the browser via xterm.js. Authenticated by your web session. Power users never need to leave the UI for escape hatch operations.

Where we stand.

DPlaneOS is a complete Hyper-Converged Appliance. Boot the ISO to own the entire stack, or run the daemon on your existing Debian/Ubuntu server.

Feature            | DPlaneOS       | TrueNAS SCALE     | OpenMediaVault
Install footprint  | 1.9 GB ISO     | 2.5 GB            | 1.2 GB
Core runtime       | Go (compiled)  | Python middleware | PHP, nginx, Salt
Declarative GitOps | Yes            | No                | No
Active HA Cluster  | Yes            | Enterprise Only   | No
NixOS native       | Yes            | No                | No
Container runtime  | Docker Compose | Kubernetes        | Docker (plugin)
HMAC audit chain   | Yes            | No                | No
License            | AGPLv3         | BSL / mixed       | AGPLv3
Enterprise

DPlane Compliance Engine

Cryptographic audit chain verification, SOC2 and ISO 27001 evidence PDF generation, and Ed25519 offline licensing. Hand your auditor a mathematically verified proof of infrastructure state.

Request License · Learn More
SOC2 Evidence: Automated PDF reports mapped to physical state
ISO 27001: Tamper-evident chain-of-custody audit trail

From bare metal to cluster in 10 minutes.

DPlaneOS ships as a fully immutable, bootable NixOS appliance. Flash it to a USB drive, boot your server, and follow the TUI installer.

  1. Download the ISO

     Grab the 1.9GB bootable appliance image for x86_64 architectures.

  2. Flash & Boot

     Flash the image with Rufus or BalenaEtcher, boot the appliance, and follow the interactive TUI.

  3. Configure via Web

     Navigate to your server's IP, log in, and start declaring your infrastructure.

Download ISO v7.5.2

# 1. Flash to USB (Linux / macOS example)
sudo dd if=dplaneos-v7.5.2-x86_64.iso of=/dev/sdX bs=4M status=progress

# 2. Boot and run the TUI Installer
Welcome to DPlaneOS Appliance Installer
> Select Install Drive
> Configure Network
> Set Admin Password
✓ Formatting A/B partitions...
✓ Copying NixOS closure...

# 3. Access the Control Plane
curl http://<your-server-ip>/api/system/health
{"status":"ok","version":"7.5.2"}
```