Making Pi-hole Highly Available (and Fixing a Hidden Frigate Disk Trap)

A single Pi-hole reboot used to take my whole network’s DNS/DHCP offline. Here’s how I added a secondary DNS Pi-hole on a separate host, kept it in sync, and even caught a sneaky Frigate storage issue along the way.

Pi-hole High Availability hero image: secondary DNS with synced configuration.

For the longest time, my home network had a single point of failure: Pi-hole. It handled DNS and DHCP, which is awesome… until it’s time to reboot it. Every maintenance window turned into “why is the internet broken?” for everyone in the house.

This had been on my backlog as GitHub Issue #1506 since March 25, 2025. It finally got crossed off on January 27, 2026, and honestly, having the Codex assistant help drive it from idea to production made the difference.

The problem: one Pi-hole = one outage

If your Pi-hole is the only DNS server your clients know about, a reboot means name resolution fails. And if you also run DHCP on it, it can feel like “the whole network is down” even if your ISP connection is fine.

The fix: a second Pi-hole (DNS-only) on separate hardware

I kept Pi-hole as the DHCP server (because it’s convenient for local name + reservation management), but added a secondary Pi-hole that is DNS-only on a different host. That physical separation is important: it means maintenance or failure on the primary box doesn’t automatically take out your fallback.

  • Primary: DHCP + DNS (the “source of truth”)
  • Secondary: DNS-only (backup resolver)
  • Client behavior: DHCP advertises both DNS servers (primary first, secondary second). When the primary reboots, clients automatically fall back to the secondary.

This is “active/passive” in the practical home-network sense: the primary stays the preferred server, but the secondary is always ready and online.
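
If you use Pi-hole's built-in (dnsmasq-based) DHCP server, advertising both resolvers comes down to DHCP option 6. Here's a minimal sketch with placeholder addresses and a hypothetical drop-in filename; depending on your Pi-hole version, custom dnsmasq lines may live in /etc/dnsmasq.d/ or in the web UI's expert settings:

# /etc/dnsmasq.d/99-secondary-dns.conf  (hypothetical filename)
# Advertise both DNS servers to DHCP clients: primary first, secondary as fallback.
# 192.168.1.2 = primary Pi-hole, 192.168.1.3 = secondary (placeholders).
dhcp-option=option:dns-server,192.168.1.2,192.168.1.3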

Keeping blocklists and config in sync

The secondary Pi-hole is only useful if it behaves like the primary. To solve that, I added an automated sync (using nebula-sync) to keep important Pi-hole settings and lists aligned. The goal is simple: if I add a client group, local DNS entry, or tweak a blocklist, the backup should match without a manual checklist.
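
For reference, here's a minimal docker-compose sketch of how a nebula-sync container can be wired up. The IPs and passwords are placeholders, and the image tag and environment variable names should be double-checked against the nebula-sync README for your version:

services:
  nebula-sync:
    image: ghcr.io/lovelaze/nebula-sync:latest
    restart: unless-stopped
    environment:
      # Primary Pi-hole (source of truth) and its app password
      PRIMARY: "http://192.168.1.2|primary-app-password"
      # One or more DNS-only replicas, comma-separated
      REPLICAS: "http://192.168.1.3|secondary-app-password"
      # Replicate the full configuration (lists, groups, local DNS, etc.)
      FULL_SYNC: "true"
      # Re-sync hourly
      CRON: "0 * * * *"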

Bonus win: we found a Frigate storage problem before it became a bigger outage

While validating services, we noticed the camera/NVR host was reporting critically low disk space. The interesting part: the NAS NFS share had plenty of room, but the local root disk was full.

The root cause was a classic mount-point trap: if the NFS mount isn’t present at boot (or temporarily drops), applications can start writing into the mount directory on the local disk. Later, when NFS mounts again, those local files become “invisible” (covered by the mount), but they still consume space.
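
An easy way to confirm this is to bind-mount the root filesystem to a second location, which lets you see what's sitting on the local disk underneath the NFS mount without unmounting anything. A rough sketch, with /media/frigate standing in for your actual recordings path:

# Bind-mount / elsewhere so files hidden *under* the NFS mount become visible
sudo mkdir -p /mnt/rootfs
sudo mount --bind / /mnt/rootfs

# Same path, but now showing what sits on the local disk beneath the mount point
sudo du -sh /mnt/rootfs/media/frigate

# Clean up when done
sudo umount /mnt/rootfs
sudo rmdir /mnt/rootfs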

  • We confirmed the hidden recordings were old (spanning May 2024 through April 2025).
  • We deleted the stale hidden data, immediately freeing hundreds of GB on the local disk.
  • We hardened the mount using a systemd automount approach so the storage fails closed instead of silently writing to local disk when the NAS share isn’t available.
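
For the record, the hardening can be as simple as moving the share to a systemd automount in /etc/fstab. A sketch with placeholder server, export, and mount point (the x-systemd options themselves are standard):

# /etc/fstab (placeholder server, export, and mount point)
# noauto + x-systemd.automount: the share is mounted on first access instead of at boot,
# and if the NAS is unreachable the access fails instead of writing to the local disk.
nas.local:/volume1/frigate  /media/frigate  nfs  _netdev,noauto,x-systemd.automount,x-systemd.idle-timeout=60,x-systemd.mount-timeout=30  0  0

After editing, run systemctl daemon-reload and start the generated automount unit (named after the mount path, e.g. media-frigate.automount).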

Result

Now, rebooting Pi-hole no longer feels like rebooting the network. DNS stays up thanks to the secondary resolver, configuration stays consistent via sync, and the camera host won’t quietly fill its local disk if a mount goes sideways.

One GitHub issue that sat for months is now closed—and the network is that much more robust.

Home Assistant fills the HA gap

This is the last gap Pi-hole doesn't close for you: keeping admin toggles, like the blocking on/off switch, consistent across an active/passive pair. I tracked it in GitHub Issue #1558 and closed it once this automation was in production.

GitHub Issue: #1558 – Home assistant control over Pi-Hole

Home Assistant package: config/packages/pihole_ha.yaml

automation:
  - alias: "Pi-hole HA Sync"
    mode: single
    trigger:
      # Only react to real on/off changes (not unavailable/unknown),
      # so the templated service name below is always valid.
      - platform: state
        entity_id: switch.pi_hole
        to: ["on", "off"]
      - platform: state
        entity_id: switch.pi_hole_2
        to: ["on", "off"]
    action:
      # Mirror the change to whichever Pi-hole did not trigger the automation.
      # Turning an already-on/off switch causes no state change, so this can't loop.
      - service: "switch.turn_{{ trigger.to_state.state }}"
        target:
          entity_id: >
            {{ 'switch.pi_hole_2' if trigger.entity_id == 'switch.pi_hole' else 'switch.pi_hole' }}