Adding a disk to a single-disk ZFS boot pool

In this post I would like to document my successful migration from a single-disk ZFS boot pool to a two-disk mirror. When migrating my old TrueNAS system to a vanilla FreeBSD based NAS with jails I had installed the root pool on a single SSD. Last week I hit a strange pool error which I had to “recover” from with a power-off / power-on cycle. Since I migrated from spinning disks to NVMe I now have plenty of space in my enclosure, so I thought it might be a good idea to go from a single disk to a mirror.

The flow

  • put in an additional device
  • copy the partition table
  • attach the new partition to the pool
  • copy the bootloader
  • test
  • replace the old boot disk in the pool
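
In a nutshell, and assuming ada0 is the existing boot disk and ada1 the freshly added one, the whole thing boils down to roughly the commands below; every step is shown in detail in the following sections.

#gpart backup ada0 | gpart restore -F ada1
#zpool attach croot ada0p4 ada1p4
#newfs_msdos /dev/ada1p1
#mount -t msdosfs /dev/ada1p1 /mnt/efi1
#mkdir -p /mnt/efi1/EFI/BOOT
#cp /boot/loader.efi /mnt/efi1/EFI/BOOT/BOOTX64.EFI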

Get current status

#zpool status croot
  pool: croot
 state: ONLINE
  scan: scrub repaired 0B in 00:00:38 with 0 errors on Tue Dec  2 02:00:38 2025
config:

        NAME        STATE     READ WRITE CKSUM
        croot       ONLINE       0     0     0
          ada0p4    ONLINE       0     0     0

#gpart show ada0
=>       40  123091840  ada0  GPT  (59G)
         40     532480     1  efi  (260M)
     532520       1024     2  freebsd-boot  (512K)
     533544        984        - free -  (492K)
     534528    4194304     3  freebsd-swap  (2.0G)
    4728832  118362112     4  freebsd-zfs  (56G)
  123090944        936        - free -  (468K)

#gpart show ada1
=>       40  234441568  ada1  GPT  (112G)
         40         88        - free -  (44K)
        128  234441480     1  freebsd-zfs  (112G)
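
Note that ada1 (the new SSD) still carries a single freebsd-zfs partition from earlier use, which gpart restore -F will overwrite. Before doing that it does not hurt to confirm that the device name really points at the new disk; diskinfo prints the model and serial number:

#diskinfo -v /dev/ada1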

Copy partition table

#/sbin/gpart backup ada0 | /sbin/gpart restore -F ada1

#gpart show ada1
=>       34  234441581  ada1  GPT  (112G)
         34          6        - free -  (3.0K)
         40     532480     1  efi  (260M)
     532520       1024     2  freebsd-boot  (512K)
     533544        984        - free -  (492K)
     534528    4194304     3  freebsd-swap  (2.0G)
    4728832  118362112     4  freebsd-zfs  (56G)
  123090944  111350671        - free -  (53G)

Verify the googled command line (letting zpool print its usage), attach the partition to the pool, and monitor the resilver status

#zpool attach
missing pool name argument
usage:
        attach [-fsw] [-o property=value] <pool> <device> <new-device>

#zpool attach croot ada0p4 ada1p4

#zpool status croot
  pool: croot
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Dec 20 16:07:03 2025
        10.9G / 10.9G scanned, 21.5M / 10.9G issued at 2.15M/s
        7.84M resilvered, 0.19% done, no estimated completion time
config:

        NAME        STATE     READ WRITE CKSUM
        croot       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0p4  ONLINE       0     0     0
            ada1p4  ONLINE       0     0     0  (resilvering)

# zpool status croot
  pool: croot
 state: ONLINE
  scan: resilvered 11.5G in 00:12:40 with 0 errors on Sat Dec 20 16:19:43 2025
config:

        NAME        STATE     READ WRITE CKSUM
        croot       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0p4  ONLINE       0     0     0
            ada1p4  ONLINE       0     0     0
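
If you do not feel like polling zpool status until the resilver is done, zpool attach also takes -w (it is in the usage output above) and simply blocks until the new device has finished resilvering:

#zpool attach -w croot ada0p4 ada1p4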

Copy bootloader

#mkdir /mnt/efi1
#newfs_msdos /dev/ada1p1
#mount -t msdosfs /dev/ada1p1 /mnt/efi1
#mkdir -p /mnt/efi1/EFI/BOOT
#cp /boot/loader.efi /mnt/efi1/EFI/BOOT/BOOTX64.EFI

Unfortunately the only way to test this is to physically remove the first disk and see whether the machine still boots. Before doing that, make sure there is no efi/ESP mount entry in your /etc/fstab pointing at the removed disk, otherwise the boot will stop on the missing mount.
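
A quick way to check for such entries:

#grep -iE 'efi|msdos' /etc/fstab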

After this test I replaced the original ada0, since a strange error on that disk was the root cause of this whole exercise.

Check status, copy the partition table to the replacement drive, run zpool replace, monitor the resilver status and copy the bootloader

#zpool status
  pool: croot
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
	invalid.  Sufficient replicas exist for the pool to continue
	functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: resilvered 3.02M in 00:00:00 with 0 errors on Mon Dec 22 10:44:41 2025
config:

	NAME        STATE     READ WRITE CKSUM
	croot       DEGRADED     0     0     0
	  mirror-0  DEGRADED     0     0     0
	    ada0p4  FAULTED      0     0     0  corrupted data
	    ada1p4  ONLINE       0     0     0

#/sbin/gpart backup ada1 | /sbin/gpart restore -F ada0
#zpool replace croot ada0p4
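
zpool replace with only one device argument reuses the same slot, i.e. the freshly partitioned ada0p4 takes over from the faulted old one; like attach it also accepts -w if you prefer to wait for the resilver in the foreground:

#zpool replace -w croot ada0p4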

#zpool status
  pool: croot
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
	continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Mon Dec 22 11:01:23 2025
	11.0G / 11.0G scanned, 5.12G / 11.0G issued at 138M/s
	5.40G resilvered, 46.52% done, 00:00:43 to go
config:

	NAME              STATE     READ WRITE CKSUM
	croot             DEGRADED     0     0     0
	  mirror-0        DEGRADED     0     0     0
	    replacing-0   DEGRADED     0     0     0
	      ada0p4/old  FAULTED      0     0     0  corrupted data
	      ada0p4      ONLINE       0     0     0  (resilvering)
	    ada1p4        ONLINE       0     0     0

#zpool status croot
  pool: croot
 state: ONLINE
  scan: resilvered 11.6G in 00:01:34 with 0 errors on Mon Dec 22 11:02:57 2025
config:

	NAME        STATE     READ WRITE CKSUM
	croot       ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    ada0p4  ONLINE       0     0     0
	    ada1p4  ONLINE       0     0     0

#mkdir /mnt/efi0
#newfs_msdos /dev/ada0p1
#mount -t msdosfs /dev/ada0p1 /mnt/efi0
#mkdir -p /mnt/efi0/EFI/BOOT
#cp /boot/loader.efi /mnt/efi0/EFI/BOOT/BOOTX64.EFI
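
Both disks now carry a bootable ESP and a healthy mirror member. As an optional final check (my own habit, not strictly required) unmount the ESP and run a scrub to make sure both sides of the mirror read back cleanly:

#umount /mnt/efi0
#zpool scrub croot
#zpool status croot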