Migrating a ZFS Pool to smaller disks
In this document I describe my successful migration (completed two weeks ago) of a ZFS mirror to two smaller disks. The server started out on two 3 TB spinning disks, but I don't need the capacity and wanted to move to two 2 TB NVMe drives.
I ran through this exercise several times on a test system before the final move on my production server.
Do NOT just copy and paste.
Starting point
- Old pool: zroot (boot pool)
- New pool: zroot_n
- New disks: /dev/nda0, /dev/nda1 (the NVMe drives, as named by the kernel)
Step 1: What are the disk names?
geom disk list
Geom name: nda0
Providers:
1. Name: nda0
Mediasize: 2000398934016 (1.8T)
Sectorsize: 512
Mode: r0w0e0
descr: Samsung SSD 990 PRO 2TB
lunid: 0025384551a20176
ident: S7HENU1Y526671E
rotationrate: 0
fwsectors: 0
fwheads: 0
Step 2: Which boot mode is in use (UEFI or BIOS)?
sysctl machdep.bootmethod
machdep.bootmethod: UEFI
Step 3: Prepare/format the new hard drives
gpart destroy -F /dev/nda0
gpart destroy -F /dev/nda1
Step 4: Partitions on the new disks (UEFI)
# Create GPT partitions on both disks
gpart create -s gpt /dev/nda0
gpart create -s gpt /dev/nda1
# Add EFI system partition
gpart add -a 1M -s 200M -t efi -l efi0 /dev/nda0
gpart add -a 1M -s 200M -t efi -l efi1 /dev/nda1
# Format EFI partitions
newfs_msdos /dev/gpt/efi0
newfs_msdos /dev/gpt/efi1
# Create swap
gpart add -a 1M -s 8G -t freebsd-swap -l swap0 /dev/nda0
gpart add -a 1M -s 8G -t freebsd-swap -l swap1 /dev/nda1
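The new swap partitions also need entries in /etc/fstab on the migrated system, otherwise they won't be used after the reboot. A sketch, assuming the swap labels created above (edit the fstab that lives on the new pool before rebooting):

```
# /etc/fstab on the migrated system (sketch)
/dev/gpt/swap0   none   swap   sw   0   0
/dev/gpt/swap1   none   swap   sw   0   0
```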
# Create the ZFS partition (rest of disk)
gpart add -t freebsd-zfs -l zfs0 /dev/nda0
gpart add -t freebsd-zfs -l zfs1 /dev/nda1
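Before creating the pool, it is worth double-checking the layout. A quick sketch; each disk should show three partitions (efi, freebsd-swap, freebsd-zfs) with the labels assigned above:

```shell
# Show partition layout with gpart labels (-l) on both new disks
gpart show -l nda0
gpart show -l nda1
```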
Step 5: Create a new pool
mkdir /altroot
zpool create -f \
-o altroot=/altroot \
-o cachefile=/tmp/zpool.cache \
-O mountpoint=none \
-O atime=off \
-O compression=lz4 \
zroot_n mirror /dev/gpt/zfs0 /dev/gpt/zfs1
zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zroot 2.72T 627G 2.11T - - 32% 22% 1.00x ONLINE -
zroot_n 1.81T 564K 1.81T - - 0% 0% 1.00x ONLINE /altroot
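Since the target pool is smaller than the source, it makes sense to verify that the allocated data actually fits before replicating. A small sketch using zpool's parseable output (`-Hp` prints raw byte values without headers):

```shell
#!/bin/sh
# Sanity check: old pool's allocated bytes must fit on the new pool
alloc=$(zpool list -Hp -o alloc zroot)
size=$(zpool list -Hp -o size zroot_n)
if [ "$alloc" -lt "$size" ]; then
    echo "OK: ${alloc} bytes fit into ${size} bytes"
else
    echo "WARNING: allocated data does not fit on the new pool" >&2
fi
```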
Step 6: ZFS replication of data to the new pool
I performed the replication in two passes: a base replication and then another delta replication shortly before the migration.
zfs snapshot -r zroot@2delete-later-full2
zfs send -Rv zroot@2delete-later-full2 | zfs receive -uFv zroot_n
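After the base replication I would verify that all snapshots arrived on the target pool. A sketch; the sed expressions normalize the differing pool names so the lists become comparable:

```shell
#!/bin/sh
# Compare snapshot lists of source and target pool, ignoring the pool name
zfs list -H -t snapshot -o name -r zroot \
    | sed 's|^zroot/|POOL/|;s|^zroot@|POOL@|' > /tmp/src.list
zfs list -H -t snapshot -o name -r zroot_n \
    | sed 's|^zroot_n/|POOL/|;s|^zroot_n@|POOL@|' > /tmp/dst.list
diff -u /tmp/src.list /tmp/dst.list && echo "snapshots match"
```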
Step 7: Final Replication
On the day of the migration, I disabled the Bastille jail manager. I wanted to make sure that no jail (for example the mail server) was already producing new data while I didn't yet know whether everything had worked.
Disable Bastille's autoboot in /etc/rc.conf
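I would do this with sysrc rather than editing the file by hand. A sketch, assuming the standard rc.conf variable used by Bastille:

```shell
# Disable Bastille's autostart until the migration is verified
sysrc bastille_enable=NO
# Optionally also stop the currently running jails
service bastille stop
```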
zfs snapshot -r zroot@2delete-later-full2-1
zfs send -R -I zroot@2delete-later-full2 zroot@2delete-later-full2-1 | zfs receive -Fv zroot_n
Step 8: Set mountpoints
zfs set mountpoint=/ zroot_n/ROOT/default
zpool set bootfs=zroot_n/ROOT/default zroot_n
Step 9: Copy bootloader (UEFI only)
zfs mount -a
mkdir -p /mnt/boot/efi
mount -t msdosfs /dev/gpt/efi0 /mnt/boot/efi
mkdir -p /mnt/boot/efi/EFI/BOOT
cp /boot/loader.efi /mnt/boot/efi/EFI/BOOT/BOOTX64.EFI
umount /mnt/boot/efi
mount -t msdosfs /dev/gpt/efi1 /mnt/boot/efi
mkdir -p /mnt/boot/efi/EFI/BOOT
cp /boot/loader.efi /mnt/boot/efi/EFI/BOOT/BOOTX64.EFI
umount /mnt/boot/efi
Step 10: Rename the pools
At this point, you must boot from a USB stick (e.g. a FreeBSD live image), because a pool cannot be renamed while the system is running from it.
zpool import zroot zroot_old
zpool export zroot_old
zpool import -f zroot_n zroot
Booting should work even with the old disks still attached, but it's probably safer to deactivate (or physically disconnect) them for the first boot.
Step 11: Reboot and test
zpool status
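Beyond the pool status, I would check a few more things after the first boot. A quick sketch:

```shell
# The pool should now be named zroot again and be ONLINE
zpool status zroot
# Is the root filesystem mounted from the new pool?
mount | grep ' / '
# Is swap active on the new partitions (if added to /etc/fstab)?
swapinfo
# Is the boot dataset still set correctly?
zpool get bootfs zroot
```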