Upgrading the hardware in our systems is always a reason to be happy, but reinstalling the operating system and leaving it just the way we like it is a laborious task. Fortunately, ZFS can transparently expand a single-disk vdev into a mirror, so we can migrate zroot to another disk in a few steps without having to reinstall.
First we connect the new disk; in my case it is a drive in an external USB enclosure, which generated these entries in the dmesg system log:
da1 at umass-sim1 bus 1 scbus7 target 0 lun 0
da1: <sobetter EXT 0204> Fixed Direct Access SPC-4 SCSI device
da1: Serial Number 3605E804107440A
da1: 40.000MB/s transfers
da1: 953869MB (1953525168 512 byte sectors)
da1: quirks=0x2<NO_6_BYTE>
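If the messages have already scrolled out of the console, something along these lines should pull them back out of the kernel message buffer (the grep pattern is simply the device name assigned above):
dmesg | grep da1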
We look for the da1 disk in the camcontrol list:
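A listing like the one below can be produced with, for example:
camcontrol devlist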
<WDC WDS120G2G0A-00JH30 UE510000> at scbus0 target 0 lun 0 (ada0,pass0)
<HL-DT-ST DVD+-RW GS20N A106> at scbus1 target 0 lun 0 (cd0,pass1)
<WD 10EZEX External 1.75> at scbus6 target 0 lun 0 (pass2,da0)
<sobetter EXT 0204> at scbus7 target 0 lun 0 (pass3,da1)
We check the current status of the system:
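For example, with the standard zpool tooling:
zpool status zroot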
pool: zroot
state: ONLINE
scan: scrub repaired 0B in 00:13:16 with 0 errors on Thu May 19 08:13:16 2022
config:
NAME STATE READ WRITE CKSUM
zroot ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
errors: No known data errors
Knowing which disk holds zroot and which is the new one, we back up the partition scheme from the old disk and restore it onto the new one:
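A typical way to do this, assuming ada0 is the source and da1 the destination, is to pipe gpart backup into gpart restore (-F wipes any existing scheme on the target first):
gpart backup ada0 | gpart restore -F da1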
We make sure that the backup has been done correctly:
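For example, showing both disks at once:
gpart show ada0 da1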
=> 40 234454960 ada0 GPT (112G)
40 1024 1 freebsd-boot (512K)
1064 984 - free - (492K)
2048 4194304 2 freebsd-swap (2.0G)
4196352 230256640 3 freebsd-zfs (110G)
234452992 2008 - free - (1.0M)
=> 40 1953525088 da1 GPT (932G)
40 1024 1 freebsd-boot (512K)
1064 984 - free - (492K)
2048 4194304 2 freebsd-swap (2.0G)
4196352 230256640 3 freebsd-zfs (110G)
234452992 1719072136 - free - (820G)
Since the new disk is larger, we recreate the third partition so that it occupies the additional space. First we delete the existing partition:
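A minimal sketch of the deletion, using the partition index from the listing above:
gpart delete -i 3 da1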
da1p3 deleted
We create a new one, taking the partition alignment into account:
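A possible invocation; the 1M alignment (-a 1m) is an assumption, but it is consistent with the layout shown below:
gpart add -t freebsd-zfs -a 1m -i 3 da1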
da1p3 added
We check the status:
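For example:
gpart show da1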
=> 40 1953525088 da1 GPT (932G)
40 1024 1 freebsd-boot (512K)
1064 984 - free - (492K)
2048 4194304 2 freebsd-swap (2.0G)
4196352 1949327360 3 freebsd-zfs (930G)
1953523712 1416 - free - (708K)
We attach the new disk's partition to the zroot vdev:
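A minimal sketch of the attach, using the partition names from above (existing device first, new device second):
zpool attach zroot ada0p3 da1p3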
We will see that data is being copied from one disk to another:
pool: zroot
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Thu May 19 11:17:19 2022
11.5G scanned at 420M/s, 176M issued at 6.28M/s, 86.4G total
182M resilvered, 0.20% done, no estimated completion time
config:
NAME STATE READ WRITE CKSUM
zroot ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
da1p3 ONLINE 0 0 0 (resilvering)
errors: No known data errors
Depending on how the system boots, we install the bootstrap code in one way or another:
- EFI: We dump the RAW image of an EFI partition onto the disk’s efi partition.
- BIOS: We install the bootstrap code in the MBR of the disk (-b /boot/pmbr) and the contents of the loader (-p /boot/gptzfsboot) in the freebsd-boot partition.
First, we identify the type of installation and then install the bootstrap code:
EFI. Identification:
gpart show da1|grep efi
40 409600 1 efi (200M)
Installation, disk da1, partition 1:
gpart bootcode -p /boot/boot1.efifat -i 1 da1
BIOS. Identification:
gpart show da1|grep freebsd-boot
40 1024 1 freebsd-boot (512K)
Installation, disk da1, partition 1:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da1
EFI + BIOS (both partitions present). Identification:
gpart show da1
40 409600 1 efi (200M)
409640 1024 2 freebsd-boot (512K)
Installation:
disk da1, partition 1 -> EFI
disk da1, partition 2 -> BIOS
gpart bootcode -p /boot/boot1.efifat -i 1 da1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 da1
In my case, it is a BIOS system:
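That is, the freebsd-boot variant shown above:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da1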
partcode written to da1p1
bootcode written to da1
Once the mirror has finished resilvering, we will see both disks online:
pool: zroot
state: ONLINE
scan: resilvered 90.1G in 03:20:15 with 0 errors on Thu May 19 14:37:34 2022
config:
NAME STATE READ WRITE CKSUM
zroot ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
da1p3 ONLINE 0 0 0
errors: No known data errors
We shut down the system:
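For example, powering the machine off:
shutdown -p now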
We remove the original disk and replace it with the new one.
We boot again.
The status after the disk change is as follows:
pool: zroot
state: DEGRADED
status: One or more devices could not be used because the label is missing or
invalid. Sufficient replicas exist for the pool to continue
functioning in a degraded state.
action: Replace the device using 'zpool replace'.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
scan: resilvered 90.1G in 03:20:15 with 0 errors on Thu May 19 14:37:34 2022
config:
NAME STATE READ WRITE CKSUM
zroot DEGRADED 0 0 0
mirror-0 DEGRADED 0 0 0
ada0p3 FAULTED 0 0 0 corrupted data
ada0p3 ONLINE 0 0 0
errors: No known data errors
When ada0 is swapped out, the new disk is detected as ada0, so the pool shows two entries named ada0p3: the FAULTED one refers to the old disk's partition, which had that name before it was removed, and the ONLINE one is the former da1p3, remapped to ada0p3 on this last boot.
We check the disks:
<Samsung SSD 870 EVO 1TB SVT02B6Q> at scbus0 target 0 lun 0 (ada0,pass0)
<HL-DT-ST DVD+-RW GS20N A106> at scbus1 target 0 lun 0 (cd0,pass1)
<WD 10EZEX External 1.75> at scbus6 target 0 lun 0 (pass2,da0)
We remove the FAULTED disk from the pool:
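Since both entries show up with the same name (ada0p3), the faulted one may need to be referenced by its GUID; a possible approach, with the GUID left as a placeholder:
zpool status -g zroot    # list vdev GUIDs instead of device names
zpool detach zroot <guid-of-the-FAULTED-vdev>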
This leaves the pool as follows:
pool: zroot
state: ONLINE
scan: resilvered 90.1G in 03:20:15 with 0 errors on Thu May 19 14:37:34 2022
config:
NAME STATE READ WRITE CKSUM
zroot ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
errors: No known data errors
If we check the pool space, it will still have the previous size:
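For example:
zpool list zroot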
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zroot 109G 91.0G 18.0G - 819G 73% 83% 1.00x ONLINE -
We expand the pool:
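With autoexpand off (see below), the extra space can be claimed manually by expanding the device; a minimal sketch, using the device name from the status above:
zpool online -e zroot ada0p3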
We check the data again:
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zroot 929G 91.0G 838G - - 8% 9% 1.00x ONLINE -
For reference, I include my autoexpand configuration, since it may be useful if you read about autoexpand in some forum:
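For example:
zpool get autoexpand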
NAME PROPERTY VALUE SOURCE
storage autoexpand off default
zroot autoexpand off default
As a final note, it is worth highlighting a side effect of this migration: the pool's fragmentation has dropped considerably (from 73% to 8%, as seen in the zpool list output).