RAID-Z can provide different degrees of fault tolerance depending on the chosen level (RAID-Z1, RAID-Z2 or RAID-Z3). In this article, we will set up a RAID-Z2 that we will use as data storage. This simplifies its configuration, since we won't have to worry about the OS bootloader.
Before continuing with the article, it is advisable to review this previous one so that all the concepts are clear.
We must bear in mind that setting up a RAID-Z1 is not recommended: when a failed disk is replaced, the resilvering process starts, and the remaining disks (which will most likely be the same age as the failed one) are put under stress, with an approximate failure probability of 8%. If a second disk were to fail during the resilver, the data would be irretrievably lost. Therefore, we should always choose a RAID with a fault tolerance of more than one disk. In addition, a degraded RAID-Z penalizes performance, since the missing disk's data must be reconstructed from the parity information on the remaining disks.
We must also be clear that a RAID-Z cannot be converted to another RAID type once it has been created. That is why it is so important to choose the right type from the start.
We check the available disks (this is a CBSD VM with virtio-blk disks):
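A simple ls will do, for example:
root@Melocotonazo:~ # ls -l /dev/vtbd*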
crw-r----- 1 root operator 0x4f Nov 25 21:54 /dev/vtbd0
crw-r----- 1 root operator 0x51 Nov 25 21:54 /dev/vtbd1
crw-r----- 1 root operator 0x63 Nov 25 21:54 /dev/vtbd2
crw-r----- 1 root operator 0x64 Nov 25 21:54 /dev/vtbd3
crw-r----- 1 root operator 0x66 Nov 25 21:54 /dev/vtbd3p1
crw-r----- 1 root operator 0x67 Nov 25 21:54 /dev/vtbd3p2
crw-r----- 1 root operator 0x68 Nov 25 21:54 /dev/vtbd3p3
crw-r----- 1 root operator 0x69 Nov 25 21:54 /dev/vtbd3p4
crw-r----- 1 root operator 0x65 Nov 25 21:54 /dev/vtbd4
We verify that the system disk is vtbd3:
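For example, by querying the root pool:
root@Melocotonazo:~ # zpool status zroot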
  pool: zroot
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          vtbd3p4   ONLINE       0     0     0

errors: No known data errors
We create an empty GPT partition table on each of the RAID disks:
root@Melocotonazo:~ # gpart create -s GPT vtbd0
root@Melocotonazo:~ # gpart create -s GPT vtbd1
root@Melocotonazo:~ # gpart create -s GPT vtbd2
root@Melocotonazo:~ # gpart create -s GPT vtbd4
We add a partition to the vtbd0 disk:
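For example, a single freebsd-zfs partition spanning the whole disk:
root@Melocotonazo:~ # gpart add -t freebsd-zfs vtbd0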
We check that it has been created correctly:
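For instance:
root@Melocotonazo:~ # gpart show vtbd0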
=>     40  2097328  vtbd0  GPT  (1.0G)
       40  2097328      1  freebsd-zfs  (1.0G)
We copy the partition scheme of vtbd0 to the rest of the RAID disks:
root@Melocotonazo:~ # gpart backup vtbd0 | gpart restore -F vtbd1
root@Melocotonazo:~ # gpart backup vtbd0 | gpart restore -F vtbd2
root@Melocotonazo:~ # gpart backup vtbd0 | gpart restore -F vtbd4
We create the RAID-Z2:
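For example (mypool is an arbitrary name; the member partitions match the status output below):
root@Melocotonazo:~ # zpool create mypool raidz2 vtbd0p1 vtbd1p1 vtbd2p1 vtbd4p1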
We check its status:
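For instance:
root@Melocotonazo:~ # zpool status mypool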
  pool: mypool
 state: ONLINE
  scan: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        mypool       ONLINE       0     0     0
          raidz2-0   ONLINE       0     0     0
            vtbd0p1  ONLINE       0     0     0
            vtbd1p1  ONLINE       0     0     0
            vtbd2p1  ONLINE       0     0     0
            vtbd4p1  ONLINE       0     0     0

errors: No known data errors
We remove a disk and check the pool status:
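For instance:
root@Melocotonazo:~ # zpool status mypool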
  pool: mypool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-4J
  scan: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        mypool                   DEGRADED     0     0     0
          raidz2-0               DEGRADED     0     0     0
            vtbd0p1              ONLINE       0     0     0
            9621144259020512038  FAULTED      0     0     0  was /dev/vtbd1p1
            vtbd1p1              ONLINE       0     0     0
            vtbd3p1              ONLINE       0     0     0

errors: No known data errors
We add a replacement disk, which is easily recognizable since it has no partitions:
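Listing the devices again, for example:
root@Melocotonazo:~ # ls -l /dev/vtbd*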
crw-r----- 1 root operator 0x60 Nov 25 22:05 /dev/vtbd0
crw-r----- 1 root operator 0x65 Nov 25 22:05 /dev/vtbd0p1
crw-r----- 1 root operator 0x61 Nov 25 22:05 /dev/vtbd1
crw-r----- 1 root operator 0x62 Nov 25 22:05 /dev/vtbd2
crw-r----- 1 root operator 0x66 Nov 25 22:05 /dev/vtbd2p1
crw-r----- 1 root operator 0x63 Nov 25 22:05 /dev/vtbd3
crw-r----- 1 root operator 0x67 Nov 25 22:05 /dev/vtbd3p1
crw-r----- 1 root operator 0x68 Nov 25 22:05 /dev/vtbd3p2
crw-r----- 1 root operator 0x69 Nov 25 22:05 /dev/vtbd3p3
crw-r----- 1 root operator 0x6a Nov 25 22:05 /dev/vtbd3p4
crw-r----- 1 root operator 0x64 Nov 25 22:05 /dev/vtbd4
crw-r----- 1 root operator 0x6b Nov 25 22:05 /dev/vtbd4p1
We prepare the disk by creating an empty GPT partition table and copying the partition scheme of vtbd0 to the new disk:
root@Melocotonazo:~ # gpart create -s GPT vtbd1
root@Melocotonazo:~ # gpart backup vtbd0 | gpart restore -F vtbd1
We replace the failed disk:
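For example, since the new disk has taken over the old device name:
root@Melocotonazo:~ # zpool replace mypool vtbd1p1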
We check the pool status:
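For instance:
root@Melocotonazo:~ # zpool status mypool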
  pool: mypool
 state: ONLINE
  scan: resilvered 268K in 0 days 00:00:00 with 0 errors on Wed Nov 25 22:08:18 2020
config:

        NAME         STATE     READ WRITE CKSUM
        mypool       ONLINE       0     0     0
          raidz2-0   ONLINE       0     0     0
            vtbd0p1  ONLINE       0     0     0
            vtbd1p1  ONLINE       0     0     0
            vtbd2p1  ONLINE       0     0     0
            vtbd4p1  ONLINE       0     0     0

errors: No known data errors