In two previous articles we installed FreeBSD onto a mirrored pair of disks and added an additional mirror for a UFS data partition. gmirror and UFS are mature technologies that are stable and efficient, but they lack some of the functionality expected from modern filesystems. That's where ZFS comes into play. According to the FreeBSD Handbook, ZFS is an advanced file system designed to solve major problems found in previous storage subsystem software. More than a file system, ZFS is fundamentally different from traditional file systems: combining the traditionally separate roles of volume manager and file system gives ZFS unique advantages.
This article assumes FreeBSD has already been installed onto a mirrored pair of disks and a UFS data partition has already been added. We will now add a RAIDZ volume consisting of four SATA disks, which can survive the failure of one disk. Write performance (for synchronous writes) will be increased by configuring two NVMe disks as a mirrored ZIL (ZFS Intent Log), while read performance will be increased by adding a single NVMe disk as cache (L2ARC).
Assuming we have added four SATA SSDs (ada2, ada3, ada4, and ada5) and three NVMe disks (nda2, nda3, and nda4) to our existing mirrored setup, with the OS on nda0 and nda1 (NVMe) and the mirrored UFS data partition on ada0 and ada1 (SSD):
sysctl kern.disks
kern.disks: ada5 ada4 ada3 ada2 ada1 ada0 nda4 nda3 nda2 nda1 nda0
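Optionally, geom can show more details about each disk (size, description, serial number) before we proceed:
geom disk list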
We configure the system to start ZFS at boot time and start it now:
sysrc zfs_enable="YES"
service zfs start
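To confirm that the ZFS kernel module is now loaded, we can optionally check with kldstat:
kldstat | grep zfs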
We label the data disks:
glabel label ZFSDATA0 /dev/ada2
glabel label ZFSDATA1 /dev/ada3
glabel label ZFSDATA2 /dev/ada4
glabel label ZFSDATA3 /dev/ada5
ZIL disks:
glabel label ZIL0 /dev/nda2
glabel label ZIL1 /dev/nda3
And finally the cache disk:
glabel label ZCACHE0 /dev/nda4
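Before building the pool, we can optionally verify that all labels were created as expected:
glabel status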
We create a pool with a RAIDZ vdev from the disks labeled for ZFSDATA, together with a mirrored log device built from the two disks labeled for ZIL:
zpool create zfsdata raidz /dev/label/ZFSDATA0 /dev/label/ZFSDATA1 /dev/label/ZFSDATA2 \
/dev/label/ZFSDATA3 log mirror /dev/label/ZIL0 /dev/label/ZIL1
We add the disk labeled ZCACHE0 as a cache device for our newly created pool:
zpool add zfsdata cache /dev/label/ZCACHE0
Although not strictly necessary, we will now reboot to make sure the settings are applied on the next boot.
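One way to reboot cleanly from the command line:
shutdown -r now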
After the reboot, verify the state of the pool:
zpool status
  pool: zfsdata
 state: ONLINE
config:

        NAME                STATE     READ WRITE CKSUM
        zfsdata             ONLINE       0     0     0
          raidz1-0          ONLINE       0     0     0
            label/ZFSDATA0  ONLINE       0     0     0
            label/ZFSDATA1  ONLINE       0     0     0
            label/ZFSDATA2  ONLINE       0     0     0
            label/ZFSDATA3  ONLINE       0     0     0
        logs
          mirror-1          ONLINE       0     0     0
            label/ZIL0      ONLINE       0     0     0
            label/ZIL1      ONLINE       0     0     0
        cache
          label/ZCACHE0     ONLINE       0     0     0

errors: No known data errors
List the ZFS datasets:
zfs list
NAME      USED  AVAIL  REFER  MOUNTPOINT
zfsdata   224K  5.85T  32.9K  /zfsdata
And finally our mount points:
df -h
Filesystem                  Size    Used   Avail Capacity  Mounted on
/dev/gpt/OS-root            992M    277M    635M    30%    /
devfs                       1.0K      0B    1.0K     0%    /dev
/dev/gpt/OS-home            9.7G     48K    8.9G     0%    /home
/dev/gpt/OS-usr             1.9G    778M    1.0G    43%    /usr
/dev/gpt/OS-usr-local       7.7G    8.0K    7.1G     0%    /usr/local
/dev/gpt/OS-var             3.9G    363M    3.2G    10%    /var
/dev/gpt/UFSDATA-ufsdata    992G    8.0K    912G     0%    /ufsdata
procfs                      8.0K      0B    8.0K     0%    /proc
tmpfs                        16G    4.0K     16G     0%    /tmp
zfsdata                     5.8T     33K    5.8T     0%    /zfsdata
No filesystem formatting, no manual directory creation, no fstab entries - ZFS does it differently: creating the pool also creates a filesystem and mounts it automatically.
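For example, creating an additional dataset is a single command; the name zfsdata/projects below is just an illustration, and ZFS creates and mounts it automatically under /zfsdata/projects:
zfs create zfsdata/projects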
Here's a transcript of the terminal session: