Proxmox’s GUI makes it incredibly easy to create and run Virtual Environments (VE) such as Virtual Machines (VM) and containers, as well as to manage various disks.
One area where I found some difficulty was getting a ZFS pool to be used as a backup destination for some of my VMs. I have two 1 TB hard drives that are both over 10 years old. Yes, that’s 10 years old, and with no errors! One is a Seagate ST500DM005 and the other is a Western Digital WD10EADS-65L5B1. While the models and manufacturers don’t match, they can still be added to a ZFS pool together because their sizes are the same.
The age of the drives doesn’t worry me because they are in a mirrored ZFS pool; if one fails, the data is still safe on the other. Additionally, I have created two NFS shares on my TrueNAS system where backups are also being handled. That’s three separate backup destinations for these VMs.
Like I said above, Proxmox doesn’t allow you to create a backup of a VM to a ZFS pool within the GUI. However, with some help from the r/Proxmox Reddit community and the Proxmox documentation, I was able to use this new ZFS pool as a backup destination by creating a dataset under the zpool and mounting it as a directory.
Create the ZFS Pool
First, the pool needs to be created. This can be done in the Proxmox GUI or on the command line. I’ll show how to do both here.
You’ll need to decide on what RAIDZ level you want to use. Levels require a minimum number of disks and have various fault tolerances. This RAIDZ calculator site has an excellent summary and comparison to help.
In my situation, I only have two 1 TB drives. Therefore, it’s best for me to use a mirror approach. This means the data is identical on both disks; if one disk fails, I still have a working copy.
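As a rough sketch of how the common redundancy levels translate into commands (the pool and device names here are placeholders, not my actual disks):

```shell
# Hypothetical examples of common ZFS redundancy levels (names are placeholders):
zpool create tank mirror sda sdb            # 2+ disks; data identical on each mirror member
zpool create tank raidz1 sda sdb sdc        # 3+ disks; single parity, survives 1 disk failure
zpool create tank raidz2 sda sdb sdc sdd    # 4+ disks; double parity, survives 2 disk failures
```

More parity means more fault tolerance but more disks consumed by redundancy, which is what the calculator site linked above helps you weigh.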
GUI ZFS Pool Creation
Under Datacenter > Your Node > Disks > ZFS, select Create: ZFS. Enter a name, select the available devices you’d like to add to your pool along with the RAID Level, then select Create.
If all is well, this new pool will appear under your ZFS storage.
CLI ZFS Pool Creation
If you choose the CLI option, we first need to find the device paths for our two disks. The lsblk command will display all the block devices, i.e., all the disks.
lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    0 223.6G  0 disk
sdb           8:16   0 931.5G  0 disk
├─sdb1        8:17   0 931.5G  0 part
└─sdb9        8:25   0     8M  0 part
sdc           8:32   0 931.5G  0 disk
├─sdc1        8:33   0 931.5G  0 part
└─sdc9        8:41   0     8M  0 part
sdd           8:48   0 111.8G  0 disk
├─sdd1        8:49   0  1007K  0 part
├─sdd2        8:50   0   512M  0 part
└─sdd3        8:51   0 111.3G  0 part
sde           8:64   0 111.8G  0 disk
├─sde1        8:65   0  1007K  0 part
├─sde2        8:66   0   512M  0 part
└─sde3        8:67   0 111.3G  0 part
zd0         230:0    0  16.5G  0 disk
zd16        230:16   0  16.5G  0 disk
zd32        230:32   0  16.5G  0 disk
zd48        230:48   0  16.5G  0 disk
zd64        230:64   0   120G  0 disk
└─zd64p1    230:65   0   120G  0 part
zd80        230:80   0   500G  0 disk
├─zd80p1    230:81   0     1M  0 part
└─zd80p2    230:82   0   500G  0 part
nvme0n1     259:0    0 931.5G  0 disk
├─nvme0n1p1 259:2    0 931.5G  0 part
└─nvme0n1p9 259:3    0     8M  0 part
nvme1n1     259:1    0 931.5G  0 disk
├─nvme1n1p1 259:4    0 931.5G  0 part
└─nvme1n1p9 259:5    0     8M  0 part
sdb and sdc are the only two hard disks that appear as 1 TB (actually 931.5 GB). The other two 931.5 GB devices are the mirrored NVMe drives I use for my VM pool (shown as vm_pool later on), where my virtual machines live.
Create the pool with:

zpool create hdd_pool mirror /dev/sdb /dev/sdc

The command says that we’ll create a pool titled “hdd_pool” which will be a mirror of the sdb and sdc disks.
Verify with zpool list:
zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
hdd_pool   928G   116K   928G        -         -     0%     0%  1.00x  ONLINE  -
rpool      111G  17.2G  93.8G        -         -     2%    15%  1.00x  ONLINE  -
vm_pool    928G   159G   769G        -         -     7%    17%  1.00x  ONLINE  -
So we can see the hdd_pool has been created.
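If you want a closer look at the new pool than zpool list gives, zpool status shows the vdev layout and health (the exact output will vary by system):

```shell
# Show the mirror layout and health of the new pool
zpool status hdd_pool
```

You should see both disks listed under a single mirror vdev with a state of ONLINE.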
Creating and Mounting a ZFS Dataset
Now that the pool has been created, we need to create a new dataset that is mounted to a directory. It’s a best practice to have a separate dataset for each VM; see this Reddit thread for the explanation.
Here, we’re going to create a new directory under /mnt/ and then two directories under that.
mkdir /mnt/hdd_pool_backups
mkdir /mnt/hdd_pool_backups/docker_awme
mkdir /mnt/hdd_pool_backups/win10_vm
As you can see, I created a directory called “hdd_pool_backups” with two directories under it, “docker_awme” and “win10_vm”. That can be verified like this:
ls /mnt
hdd_pool_backups  hostrun  pve
Next, we need to create the datasets under the pool using zfs create:
zfs create hdd_pool/backups
zfs create hdd_pool/backups/docker_awme -o mountpoint=/mnt/hdd_pool_backups/docker_awme
zfs create hdd_pool/backups/win10_vm -o mountpoint=/mnt/hdd_pool_backups/win10_vm
This creates a dataset under “hdd_pool” titled “backups”. Under that backups dataset, we create two more datasets whose mountpoints point to the respective directories created in the previous step.
How that looks:
zfs list
NAME                                     USED  AVAIL  REFER  MOUNTPOINT
hdd_pool                                 222K   899G    25K  /hdd_pool
hdd_pool/backups                          72K   899G    24K  /hdd_pool/backups
hdd_pool/backups/docker_awme              24K   899G    24K  /mnt/hdd_pool_backups/docker_awme
hdd_pool/backups/win10_vm                 24K   899G    24K  /mnt/hdd_pool_backups/win10_vm
rpool                                   17.2G  90.4G   104K  /rpool
rpool/ROOT                              17.2G  90.4G    96K  /rpool/ROOT
rpool/ROOT/pve-1                        17.2G  90.4G  17.2G  /
rpool/data                                96K  90.4G    96K  /rpool/data
vm_pool                                  816G  83.4G    96K  /vm_pool
vm_pool/vm-100-disk-0                    624G   586G  93.3G  -
vm_pool/vm-100-state-a20210401_2151CST  17.0G  94.1G  6.27G  -
vm_pool/vm-100-state-a20210403_1410CST  17.0G  96.4G  3.94G  -
vm_pool/vm-100-state-a20210403_1531CST  17.0G  95.1G  5.26G  -
vm_pool/vm-100-state-a20210403_2031CST  17.0G  95.7G  4.63G  -
vm_pool/vm-150-disk-0                    124G   190G  17.2G  -
You can see that “hdd_pool/backups/docker_awme” and “hdd_pool/backups/win10_vm” were created and have their respective mountpoints.
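If you want to double-check just the mount configuration rather than scanning the full zfs list output, zfs get works well (a sketch; substitute your own dataset names):

```shell
# Confirm where each backup dataset is mounted and that it is actually mounted
zfs get mountpoint,mounted hdd_pool/backups/docker_awme
zfs get mountpoint,mounted hdd_pool/backups/win10_vm
```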
Create Directories in Proxmox
We’re getting close!
Now we just need to create a Directory storage under the Datacenter. Navigate to Datacenter > Storage > Add > Directory and use the following information:
- ID: name – I’m using hdd_pool_backup_docker_awme and hdd_pool_backup_win10_vm
- Directory: <absolute dataset path> – This is the zfs dataset path, not the system file path. There should also be a leading forward slash (“/”) such as /hdd_pool/backups/docker_awme
- Content: Backup
- Set Backup Retention as needed
Once you hit create, the directory will appear in the list.
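The same Directory storage can also be added from the shell with pvesm instead of the GUI. A sketch, assuming the IDs and dataset paths used above:

```shell
# Add directory storage entries pointing at the dataset paths (note the leading slash)
pvesm add dir hdd_pool_backup_docker_awme --path /hdd_pool/backups/docker_awme --content backup
pvesm add dir hdd_pool_backup_win10_vm --path /hdd_pool/backups/win10_vm --content backup
```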
Create the Backup Job
The final step!
Now you can create a Backup: Datacenter > Backup > Add. Then under Storage, you should see the new directory that was created in the previous step. You can select all the other options that are unique to your situation.
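If you prefer the shell for a one-off backup rather than a scheduled job, vzdump can target the new storage directly. A sketch, assuming 100 is your VM’s ID and the storage ID from the previous step:

```shell
# Back up VM 100 to the new directory storage, using snapshot mode and zstd compression
vzdump 100 --storage hdd_pool_backup_docker_awme --mode snapshot --compress zstd
```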
To double-check that we’re in good shape, I went ahead and clicked “Run now” just to make sure that the backup is indeed working as expected. Under Tasks at the bottom of the Proxmox GUI, you can click on the currently running task to see the progress of the backup. Once complete, you can also check the ZFS path on the command line to see the final result:
ls -lha /hdd_pool/backups/docker_awme/dump
total 55G
drwxr-xr-x 2 root root   4 Apr 11 05:49 .
drwxr-xr-x 3 root root   3 Apr 11 05:23 ..
-rw-r--r-- 1 root root 12K Apr 11 05:49 vzdump-qemu-100-2021_04_11-05_26_21.log
-rw-r--r-- 1 root root 55G Apr 11 05:49 vzdump-qemu-100-2021_04_11-05_26_21.vma.zst
Boom! 55 GB has been saved. If you’re wondering why it’s so large: this is a Docker virtual machine (bet you could have guessed that from some of the pool and directory names) that holds many containers along with their respective images and volumes.
So there you go: now you can use a ZFS pool to back up your VMs on Proxmox.