Tried ZFS on Linux?

Managing ZFS

After making sure that the zfs-fuse daemon is running, we need a ZFS pool comprising one or more devices. We will create a pool, say ‘K7’, representing a group with many users, each having their own filesystem on ‘K7’. A user, say ‘ajc’, will get a filesystem of his own, mounted under ‘K7’ with the same user name and with the required properties.

zpool create K7 sda10

This will create a pool named ‘K7’ using the /dev/sda10 device. You can also give the full path /dev/sda10 instead of just sda10, but that isn’t required, since zfs-fuse searches for devices in /dev by default. If the -n option is specified after create (as in zpool create -n K7 sda10), no pool is created; instead, the command performs a dry run and merely shows the layout the pool would have after the execution of that command.

By issuing the above command, we not only created a pool but also implicitly created a dataset (more specifically, a filesystem), which is mounted by default at /K7. It is therefore important to avoid pool names that clash with existing directories under / (the root directory). If you want to explicitly specify the mount point, say at /mnt/k7 or elsewhere, then execute the following:

zpool create -m /mnt/k7 K7 sda10

…or if pool ‘K7’ already exists:

zfs set mountpoint=/mnt/k7 K7
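
You can confirm that the property took effect with:

zfs get mountpoint K7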

However, after changing the mount point this way, ‘K7’ won’t be mounted anywhere. So we need to either mount all filesystems automatically by issuing the following command:

zfs mount -a

…or any specific filesystem as:

zfs mount K7

For unmounting, we use the unmount subcommand in place of mount in the above commands.
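
For example, to unmount the filesystem of ‘K7’:

zfs unmount K7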

Also, at any point in time, if you want to list all the pools in your system, execute the command given below:

zpool list

The health status of the pool can be checked with the following:

zpool status

This command can take the optional arguments -x and -v for a quick overview and verbose status, respectively.
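
For instance, to run a quick health check on our pool and then get a verbose, per-device report:

zpool status -x K7
zpool status -v K7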

Since we have created a pool named ‘K7’ along with a filesystem of the same name and mounted it at /mnt/k7, to make proper use of the pool we may need more suitably named filesystems within ‘K7’. This is achieved by using the dataset-specific command zfs rather than the pool command zpool.

For example:

zfs create K7/ajc

…will create a filesystem mounted at the sub-directory ‘ajc’ under the directory where K7 is mounted, which in our case is /mnt/k7/ajc. Just as mount points can be specified for pools, as mentioned above, filesystems take similar options:

zfs create -o mountpoint=/mnt/k7/ajc K7/ajc

Or if you want to change the mount point of an already created filesystem, use:

zfs set mountpoint=/mnt/k7/ajc K7/ajc

It is quite possible that, after some time, the space you allocated for the pool will run out. Switching on the built-in compression can be a temporary, yet ready-made, solution for such a situation.

zfs set compression=on K7/ajc
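
How much the compression actually helps can be checked via the read-only compressratio property:

zfs get compressratio K7/ajc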

Another way to tackle a space shortage is to add devices of the required capacity to the pool; their space gets added to what is already available.

zpool add K7 sda11

Also, as a counter operation to add, we have remove, which removes devices from the pool, with the restriction that removal can be performed only on hot spares (that is, inactive devices that are made active when the pool is degraded).
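
As a sketch, assuming a device sda12 (a hypothetical name) is attached to ‘K7’ as a hot spare, it can later be removed like this:

zpool add K7 spare sda12
zpool remove K7 sda12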

Like mountpoint and compression, many other properties of a filesystem, such as quota and reservation, can also be set:

zfs set quota=3G K7/ajc
zfs set reservation=1G K7/ajc

Properties of a filesystem can be viewed using get as follows:

zfs get quota K7/ajc

And to see all properties, issue the following command:

zfs get all K7/ajc

As mentioned earlier, ZFS gives a lot of importance to data validation. An explicit, on-demand check of all the data in a pool is called scrubbing, and it can be performed with the scrub command:

zpool scrub K7
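
A scrub that is still in progress can be stopped with the -s switch:

zpool scrub -s K7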

If at any point you want to see all the commands you issued on pools, use:

zpool history

Or for a particular pool like K7, issue the following:

zpool history K7

Likewise, use iostat to get statistics about the I/O operations on pools.
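
For example, the following prints I/O statistics for ‘K7’ every five seconds:

zpool iostat K7 5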

Now, for creating a snapshot of any filesystem, we can issue:

zfs snapshot K7@snap1

The snapshot of a filesystem is represented by its name followed by ‘@’ and then the snapshot name. Use the -r option to create snapshots recursively on all filesystems under the specified filesystem, as in what’s shown below:

zfs snapshot -r K7@snap2
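
All the snapshots created so far can be listed by passing the -t switch to zfs list:

zfs list -t snapshot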

Now, after a lot of changes to the filesystem, if you want to go back to a snapshot of the filesystem, issue the rollback command. The -r switch is required, as we have to remove the newer snapshot ‘snap2’ to roll back to ‘snap1’.

zfs rollback -r K7@snap1

Or, if the snapshot you are rolling back to is the newest of all the snapshots of the filesystem, the -r switch isn’t needed:

zfs rollback K7@snap2

As with the listing of pools, datasets (which include filesystems and snapshots) can be displayed using the command given below:

zfs list

The snapshots created can be easily transferred between pools, or even between systems, using the send and recv commands. The following command will create a new filesystem ‘K7_snap’ under ‘K7’ from the snapshot ‘snap1’:

zfs send K7@snap1 | zfs recv K7/K7_snap

The following command is the same as the one above, except that the new filesystem and snapshot will be created on a remote system named ‘sreejith’ (which must also have a pool named ‘K7’):

zfs send K7@snap1 | ssh root@sreejith zfs recv K7/K7_snap
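
Incremental transfers are also possible with the -i switch. Assuming ‘snap1’ has already been received on the remote side, the following sends only the differences between ‘snap1’ and ‘snap2’:

zfs send -i K7@snap1 K7@snap2 | ssh root@sreejith zfs recv K7/K7_snap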

As we know, ZFS is the native filesystem of Solaris, and if we want to migrate a pool from Solaris to another OS like Linux, we first have to export the pool from the OS in which it was being used, and then import it on the target OS.

zpool export K7

In order to forcefully export ‘K7’, we can use the -f switch with the above command:
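
zpool export -f K7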

The following command will display all importable pools with their name and ID:

zpool import

…and we can import a pool using its name (or even its ID) by issuing the command below:

zpool import K7
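
A pool can even be renamed while being imported, by supplying a new name after the old one (‘K7_new’ below is just a hypothetical example):

zpool import K7 K7_new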

And finally, the destroy command is used to destroy a pool or a filesystem. The following destroys the ‘ajc’ filesystem in ‘K7’:

zfs destroy K7/ajc
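
If ‘ajc’ had snapshots or child filesystems under it, the command above would fail; adding the -r switch destroys them recursively:

zfs destroy -r K7/ajc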

The next command destroys the ‘K7’ pool altogether:

zpool destroy K7

Though ZFS on FUSE manages to implement a lot of the features of native ZFS, it is still not complete, as the project’s status page points out. Since the implementation lives in userspace and is linked to the Linux kernel through the FUSE module, its performance and scalability, as of version 0.5, are not on par with the in-kernel implementations of other filesystems. Even then, the project is a nice way to get acquainted with the revolutionary ZFS on operating systems like Linux. It is expected, however, that a properly tuned ZFS on FUSE may perform comparably to native filesystems, as happened with NTFS-3G, the freely available (and commercially supported) read/write NTFS driver for Linux, FreeBSD, Mac OS X and others.
