Some random hints for ZFS users.
This assumes you are already fairly familiar with ZFS.
Mounting a pool from another system
Make sure you use the "altroot" property when importing the
pool, so the pool doesn't apply its own mountpoints for /etc, /usr
and so on and mount over those directories on the running system.
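For example, to pull in a pool named "tank" (the name is just for illustration) with everything re-rooted under /mnt:
zpool import -o altroot=/mnt tank
or, equivalently, the -R shorthand, which also sets cachefile=none for the import:
zpool import -R /mnt tank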
Creating a ZPool on a USB drive
If you're wondering "why put ZFS on your USB stick instead of
FAT32 or one of the usual Linux/BSD filesystems?", here are a few reasons:
1. ZFS will protect you from silent data corruption. This is
particularly likely for USB drives; portable magnetic drives have to
put up with lots of knocking around and vibration damage, and flash
drives can be more vulnerable to wearing out after repeated writes.
2. ZFS is the first POSIX-like FS which has good read-write cross-platform
compatibility between Linux/BSD/Mac. FAT32 has universal
read-write compatibility but no POSIX features (Unix permissions,
symlinks etc). All the Linux/BSD filesystems which have those features
won't work fully on the other OSes.
3. ZFS has some useful features for setting up a drive or pool which
will be used on multiple machines and administered by unprivileged
users; see the properties below.
For (1) above we need to make sure that we set ZFS to create as many
copies as possible; we do that with the copies property. A list of all
the useful properties for portable pools is below; if
you just want a command that combines them all:
zpool create -o cachefile=none -o delegation=on -O atime=off -O compression=lzjb \
-O copies=3 -O mountpoint=legacy -O setuid=off myusbtank /dev/usbdevice
Once the pool is created, you will want to export
it from the system before unplugging the drive; see the next section.
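To double-check that the properties took effect, query them back (pool properties with zpool get, dataset properties with zfs get); this assumes the example pool name above:
zpool get cachefile,delegation myusbtank
zfs get atime,compression,copies,mountpoint,setuid myusbtank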
Mounting and unmounting a USB drive
Assuming you've set it up properly, use zpool import/export to
mount/unmount the drive:
zpool import myusbtank
[if you used mountpoint=legacy, mount the dataset with the system mount command rather than zfs mount - see the sketch below]
zpool export myusbtank
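With a legacy mountpoint the full round trip looks something like this; the mountpoint /mnt/usb is just an example, pick whatever suits:
zpool import myusbtank
mount -t zfs myusbtank /mnt/usb
[use the drive]
umount /mnt/usb
zpool export myusbtank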
Useful properties for ZPools on USB or other portable drives
Pool Properties (set with zpool -o when the pool is created):
- cachefile=none: Do not cache this pool configuration when it is imported to a system.
- delegation=on: Allow delegation of admin to non-privileged
users, based on dataset access controls. These are set on the dataset
with "zfs allow ..." - see zfs(1) for more details.
Dataset Properties (set on the root dataset with zpool -O when the pool is
created, or other datasets with zfs -o):
- atime=off: Unless you need to store access times for some
particular reason, switch it off.
- canmount=noauto: Useful if you want to put a dataset on the
drive which is normally not mounted.
- compression=lzjb: Normally lz4 gives both faster and better
compression, but the versions of ZFS-Fuse in certain Linux distros
*cough* don't have it, so lzjb is best for a maximally compatible pool.
- copies=3: For a disk which is likely vulnerable to random
sector errors (i.e. virtually any portable drive) using multiple
copies is very important. This can be set lower if you need to
save space more than you need to save the data.
- Note that copies can be set per-dataset, so it is possible to have
a USB drive with a copies=3 dataset for your work and a copies=1
dataset for some videos you want to watch on the train (there's a
sketch of this after the list).
- mountpoint: Set where the dataset is mounted.
If you can pick a directory ahead of time which definitely won't clash with an
existing one, use that and the dataset will be mounted in the same
spot wherever you use it (e.g. mountpoint=/bobsHomeVideosDiskThree or
mountpoint=/home/dleigh/uniusbstick). Otherwise, set mountpoint to
"legacy" to choose the directory at mount time.
- setuid=off: Probably desirable for security.
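A minimal sketch of the per-dataset copies split mentioned above, assuming the pool was created with -O copies=3 as in the example command ("videos" is just an illustrative dataset name):
zfs create -o copies=1 myusbtank/videos
zfs get copies myusbtank myusbtank/videos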
Checksum errors on a Vdev but not the physical disk
This was a thread from freebsd-fs@ which seemed interesting enough to keep a copy of here.
> What does it mean when checksum errors appear on the array (and the
> vdev) but not on any of the disks? ...<snip>
> NAME                STATE    READ WRITE CKSUM
> vr2                 ONLINE      0     0    36
>   raidz1-0          ONLINE      0     0    72
>     label/vr2-d0    ONLINE      0     0     0
>     label/vr2-d1    ONLINE      0     0     0
> <snip remaining>
> errors: 43 data errors, use '-v' for a list
Alan Somers' reply:
The checksum errors will appear on the raidz vdev instead of a leaf
if vdev_raidz.c can't determine which leaf vdev was responsible. This
could happen if [more than the parity level] leaf vdevs return bad
data for the same block, which would also lead to unrecoverable data
errors. I see that you have some unrecoverable data errors, so maybe
that's what happened to you.
Subtle design bugs in ZFS can also lead to vdev_raidz.c being
unable to determine which child was responsible for a checksum error.
However, I've only seen that happen when a raidz vdev has a mirror
child. That can only happen if the child is a spare or replacing
vdev. Did you activate any spares, or did you manually replace a disk?
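If you hit something like this yourself, the usual starting point (pool name taken from the quoted output) is a scrub followed by a verbose status to list the affected files:
zpool scrub vr2
zpool status -v vr2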
Copyright © 2002-2014 Dylan Leigh.