mergerfs
September 6, 2019 — 18:36

Author: silver  Category: linux storage

A union filesystem (FUSE), like unionfs, aufs and mhddfs. It merges multiple paths and mounts them as one, similar to concatenating the drives.

Get it here: https://github.com/trapexit/mergerfs or from OS package repository.

Compared to the (older) alternatives, mergerfs has been very stable over the months I’ve been using it. It offers multiple policies for how data is spread over the underlying drives.
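For example, the create policy decides which branch a new file is written to; a minimal sketch using the documented category.create option (paths are placeholders):

# mfs = place new files on the branch with the most free space
mergerfs -o defaults,allow_other,category.create=mfs /mnt/disk1:/mnt/disk2 /mnt/pool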

Optionally, SnapRAID can be used to add parity disk(s) for protection against disk failures (https://www.snapraid.it).
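A minimal snapraid.conf sketch to go with such a pool (parity/content locations and disk names are placeholders; the data dirs match the branch layout used below):

# /etc/snapraid.conf
parity /mnt/parity1/snapraid.parity        # dedicated parity disk, not part of the pool
content /var/snapraid/snapraid.content     # keep at least one content file off the data disks
content /mnt/sdb1/snapraid.content
disk d1 /mnt/sdb1/mfs
disk d2 /mnt/sdc1/mfs

Run 'snapraid sync' (e.g. from cron) to update parity.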

Create/mount pool

Example using 5 devices /dev/sd[b-f]

Disks are already partitioned and have a filesystem.

for i in {b..f}; do
  mkdir /mnt/sd${i}1                  # mountpoint for each disk
  mount /dev/sd${i}1 /mnt/sd${i}1 && \
  mkdir /mnt/sd${i}1/mfs              # subdirectory used as the mergerfs branch
done && \
mkdir /mnt/mergerfs && \
# quote the glob so mergerfs expands it itself (branches are otherwise colon-separated)
mergerfs -o defaults,allow_other,use_ino '/mnt/sd*/mfs' /mnt/mergerfs

And here’s the result from ‘df’:

/dev/mapper/sdb1             3.6T  100M  3.5T  1% /mnt/sdb1
/dev/mapper/sdc1             3.6T  100M  3.5T  1% /mnt/sdc1
/dev/mapper/sdd1             3.6T  100M  3.5T  1% /mnt/sdd1
/dev/mapper/sde1             3.6T  100M  3.5T  1% /mnt/sde1
/dev/mapper/sdf1             3.6T  100M  3.5T  1% /mnt/sdf1
mergerfs                      18T  500M  8.5T  1% /mnt/mergerfs
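To mount the pool at boot, an /etc/fstab entry can be used instead; a sketch based on the command above (in fstab the glob is expanded by mergerfs, no shell quoting needed):

/mnt/sd*/mfs  /mnt/mergerfs  fuse.mergerfs  defaults,allow_other,use_ino  0  0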

Changing pool

Remove an old drive from the mergerfs pool:

xattr -w user.mergerfs.srcmounts -/mnt/data1 /mnt/pool/.mergerfs

Add a new drive:

xattr -w user.mergerfs.srcmounts +/mnt/data4 /mnt/pool/.mergerfs
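If the xattr tool isn’t available, setfattr (from the attr package) should do the same against the control file; a sketch with the same placeholder paths:

setfattr -n user.mergerfs.srcmounts -v '-/mnt/data1' /mnt/pool/.mergerfs
setfattr -n user.mergerfs.srcmounts -v '+/mnt/data4' /mnt/pool/.mergerfs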

Some other mount options (-o):

  • use_ino: have mergerfs supply inodes
  • fsname=example-name: name shown in df
  • no_splice_write: fixes page errors in syslog

https://github.com/trapexit/mergerfs#mount-options

Pool info

xattr -l /mnt/mergerfs/.mergerfs
# or:
mergerfs.ctl -m /mnt/mergerfs list values

mergerfs.ctl -m /mnt/mergerfs info
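Likewise with getfattr, if xattr isn’t installed; a sketch:

getfattr -n user.mergerfs.srcmounts /mnt/mergerfs/.mergerfs
getfattr -d -m 'user.mergerfs' /mnt/mergerfs/.mergerfs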

ZFS
March 21, 2015 — 15:40

Author: silver  Category: storage

New zpool:

zpool create data /dev/aacd0p1.eli    # geli-encrypted provider as the pool's vdev
zpool add data cache ada1p2           # L2ARC read cache
zpool add data log ada1p1             # separate log (ZIL/SLOG) device
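Check the resulting layout:

zpool status data
zpool list data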

Tuning (bsd):

/boot/loader.conf:

zfs_load="YES"

1G kmem:

vm.kmem_size_max="1073741824"
vm.kmem_size="1073741824"

330M kmem (low memory):

vm.kmem_size="330M"
vm.kmem_size_max="330M"

vfs.zfs.arc_max="40M"
vfs.zfs.vdev.cache.size="5M"

Send/receive using SSH:
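A minimal sketch of the usual pattern (snapshot, dataset and host names are placeholders):

zfs snapshot data@backup1
zfs send data@backup1 | ssh backuphost zfs receive -F tank/data

# later: send only the changes since the previous snapshot
zfs snapshot data@backup2
zfs send -i data@backup1 data@backup2 | ssh backuphost zfs receive tank/data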

LVM
July 9, 2014 — 15:09

Author: silver  Category: linux storage

Resize:

vgextend vg_name /dev/sdb1                    # add a new PV to the volume group
lvcreate -n lv_pstorage -l 100%FREE VolGroup  # new LV using all free extents
lvresize --size -8G /dev/VolGroup/lv_root     # shrinking: reduce the filesystem first
lvresize --size -35G /dev/VolGroup/lv_vz
lvresize --size -5G /dev/VolGroup/lv_pstorage
lvresize --size +5G /dev/VolGroup/lv_vz
lvextend -l +100%FREE /dev/centos/data        # grow into all remaining free space

(after extending, grow the filesystem with resize2fs)
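For example, after the lvextend above (assuming an ext4 filesystem on that LV):

resize2fs /dev/centos/data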

Rescue:

Boot your rescue media.
Scan for volume groups:

# lvm vgscan -v

Activate all volume groups:

# lvm vgchange -a y

List logical volumes:

# lvm lvs --all

With this information, and the volumes activated, you should be able to mount the volumes:

# mount /dev/volumegroup/logicalvolume /mountpoint