HOWTO install Ubuntu to a Native ZFS Root Filesystem
This page is a fork of the upstream instructions.
These instructions are for Ubuntu. The procedure for Debian, Mint, or other distributions in the DEB family is similar but not identical.
Warnings
- This HOWTO uses a whole physical disk.
- Do not use these instructions for dual-booting.
- Back up your data. Any existing data on the disk will be lost.
System Requirements
- 64-bit Ubuntu 13.04 Raring Live CD (not the alternate installer)
- 64-bit computer (amd64, a.k.a. x86_64)
- 8 GB disk storage available
- 2 GiB memory minimum
Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory is recommended for normal performance in basic workloads. If you wish to use deduplication, you will need massive amounts of RAM; a common rule of thumb is several GiB of RAM for every TiB of deduplicated data. Enabling deduplication is a permanent change that cannot be easily reverted.
Latest Tested and Recommended Versions
- Ubuntu 13.04 Raring Ringtail
- spl-0.6.1
- zfs-0.6.1
Step 1: Prepare The Install Environment
1.1 Boot the Ubuntu Live CD, select Try Ubuntu, and open a terminal.
1.2 Become root:
$ sudo -i
1.3 (Optional) Install openssh-server. If you are installing to a system that is not your daily driver (e.g., a storage server in your closet, or a VMware guest), it can be helpful to install an SSH server to allow remote access, which also makes it possible to copy and paste commands from these instructions. You must also set the password for the ubuntu user, and use the ifconfig command to get the IP address of the server:
# apt-get --yes install openssh-server
# passwd ubuntu
# ifconfig | grep inet
If you see "start: Unknown job: ssh", run the following:
# initctl reload-configuration
# service ssh start
1.4 Install ZFS in the Live CD environment:
# apt-add-repository --yes ppa:zfs-native/stable
# apt-get update
# apt-get install --yes debootstrap ubuntu-zfs
Step 2: Disk Formatting
2.1 Create the root pool on your disk:
# zpool create -f -o ashift=9 \
-O atime=off -O canmount=off -O compression=on -O mountpoint=/ -O normalization=formD rpool \
/dev/disk/by-id/scsi-SATA_disk1
Always use the long /dev/disk/by-id/* aliases with ZFS. Using the /dev/sd* device nodes directly can cause sporadic import failures, especially on systems that have more than one storage pool.
This tutorial deviates from the upstream instructions by recommending GPT disk labels instead of (legacy) MBR partitioning. If you have problems booting, it could be due to UEFI firmware bugs.
Hints:
- ls -la /dev/disk/by-id will list the aliases.
- Are you doing this in a virtual machine? If your virtual disk is missing from /dev/disk/by-id, read the troubleshooting section.
- The root pool can be a mirror. For example: zpool create ... rpool mirror /dev/disk/by-id/scsi-SATA_disk1 /dev/disk/by-id/scsi-SATA_disk2
- The pool name is arbitrary. On systems that can automatically install to ZFS, the root pool is named "rpool" by default.
Warning: Do not use lz4 compression. GRUB does not support it at this time.
Warning: The grub2-1.99 package currently published in the PPA for Precise does not reliably handle a 4k block size, which is ashift=12. That's why ashift=9 is specified above.
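Hint: Before exporting the pool, you can confirm the ashift it actually received. This is a quick sanity check; zdb reads the cached pool configuration, so it must be run while the pool is still imported:
# zdb -C rpool | grep ashift
The output should include a line like "ashift: 9".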
2.2 Export the pool:
# zpool export rpool
2.3 Create a BIOS Boot Partition:
# parted -s /dev/disk/by-id/scsi-SATA_disk1 mkpart GRUB2 48s 2047s
Ignore the error about alignment.
# parted -s /dev/disk/by-id/scsi-SATA_disk1 set 2 bios_grub on
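To double-check the resulting partition layout before continuing, print the disk label (the exact output depends on your disk):
# parted /dev/disk/by-id/scsi-SATA_disk1 print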
Step 3: System Installation
3.1 Import the pool:
# zpool import -d /dev/disk/by-id -R /mnt rpool
This implicitly and temporarily mounts the new filesystem hierarchy on /mnt.
3.2 Create a "ROOT" filesystem in the root pool:
# zfs create -o mountpoint=none rpool/ROOT
3.3 Create a descendant filesystem for the Ubuntu system:
# zfs create -o mountpoint=/ rpool/ROOT/ubuntu
On Solaris systems, the root filesystem is cloned and the suffix is incremented for major system changes through pkg image-update or beadm. Similar functionality for APT is possible but currently unimplemented for Ubuntu.
3.4 Set the bootfs property on the root pool:
# zpool set bootfs=rpool/ROOT/ubuntu rpool
The boot loader uses this property to find and start the operating system. The "bootfs" property name is not arbitrary.
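You can verify that the property was set correctly; the output should look something like this:
# zpool get bootfs rpool
NAME   PROPERTY  VALUE              SOURCE
rpool  bootfs    rpool/ROOT/ubuntu  local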
3.5 Create datasets for improved security and manageability:
# zfs create -o setuid=off rpool/home
# zfs create -o mountpoint=/root rpool/home/root
# zfs create rpool/srv
# zfs create -o canmount=off -o exec=off -o setuid=off rpool/var
# zfs create rpool/var/cache
# zfs create rpool/var/log
# zfs create rpool/var/mail
# zfs create rpool/var/spool
# zfs create -o exec=on rpool/var/tmp
# zfs create -o compression=off rpool/var/cache/apt
Warning: Do not create a ZFS dataset for /tmp. The system will crash on boot.
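To review the resulting layout, list the new datasets with their mountpoints (a quick sanity check; output omitted here):
# zfs list -o name,mountpoint,canmount -r rpool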
3.6 Install the minimal system:
# chmod 1777 /mnt/var/tmp
# debootstrap raring /mnt
# zfs set devices=off rpool
The debootstrap command leaves the new system in an unconfigured state. An alternative to using debootstrap is to copy the entirety of a working Ubuntu system into the new ZFS root.
Step 4: System Configuration
4.1 Configure the hostname (change "ubuntu" to the desired hostname).
# cp /etc/hosts /mnt/etc/
# vi /mnt/etc/hostname
# vi /mnt/etc/hosts
Hint: Use nano if you find vi confusing.
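As a sketch, assuming the hostname "ubuntu": /mnt/etc/hostname should contain the single line
ubuntu
and /mnt/etc/hosts should map that name to the loopback address, conventionally like this on Debian-family systems:
127.0.1.1       ubuntu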
4.2 Edit the /mnt/etc/network/interfaces file so that it contains something like this:
# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
Customize this file if the new system is not a DHCP client on the LAN.
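For example, a static configuration would replace the eth0 stanza with something like this (the addresses below are placeholders for your network):
auto eth0
iface eth0 inet static
address 192.168.1.50
netmask 255.255.255.0
gateway 192.168.1.1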
4.3 Bind the virtual filesystems from the LiveCD environment to the new system and chroot into it:
# mount --rbind /dev /mnt/dev
# mount --rbind /proc /mnt/proc
# mount --rbind /sys /mnt/sys
# chroot /mnt /bin/bash --login
Note: This is using --rbind, not --bind, so that submounts such as /dev/pts are also carried into the chroot. If you use --bind on /dev instead, you will get errors about /dev/pts later.
4.4 Configure a basic system environment:
# locale-gen en_US.UTF-8
Even if you prefer a non-English system language, always ensure that en_US.UTF-8 is available.
# dpkg-reconfigure tzdata
# vi /etc/apt/sources.list
deb http://archive.ubuntu.com/ubuntu raring main
deb http://security.ubuntu.com/ubuntu raring-security main
deb http://archive.ubuntu.com/ubuntu raring-updates main
# apt-get update
# apt-get install --yes ubuntu-minimal software-properties-common wget bsd-mailx patch
4.5 Install ZFS in the chroot environment for the new system:
Note: The stable ppa now has support for mirrored root pools.
Note: There is a bug when using a mirrored root pool (and possibly raid-zX?) that will prevent the system from booting properly. See #96. Additional steps are required if a mirrored root pool is used.
# apt-add-repository --yes ppa:zfs-native/stable
# apt-add-repository --yes ppa:zfs-native/grub
# apt-get update
# apt-get install --yes --no-install-recommends linux-generic linux-headers-generic
Ignore this error:
df: cannot read table of mounted file systems: No such file or directory
# apt-get install --yes ubuntu-zfs
# apt-get install --yes grub-pc zfs-initramfs
Warning: This is the second time that you must wait for the SPL and ZFS modules to compile. Do not try to skip this step by copying anything from the host environment into the chroot environment.
Hint: For a virtual machine, you can save a little disk space by installing linux-virtual and linux-headers-virtual in place of the regular -generic kernel packages.
4.6 Configure ZFS monitoring:
# wget -q -O /etc/cron.hourly/zfs-check https://github.com/rlaager/zfs/wiki/zfs-check.sh
# wget -q -O /etc/cron.monthly/zfs-scrub https://github.com/rlaager/zfs/wiki/zfs-scrub.sh
# chmod 755 /etc/cron.hourly/zfs-check /etc/cron.monthly/zfs-scrub
4.7 Configure user access to read-only ZFS commands via sudo:
# wget -q -O /etc/sudoers.d/zfs https://github.com/rlaager/zfs/wiki/zfs.sudoers
# chmod 440 /etc/sudoers.d/zfs
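For reference, a read-only ZFS sudo policy typically contains rules like the following. This is an illustrative sketch only, not the exact contents of the downloaded file:
%sudo ALL=(root) NOPASSWD: /sbin/zfs list*, /sbin/zfs get*, /sbin/zpool status*, /sbin/zpool list*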
4.8 Add a user account on the new system:
# addgroup --system fuse
# addgroup --system lpadmin
# addgroup --system sambashare
# addgroup --system scanner
# zfs create rpool/home/YOURUSERNAME
# adduser YOURUSERNAME
# cp -a /etc/skel/.[!.]* /home/YOURUSERNAME
# chown -R YOURUSERNAME:YOURUSERNAME /home/YOURUSERNAME
# usermod -a -G adm,audio,dialout,dip,cdrom,floppy,fuse,lpadmin,plugdev,sambashare,scanner,sudo,video YOURUSERNAME
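You can confirm that the home dataset and group memberships took effect:
# zfs list rpool/home/YOURUSERNAME
# id YOURUSERNAME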
Step 5: GRUB Installation
The grub-common and grub-pc packages from the zfs-grub repository should already be installed, but automatic configuration may fail. Manually install GRUB again to be certain that the system is bootable.
5.1 Verify that the ZFS root filesystem is recognized:
# grub-probe /
zfs
5.2 Verify that the ZFS modules are installed:
# ls /boot/grub/zfs*
/boot/grub/zfs.mod /boot/grub/zfsinfo.mod
In newer versions of Ubuntu, these are in /boot/grub/i386-pc/zfs*
# ls /boot/grub/i386-pc/zfs*
/boot/grub/i386-pc/zfs.mod /boot/grub/i386-pc/zfsinfo.mod
Otherwise, check the troubleshooting notes for GRUB below.
5.3 Refresh the initrd files:
# update-initramfs -c -k all
update-initramfs: Generating /boot/initrd.img-3.2.0-48-generic
5.4 Optional (but highly recommended): Make debugging GRUB easier:
# vi /etc/default/grub
Comment out GRUB_HIDDEN_TIMEOUT=0
Remove quiet and splash from GRUB_CMDLINE_LINUX_DEFAULT
Uncomment GRUB_TERMINAL=console
Save and quit.
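After the edits, the relevant lines should look something like this (any other settings in the file can stay as they are):
#GRUB_HIDDEN_TIMEOUT=0
GRUB_CMDLINE_LINUX_DEFAULT=""
GRUB_TERMINAL=console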
Later, once the system has rebooted twice and you're sure everything is working, you can undo these changes, if desired.
5.5 Update the boot configuration:
# rm /boot/grub/grubenv
# wget -q -O - https://github.com/rlaager/zfs/wiki/00_header.diff | patch /etc/grub.d/00_header
# update-grub
Generating grub.cfg ...
Found linux image: /boot/vmlinuz-3.2.0-48-generic
Found initrd image: /boot/initrd.img-3.2.0-48-generic
done
5.6 Install the boot loader to the MBR like this:
# grub-install $(readlink -f /dev/disk/by-id/scsi-SATA_disk1)
Installation finished. No error reported.
Note: The readlink is required because recent GRUB releases do not dereference symlinks.
Do not reboot the computer until you get exactly that result message. Note that you are installing the loader to the whole disk, not a partition.
If you're creating a mirror, repeat the grub-install command for each disk in the pool.
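For example, with the two-disk mirror suggested in step 2.1:
# grub-install $(readlink -f /dev/disk/by-id/scsi-SATA_disk1)
# grub-install $(readlink -f /dev/disk/by-id/scsi-SATA_disk2)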
Step 6: Cleanup and First Reboot
6.1 Snapshot the initial installation
# zfs snapshot rpool/ROOT/ubuntu@install
In the future, you'll likely want to take snapshots before each upgrade, and remove old snapshots (including this one) at some point to save space.
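For example, before a future upgrade (the snapshot names are up to you):
# zfs snapshot rpool/ROOT/ubuntu@pre-upgrade
# zfs destroy rpool/ROOT/ubuntu@install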
6.2 Exit from the chroot
environment back to the LiveCD environment:
# exit
6.3 Run these commands in the LiveCD environment to unmount all filesystems:
# service upstart-udev-bridge stop
# umount /mnt/dev/pts
# umount /mnt/dev
# umount /mnt/proc
# umount /mnt/sys/fs/fuse/connections
# umount /mnt/sys/kernel/debug
# umount /mnt/sys/kernel/security
# umount /mnt/sys
# umount /mnt
# zpool export rpool
If some of these commands fail, just keep going. The cause is still under investigation.
6.4 Reboot:
# reboot
6.5 Wait for the newly installed system to boot normally. Log in using the account you created.
6.6 Configure swap.
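One common approach, sketched here, is to put swap on a ZFS volume (zvol); the 2G size and property choices are illustrative, so adjust them for your workload:
$ sudo zfs create -V 2G -b $(getconf PAGESIZE) -o compression=off rpool/swap
$ sudo mkswap -f /dev/zvol/rpool/swap
$ sudo sh -c 'echo "/dev/zvol/rpool/swap none swap defaults 0 0" >> /etc/fstab'
$ sudo swapon -a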
6.7 Upgrade and install software:
$ sudo apt-get dist-upgrade --yes
$ sudo apt-get install --yes ubuntu-standard
or, for a full GUI environment:
$ sudo apt-get install --yes ubuntu-desktop
Caveats and Known Problems
This is an experimental system configuration.
The upstream version of this document was first published in 2010 to demonstrate that the lzfs implementation made ZoL 0.5 feature complete. Upstream integration efforts began in 2012, and it will be at least a few more years before this kind of configuration is even minimally supported.
Gentoo, and its derivatives, are the only Linux distributions that are currently mainlining support for a ZoL root filesystem.
zpool.cache inconsistencies cause random pool import failures.
The /etc/zfs/zpool.cache file embedded in the initrd for each kernel image must be the same as the /etc/zfs/zpool.cache file in the regular system. Run update-initramfs -c -k all after any /sbin/zpool command changes the /etc/zfs/zpool.cache file.
This will be a recurring problem until issue zfsonlinux/zfs#330 is resolved.
Every upgrade can break the system.
Ubuntu systems remove old dkms modules before installing new dkms modules. If the system crashes or restarts during a ZoL module upgrade, which is a failure window of several minutes, then the system becomes unbootable and must be rescued.
This will be a recurring problem until issue zfsonlinux/pkg-zfs#12 is resolved.
Troubleshooting
MPT2SAS
Most problem reports for this tutorial involve mpt2sas hardware that does slow asynchronous drive initialization, like some IBM M1015 or OEM-branded cards that have been flashed to the reference LSI firmware.
The basic problem is that disks on these controllers are not visible to the Linux kernel until after the regular system is started, and ZoL does not hotplug pool members. See https://github.com/zfsonlinux/zfs/issues/330.
Most LSI cards are perfectly compatible with ZoL, but there is no known fix if your card has this glitch. Please use different equipment until the mpt2sas incompatibility is diagnosed and fixed, or donate an affected part if you want a solution sooner.
Areca
Systems that require the arcsas blob driver should add it to the /etc/initramfs-tools/modules file and run update-initramfs -c -k all.
Upgrade or downgrade the Areca driver if something like RIP: 0010:[<ffffffff8101b316>] [<ffffffff8101b316>] native_read_tsc+0x6/0x20 appears anywhere in the kernel log. ZoL is unstable on systems that emit this error message.
GRUB
Verify that the PPA for the ZFS enhanced GRUB is installed:
# apt-add-repository ppa:zfs-native/grub
# apt-get update
Reinstall the zfs-grub package, which is an alias for a patched grub-common package:
# apt-get install --reinstall zfs-grub
Afterwards, this should happen:
# apt-cache search zfs-grub
grub-common - GRand Unified Bootloader (common files)
# apt-cache show zfs-grub
N: Can't select versions from package 'zfs-grub' as it is purely virtual
N: No packages found
# apt-cache policy grub-common zfs-grub
grub-common:
Installed: 1.99-21ubuntu3.9+zfs1~precise1
Candidate: 1.99-21ubuntu3.9+zfs1~precise1
Version table:
*** 1.99-21ubuntu3.9+zfs1~precise1 0
1001 http://ppa.launchpad.net/zfs-native/grub/ubuntu/precise/main amd64 Packages
100 /var/lib/dpkg/status
1.99-21ubuntu3 0
1001 http://us.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages
zfs-grub:
Installed: (none)
Candidate: (none)
Version table:
For safety, grub modules are never updated by the packaging system after initial installation. Manually refresh them by doing this:
# cp /usr/lib/grub/i386-pc/*.mod /boot/grub/
If the problem persists, then open a bug report and attach the entire output of those apt-get commands.
Packages in the GRUB PPA are compiled against the stable PPA. Systems that run the daily PPA may experience failures if the ZoL library interface changes.
Note that GRUB does not currently dereference symbolic links in a ZFS filesystem, so you cannot use the /vmlinuz or /initrd.img symlinks as GRUB command arguments.
VMware
- Set disk.EnableUUID = "TRUE" in the vmx file or vSphere configuration. Doing this ensures that /dev/disk aliases are created in the guest.
QEMU/KVM/XEN
- In the /etc/default/grub file, enable the GRUB_TERMINAL=console line and remove the splash option from the GRUB_CMDLINE_LINUX_DEFAULT line. Plymouth can cause boot errors in these virtual environments that are difficult to diagnose.
- Set a unique serial number on each virtual disk. (e.g., -drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890)
Kernel Parameters
The zfs-initramfs package requires that boot=zfs always be on the kernel command line. If the boot=zfs parameter is not set, then the init process skips the ZFS routine entirely. This behavior is for safety; it makes the casual installation of the zfs-initramfs package unlikely to break a working system.
ZFS properties can be overridden on the kernel command line with rpool and bootfs arguments. For example, at the GRUB prompt:
linux /ROOT/ubuntu/@/boot/vmlinuz-3.2.0-48-generic boot=zfs rpool=AltPool bootfs=AltPool/ROOT/foobar-3
System Recovery
If the system randomly fails to import the root filesystem pool, then do this at the initramfs recovery prompt:
# zpool export rpool
: now export all other pools too
# zpool import -d /dev/disk/by-id -f -N rpool
: now import all other pools too
# mount -t zfs -o zfsutil rpool/ROOT/ubuntu /root
: do not mount any other filesystem
# cp /etc/zfs/zpool.cache /root/etc/zfs/zpool.cache
# exit
This refreshes the /etc/zfs/zpool.cache file. The zpool command emits spurious error messages regarding missing or corrupt vdevs if the zpool.cache file is stale or otherwise incorrect.