Ambrosia
Ambrosia (hostname: ambrosia.ucsc.edu) is a file server for the Hyades cluster, acquired in November 2013. Its hardware specifications are as follows:
- Two quad-core Intel Xeon E5-2609V2 CPUs at 2.5GHz
- 64GB (8 x 8GB) RAM at 1600MHz
- Two 120GB Intel 320 Series SSDs
- 36 Western Digital RE WD4001FYYG 4TB SAS 6Gb/s nearline hard drives at 7200RPM (32MB Cache)
- LSI MegaRAID SAS 9271-8i 6Gb/s SAS and SATA RAID Controller Card, with 1GB cache
- Intel 10 Gigabit Network Adapter X520-DA2 (E10G42BTDA), with dual SFP+ ports
- Intel I350 Gigabit Ethernet Controller, with dual 1000Base-T ports
- 4U Supermicro Chassis 847BE16-R1K28B
| Subnet | IP Address | Netmask | Gateway |
|---|---|---|---|
| Public 10GbE | 128.114.126.229 | 255.255.255.224 | 128.114.126.238 |
| Private GbE | 10.6.7.1 | 255.255.0.0 | |
| Private 10GbE | 10.7.7.1 | 255.255.0.0 | |
| IPMI | 10.9.7.1 | 255.255.0.0 | |
In November 2013, we installed FreeBSD 9.2 on a root ZFS filesystem mirrored across the two Intel SSDs, closely following the instructions in the FreeBSD wiki article Installing FreeBSD 9.0 (or later) Root on ZFS using GPT.
Notes:
- The instructions in Installing FreeBSD Root on ZFS (Mirror) using GPT apply only to FreeBSD 8.x and are therefore outdated.
- At the time, FreeBSD could not boot from a ZFS root under UEFI, so we boot under legacy BIOS, but with GPT rather than MBR partitioning.
Download FreeBSD-9.2-RELEASE-amd64-memstick.img (the memory stick image of FreeBSD 9.2 for x86_64) from ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/amd64/ISO-IMAGES/9.2/
Write the image to a USB stick. On my iMac (running OS X Mavericks 10.9), I did the following:
```
$ diskutil list
$ diskutil umount /dev/disk2s2
$ sudo dd if=/dev/zero of=/dev/disk2 bs=64k count=10
$ sudo dd if=FreeBSD-9.2-RELEASE-amd64-memstick.img of=/dev/disk2 bs=64k
```
Boot FreeBSD from the USB stick.
Go through the initial setup as usual[1].
When the partitioning dialogue in bsdinstall comes up, choose the Shell option.
Create a new GPT (GUID Partition Table) on each of the two Intel SSDs:
```
# camcontrol devlist
<INTEL SSDSA2CW120G3 4PC10362>  at scbus1 target 0 lun 0 (ada0,pass0)
<INTEL SSDSA2CW120G3 4PC10362>  at scbus2 target 0 lun 0 (ada1,pass1)
# gpart destroy -F ada0
# gpart destroy -F ada1
# gpart create -s gpt ada0
# gpart create -s gpt ada1
```
Add partitions for the boot loader, swap, and ZFS, then install the protective MBR and the gptzfsboot boot loader. All partitions are aligned to 4k for optimal performance with advanced format drives.
```
# gpart add -s 222 -a 4k -t freebsd-boot -l boot0 ada0
# gpart add -s 8g -a 4k -t freebsd-swap -l swap0 ada0
# gpart add -a 4k -t freebsd-zfs -l disk0 ada0
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
# gpart add -s 222 -a 4k -t freebsd-boot -l boot1 ada1
# gpart add -s 8g -a 4k -t freebsd-swap -l swap1 ada1
# gpart add -a 4k -t freebsd-zfs -l disk1 ada1
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
```
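As a quick sanity check (not part of the original walkthrough), the resulting partition tables and GPT labels can be listed with gpart; each SSD should show the freebsd-boot, freebsd-swap, and freebsd-zfs partitions with the labels assigned above:
```
# gpart show -l ada0
# gpart show -l ada1
```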
Note: The Intel 320 Series SSDs don't appear to be advanced format drives:
```
# camcontrol identify ada0 | grep "sector size"
sector size           logical 512, physical 512, offset 0
```
but using a 4k sector size probably doesn't hurt.
Create virtual devices with 4k sectors, to ensure ZFS uses the correct block size:
```
# gnop create -S 4096 /dev/gpt/disk0
# gnop create -S 4096 /dev/gpt/disk1
```
Load the necessary kernel modules:
```
# kldload zfs
```
Create the ZFS pool:
```
# zpool create -o altroot=/mnt -O canmount=off -m none zroot mirror /dev/gpt/disk0.nop /dev/gpt/disk1.nop
```
This creates a zpool called zroot that is not itself mounted; it serves only as the parent from which the other filesystems are derived.
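Before building the filesystem hierarchy, it may be worth confirming that the pool really came up as a two-way mirror of the .nop devices; a minimal check:
```
# zpool status zroot
# zpool list zroot
```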
Create ZFS filesystem hierarchy:
```
# zfs set checksum=fletcher4 zroot
# zfs set atime=off zroot
# zfs create -o mountpoint=none zroot/ROOT
# zfs create -o mountpoint=/ zroot/ROOT/default
# zfs create -o mountpoint=/home -o setuid=off zroot/home
# zfs create -o mountpoint=/tmp -o compression=lz4 -o setuid=off zroot/tmp
# chmod 1777 /mnt/tmp
# zfs create -o mountpoint=/usr zroot/usr
# zfs create zroot/usr/local
# zfs create zroot/usr/obj
# zfs create -o compression=lz4 -o setuid=off zroot/usr/ports
# zfs create -o compression=off -o exec=off -o setuid=off zroot/usr/ports/distfiles
# zfs create -o compression=off -o exec=off -o setuid=off zroot/usr/ports/packages
# zfs create -o compression=lz4 -o exec=off -o setuid=off zroot/usr/src
# zfs create -o mountpoint=/var zroot/var
# zfs create -o compression=lz4 -o exec=off -o setuid=off zroot/var/crash
# zfs create -o exec=off -o setuid=off zroot/var/db
# zfs create -o compression=lz4 -o exec=on -o setuid=off zroot/var/db/pkg
# zfs create -o exec=off -o setuid=off zroot/var/empty
# zfs create -o compression=lz4 -o exec=off -o setuid=off zroot/var/log
# zfs create -o compression=gzip -o exec=off -o setuid=off zroot/var/mail
# zfs create -o exec=off -o setuid=off zroot/var/run
# zfs create -o compression=lz4 -o exec=on -o setuid=off zroot/var/tmp
# chmod 1777 /mnt/var/tmp
```
Notes:
- The fletcher4 checksum algorithm is more robust than the old default fletcher2 algorithm.
- Setting atime=off avoids writing a metadata change every time a file is accessed, which would otherwise be a serious performance penalty.
- Compression may be set to on, off, lzjb, gzip, or gzip-N, where N is an integer from 1 (fastest) to 9 (best compression ratio); gzip is equivalent to gzip-6.
- On FreeBSD 8.4 and 9.2 or later, lz4 compression is also supported, providing the best trade-off: significantly faster compression and decompression with moderately higher compression ratios. (A quick way to verify these settings is shown just after this list.)
- During installation, the new root file system is mounted under /mnt.
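As referenced above, here is a verification sketch (not part of the original walkthrough) for the checksum, atime, and compression settings; properties not set explicitly are inherited from zroot, and compressratio only becomes meaningful once some data has been written:
```
# zfs get checksum,atime,compression zroot zroot/usr/src zroot/var/log
# zfs get compressratio zroot/usr/src
```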
Set the pool's bootfs property so the boot loader knows which dataset to boot from:
```
# zpool set bootfs=zroot/ROOT/default zroot
```
Add the swap devices to fstab, so that they will automatically show up when the system starts:
```
# cat << EOF > /tmp/bsdinstall_etc/fstab
# Device           Mountpoint   FStype   Options   Dump   Pass#
/dev/gpt/swap0     none         swap     sw        0      0
/dev/gpt/swap1     none         swap     sw        0      0
EOF
```
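These entries take effect on the first boot of the installed system; at that point the swap devices can be confirmed with:
```
# swapinfo
```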
Once all of the ZFS filesystems have been created, type exit in the shell and proceed with the installation as normal. When prompted to 'Choose the services you would like to be started at boot', I chose sshd, ntpd, & powerd.
Once the installation is complete, choose Exit from the main menu.
The next dialogue will offer the option to 'open a shell in the new system to make any final manual modifications'. Select Yes.
Configure ZFS:
```
# mount -t devfs devfs /dev
# echo 'zfs_enable="YES"' >> /etc/rc.conf
# echo 'zfs_load="YES"' >> /boot/loader.conf
```
Set /var/empty to read-only; it is supposed to be empty at all times:
```
# zfs set readonly=on zroot/var/empty
```
To finish the installation, exit the shell, remove the USB stick and choose the Reboot option from the next dialogue.
The next step was to create ZFS filesystems on the 36 4TB SAS hard drives. Those drives are controlled by a hardware RAID controller, namely an LSI MegaRAID SAS 9271-8i 6Gb/s SAS and SATA RAID Controller Card:
```
# mfiutil show adapter
mfi0 Adapter:
    Product Name: LSI MegaRAID SAS 9271-8i
   Serial Number: SV34127542
        Firmware: 23.18.0-0014
     RAID Levels: JBOD, RAID0, RAID1, RAID5, RAID6, RAID10, RAID50
  Battery Backup: present
           NVRAM: 32K
  Onboard Memory: 1024M
  Minimum Stripe: 8k
  Maximum Stripe: 1M
```
However, ZFS prefers to manage the drives directly, since it provides its own software RAID through its RAID-Z and mirroring organization schemes. We worked around this by creating a single-drive RAID0 volume for each disk.
List all hard drives controlled by the LSI MegaRAID card:
```
# mfiutil show drives
```
Create a RAID0 volume for each of the 24 drives in the front of the chassis:
```
# mfiutil create jbod -v e1:s0 e1:s1 e1:s2 e1:s3 e1:s4 e1:s5 \
    e1:s6 e1:s7 e1:s8 e1:s9 e1:s10 e1:s11 \
    e1:s12 e1:s13 e1:s14 e1:s15 e1:s16 e1:s17 \
    e1:s18 e1:s19 e1:s20 e1:s21 e1:s22 e1:s23
```
which created 24 RAID0 volumes, mfid0 – mfid23.
Create a RAID0 volume for each of the 12 drives in the rear of the chassis:
```
# mfiutil create jbod -v e2:s0 e2:s1 e2:s2 e2:s3 e2:s4 e2:s5 \
    e2:s6 e2:s7 e2:s8 e2:s9 e2:s10 e2:s11
```
which created 12 RAID0 volumes, mfid24 – mfid35.
Create a zpool with RAIDZ-2 on the 24 front drives:
```
# zpool create -m none ztrove raidz2 \
    mfid0 mfid1 mfid2 mfid3 mfid4 mfid5 \
    mfid6 mfid7 mfid8 mfid9 mfid10 mfid11 \
    mfid12 mfid13 mfid14 mfid15 mfid16 mfid17 \
    mfid18 mfid19 mfid20 mfid21 mfid22 mfid23
```
Create a zpool with RAIDZ-2 on the 12 rear drives:
```
# zpool create -m none zhome raidz2 \
    /dev/mfid24 /dev/mfid25 /dev/mfid26 /dev/mfid27 /dev/mfid28 /dev/mfid29 \
    /dev/mfid30 /dev/mfid31 /dev/mfid32 /dev/mfid33 /dev/mfid34 /dev/mfid35
```
Create ZFS filesystems:
```
# zfs set checksum=fletcher4 ztrove
# zfs set atime=off ztrove
# zfs create -o mountpoint=/export/trove -o setuid=off ztrove/trove
# chmod 1777 /export/trove
# zfs set checksum=fletcher4 zhome
# zfs set atime=off zhome
# zfs create -o mountpoint=/export/home -o setuid=off zhome/home
# chmod 1777 /export/home
```
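A quick check (a verification sketch, not part of the original procedure) that both pools and their datasets look right:
```
# zpool list ztrove zhome
# zpool status ztrove zhome
# zfs list -o name,used,available,mountpoint ztrove/trove zhome/home
```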
Add the following stanza to /etc/rc.conf:
hostname="ambrosia.ucsc.edu" ifconfig_ix0=" inet 128.114.126.229 netmask 255.255.255.224 -tso" ifconfig_ix1=" inet 10.7.7.1 netmask 255.255.0.0 mtu 9000 -tso" ifconfig_igb0=" inet 10.6.7.1 netmask 255.255.0.0 -tso" defaultrouter="128.114.126.238"Note due to a device driver bug for the Intel 10GbE adapter in FreeBSD 9.2, I turned off TSO (TCP Segmentation Offload). Ambrosia has since been upgraded to FreeBSD 9.3 and the bug may have been fixed.
Apply the settings to the networking system:
```
# service netif restart
# service routing restart
```
or:
```
# /etc/rc.d/netif restart && /etc/rc.d/routing restart
```
To enable starting NFS server at boot time, add these options to /etc/rc.conf[2]:
rpcbind_enable="YES" nfs_server_enable="YES" nfs_server_flags="-u -t -n 256"Ambrosia will serve NFS in both UDP and TCP transports using 256 daemons (nfsd -u -t -n 256).
Start the NFS server:
```
# service nfsd start
```
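To confirm that the NFS services registered correctly with rpcbind (a quick sanity check, not part of the original procedure):
```
# rpcinfo -p localhost
# nfsstat -s
```
rpcinfo should list nfs and mountd among the registered programs, and nfsstat -s shows the server-side NFS statistics.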
The goal was to export the ZFS filesystems, /export/home & /export/trove, over NFS to both of the private subnets in the Hyades cluster:
- to private GbE subnet 10.6.0.0/16, with root_squash
- to private 10GbE subnet 10.7.0.0/16, with no_root_squash
Export the ZFS filesystems to the private 10GbE subnet:
```
# zfs set sharenfs="-maproot=root -network=10.7.0.0/16" zhome/home
# zfs set sharenfs="-maproot=root -network=10.7.0.0/16" ztrove/trove
```
which produced the following /etc/zfs/exports:
```
# !!! DO NOT EDIT THIS FILE MANUALLY !!!
/export/home	-maproot=root -network=10.7.0.0/16
/export/trove	-maproot=root -network=10.7.0.0/16
```
The shares are immediately exported to the private 10GbE subnet, with no_root_squash (-maproot=root). There is no need to reload or restart mountd.
Manually create /etc/exports:
```
/export/home -maproot=nobody -network=10.6.0.0/16
/export/trove -maproot=nobody -network=10.6.0.0/16
```
The shares will be exported to the private GbE subnet, with root_squash (-maproot=nobody). However, to make the change take effect immediately, we have to force mountd to reread /etc/exports:
```
# service mountd onereload
```
Let's test it:
```
# showmount -e
Exports list on localhost:
/export/trove                      10.6.0.0 10.7.0.0
/export/home                       10.6.0.0 10.7.0.0
```
The combination of ZFS and NFS stresses the ZIL to the point that performance falls significantly below expected levels[3]. Let's disable ZIL on the exported datasets:
```
# zfs set sync=disabled zhome/home
# zfs set sync=disabled ztrove/trove
```
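Note that sync=disabled trades safety for speed: synchronous writes acknowledged shortly before a crash or power loss can be lost, although the on-disk consistency of the pool itself is preserved. To confirm the setting took effect:
```
# zfs get sync zhome/home ztrove/trove
```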
There appears to be an interoperability issue between the FreeBSD NFS server and Linux NFS clients. If we mount the exported NFS share /export/home at /home on a client running RHEL/CentOS 6 without the nolock option, a (t)csh login session hangs for a long time while trying to obtain a lock from the server; bash, on the other hand, seems to work fine.
On FreeBSD, the NFS file locking daemon, rpc.lockd, provides monitored and unmonitored file and record locking services in an NFS environment. It typically operates in conjunction with rpc.statd to monitor the status of hosts requesting locks. These two daemons are optional and disabled by default; note that a functional NFS server requires at least three services running: rpcbind, mountd, and nfsd.
However, even after starting the lockd and statd services on the FreeBSD server, mounting the home share without the nolock option still causes a (t)csh login session to hang for a long time, unable to obtain a lock.
Our workaround is to mount the exported NFS shares on Linux with the nolock option. We use the noatime,nosuid,nolock,soft,retrans=16 options to mount /export/home & /export/trove on Linux:
```
# mkdir /trove
# mount -t nfs -o noatime,nosuid,nolock,soft,retrans=16 10.7.7.1:/export/trove /trove
```
Combined with the defaults, the above command results in the following mount options:
```
# grep trove /proc/mounts
10.7.7.1:/export/trove /trove nfs rw,nosuid,noatime,vers=3,rsize=65536,wsize=65536,namlen=255,soft,nolock,proto=tcp,timeo=600,retrans=16,sec=sys,mountaddr=10.7.7.1,mountvers=3,mountport=758,mountproto=udp,local_lock=all,addr=10.7.7.1 0 0
```
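To make these mounts persistent across reboots on the Linux clients, equivalent /etc/fstab entries can be used (a sketch using the same options; adjust mount points as needed):
```
# /etc/fstab on a RHEL/CentOS 6 client
10.7.7.1:/export/trove  /trove  nfs  noatime,nosuid,nolock,soft,retrans=16  0 0
10.7.7.1:/export/home   /home   nfs  noatime,nosuid,nolock,soft,retrans=16  0 0
```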
We use the stateful firewall IPFW on Ambrosia[4].
To configure the system to enable IPFW at boot time, add the following to /etc/rc.conf:
firewall_enable="YES" firewall_script="/etc/ipfw.rules"
Add IPFW ruleset to /etc/ipfw.rules:
```
#!/bin/sh
# Flush out the list before we begin.
ipfw -q -f flush

# Set rules command prefix
cmd="ipfw -q add"
pif="ix0"     # public network interface

$cmd 00001 deny ip from any to any not antispoof in

# No restriction on private network interfaces
$cmd 00005 allow all from any to any via igb0
$cmd 00008 allow all from any to any via ix1

# No restriction on loopback interface
$cmd 00010 allow all from any to any via lo0

# Allow the packet through if it has previously been added to the
# "dynamic" rules table by an allow keep-state statement.
$cmd 00025 check-state

$cmd 00100 allow tcp from me to any out via $pif setup keep-state
$cmd 00150 allow udp from me to any out via $pif keep-state
$cmd 00200 allow icmp from me to any out via $pif keep-state
$cmd 00299 deny log all from any to any out via $pif

# Deny all inbound traffic from non-routable reserved address spaces
$cmd 00300 deny all from 192.168.0.0/16 to any in via $pif
$cmd 00301 deny all from 172.16.0.0/12 to any in via $pif
$cmd 00302 deny all from 10.0.0.0/8 to any in via $pif
$cmd 00303 deny all from 127.0.0.0/8 to any in via $pif
$cmd 00304 deny all from 0.0.0.0/8 to any in via $pif
$cmd 00305 deny all from 169.254.0.0/16 to any in via $pif
$cmd 00306 deny all from 192.0.2.0/24 to any in via $pif
$cmd 00307 deny all from 204.152.64.0/23 to any in via $pif
$cmd 00308 deny all from 224.0.0.0/3 to any in via $pif

# Deny public pings
$cmd 00310 deny icmp from any to any in via $pif

# Deny rpcinfo
$cmd 00315 deny tcp from any to any 111 in via $pif

# Deny any late arriving packets
$cmd 00330 deny all from any to any frag in via $pif

# Deny ACK packets that did not match the dynamic rule table
$cmd 00335 deny tcp from any to any established in via $pif

# Allow in SSH
$cmd 00410 allow tcp from any to me 22 in via $pif setup limit src-addr 3

# Reject all incoming connections from the outside
$cmd 00499 deny all from any to any in via $pif

$cmd 00999 deny log all from any to any
```
Start the firewall:
```
# service ipfw start
```
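To confirm that the ruleset loaded and to watch the dynamic (keep-state) entries being created as connections are made:
```
# ipfw -a list
# ipfw -d list
```
The -a flag shows packet/byte counters for each static rule; -d also lists the dynamic rules added by the keep-state statements.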
We experienced a hard drive failure in June 2014. We saw the following error in dmesg:
```
mfi0: 501114 (455297952s/0x0002/info) - Unexpected sense: PD 12(e0x09/s5) Path 50000c0f0123e536, CDB: 8f 00 00 00 00 00 91 ca 32 56 00 00 10 00 00 00, Sense: 3/11/00
```
The zpool status command reported errors too:
```
# zpool status zhome
  pool: zhome
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
	attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
	using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	zhome       ONLINE       0     0     0
	  raidz2-0  ONLINE       0     0     0
	    mfid24  ONLINE       0     0     0
	    mfid25  ONLINE       0     0     0
	    mfid26  ONLINE       0     0     0
	    mfid27  ONLINE       0     0     0
	    mfid28  ONLINE       0     0     0
	    mfid29  ONLINE       0     0   296
	    mfid30  ONLINE       0     0     0
	    mfid31  ONLINE       0     0     0
	    mfid32  ONLINE       0     0     0
	    mfid33  ONLINE       0     0     0
	    mfid34  ONLINE       0     0     0
	    mfid35  ONLINE       0     0     0

errors: No known data errors
```
mfid29 was the RAID0 volume for physical drive PD12 (E2:S5). Let's fail it:
```
# mfiutil fail E2:S5
# mfiutil show drives | grep E2:S5
18 (   0.0) FAILED <WD WD4001FYYG-01SL3 VR07 serial=WD-WMC1F1253920> SCSI-6 E2:S5
# mfiutil show volumes | grep 29
 mfid29 ( 3725G) RAID-0      64k OFFLINE Writes
```
Now zpool status gave a different error:
```
# zpool status zhome
  pool: zhome
 state: DEGRADED
status: One or more devices has been removed by the administrator.
	Sufficient replicas exist for the pool to continue functioning in a
	degraded state.
action: Online the device using 'zpool online' or replace the device with
	'zpool replace'.
  scan: none requested
config:

	NAME            STATE     READ WRITE CKSUM
	zhome           DEGRADED     0     0     0
	  raidz2-0      DEGRADED     0     0     0
	    mfid24      ONLINE       0     0     0
	    mfid25      ONLINE       0     0     0
	    mfid26      ONLINE       0     0     0
	    mfid27      ONLINE       0     0     0
	    mfid28      ONLINE       0     0     0
	    1791299633  REMOVED      0     0     0  was /dev/mfid29
	    mfid30      ONLINE       0     0     0
	    mfid31      ONLINE       0     0     0
	    mfid32      ONLINE       0     0     0
	    mfid33      ONLINE       0     0     0
	    mfid34      ONLINE       0     0     0
	    mfid35      ONLINE       0     0     0

errors: No known data errors
```
I then attempted to hot-swap the failed drive, without success.
After removing the failed drive and inserting a good drive into slot 5 in the rear of the chassis, I got:
```
# mfiutil show drives | grep E2:S5
46 (   0.0) UNCONFIGURED GOOD <WD WD4001FYYG-01SL3 VR07 serial=WD-WMC1F1254853> SCSI-6 E2:S5
```
I no longer saw mfid29 in the output of mfiutil show volumes. However, every further attempt to bring the new drive online failed:
```
# mfiutil create jbod -v E2:S5
Adding drive 46 to array 35
Adding array 35 to volume 29
mfiutil: Command failed: Status: 0x54
mfiutil: Failed to add volume: Input/output error
# mfiutil add E2:S5 mfid29
mfiutil: Invalid volume mfid29: Invalid argument
# mfiutil delete mfid29
mfiutil: Invalid volume mfid29: Invalid argument
```
Perhaps the MegaCLI utility could have worked[5]. I, however, took the easier route of rebooting the server.
When the server was booting, I pressed Ctrl-H to start the LSI WebBIOS configuration utility[6]. I then discarded the preserved cache of the virtual drives in WebBIOS.
When FreeBSD booted again, I logged in and found that the virtual volume names were not persistent:
```
# mfiutil show volumes
mfi0 Volumes:
  Id     Size    Level   Stripe  State   Cache   Name
 mfid0 ( 3725G) RAID-0      64k OPTIMAL Writes
 mfid1 ( 3725G) RAID-0      64k OPTIMAL Writes
...
 mfid28 ( 3725G) RAID-0      64k OPTIMAL Writes
 mfid29 ( 3725G) RAID-0      64k OPTIMAL Writes
 mfid30 ( 3725G) RAID-0      64k OPTIMAL Writes
 mfid31 ( 3725G) RAID-0      64k OPTIMAL Writes
 mfid32 ( 3725G) RAID-0      64k OPTIMAL Writes
 mfid33 ( 3725G) RAID-0      64k OPTIMAL Writes
 mfid34 ( 3725G) RAID-0      64k OPTIMAL Writes
```
Note the new mfid29 – mfid34 were the old mfid30 – mfid35!
The output of zpool status was different too:
```
# zpool status zhome
  pool: zhome
 state: DEGRADED
status: One or more devices has been removed by the administrator.
	Sufficient replicas exist for the pool to continue functioning in a
	degraded state.
action: Online the device using 'zpool online' or replace the device with
	'zpool replace'.
  scan: none requested
config:

	NAME            STATE     READ WRITE CKSUM
	zhome           DEGRADED     0     0     0
	  raidz2-0      DEGRADED     0     0     0
	    mfid24      ONLINE       0     0     0
	    mfid25      ONLINE       0     0     0
	    mfid26      ONLINE       0     0     0
	    mfid27      ONLINE       0     0     0
	    mfid28      ONLINE       0     0     0
	    1791299633  REMOVED      0     0     0  was /dev/mfid29
	    mfid29      ONLINE       0     0     0
	    mfid30      ONLINE       0     0     0
	    mfid31      ONLINE       0     0     0
	    mfid32      ONLINE       0     0     0
	    mfid33      ONLINE       0     0     0
	    mfid34      ONLINE       0     0     0

errors: No known data errors
```
Now I was able to create a RAID0 volume for the new drive:
```
# mfiutil show drives | grep E2:S5
46 (   0.0) UNCONFIGURED GOOD <WD WD4001FYYG-01SL3 VR07 serial=WD-WMC1F1254853> SCSI-6 E2:S5
# mfiutil create jbod -v E2:S5
Adding drive 46 to array 35
Adding array 35 to volume 29
# mfiutil show config
...
    array 35 of 1 drives:
        drive 46 (   0.0) ONLINE <WD WD4001FYYG-01SL3 VR07 serial=WD-WMC1F1254853> SCSI-6
...
    volume mfid35 (3725G) RAID-0 64k OPTIMAL spans:
        array 35
```
I then ran zpool replace to replace the failed disk in zhome (1791299633 was the GUID of the failed device[7]):
```
# zpool replace zhome 1791299633 mfid35
```
Note: the command I actually used was zpool replace zhome mfi29 mfid35, which succeeded without a hitch.
It took about 4 hours for the resilver to complete:
```
# zpool status -v zhome
  pool: zhome
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
	continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Jun  6 11:30:11 2014
        25.7G scanned out of 4.49T at 347M/s, 3h45m to go
        2.11G resilvered, 0.56% done
config:

	NAME              STATE     READ WRITE CKSUM
	zhome             DEGRADED     0     0     0
	  raidz2-0        DEGRADED     0     0     0
	    mfid24        ONLINE       0     0     0
	    mfid25        ONLINE       0     0     0
	    mfid26        ONLINE       0     0     0
	    mfid27        ONLINE       0     0     0
	    mfid28        ONLINE       0     0     0
	    replacing-5   REMOVED      0     0     0
	      1791299633  REMOVED      0     0     0  was /dev/mfid29
	      mfid35      ONLINE       0     0     0  (resilvering)
	    mfid29        ONLINE       0     0     0
	    mfid30        ONLINE       0     0     0
	    mfid31        ONLINE       0     0     0
	    mfid32        ONLINE       0     0     0
	    mfid33        ONLINE       0     0     0
	    mfid34        ONLINE       0     0     0

errors: No known data errors
```
We upgraded Ambrosia to FreeBSD 9.3 in July 2014.
Upgrade Ambrosia from FreeBSD 9.2 to FreeBSD 9.3[8]:
```
# freebsd-update -r 9.3-RELEASE upgrade
# freebsd-update install
```
Reboot the machine:
```
# shutdown -r now
```
After the reboot, run freebsd-update again to complete the upgrade:
```
# freebsd-update install
```
Apply any outstanding security patches:
```
# freebsd-update fetch
# freebsd-update install
# shutdown -r now
```
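After the final reboot, the running kernel should report the new release (9.3-RELEASE, possibly with a patch-level suffix):
```
# uname -r
```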
- ^ FreeBSD Handbook - Installing FreeBSD 9.X and Later
- ^ FreeBSD Handbook - Network File System (NFS)
- ^ ZFS Tuning Guide
- ^ FreeBSD Handbook - IPFW
- ^ mfi (Dell H700) + hot swapping doesn't appear to work with RC1
- ^ MegaRAID SAS Software User Guide
- ^ FreeBSD Handbook - zpool Administration
- ^ FreeBSD Handbook - Updating and Upgrading FreeBSD