Vagrant Disc Performance Benchmark - DoSomethingArchive/legacy-website GitHub Wiki
Objective
To measure the disc performance of the host disc drive and compare it with the Vagrant VM and the SSHfs reverse mount, then try to improve reverse-mount performance by using NFS instead of SSHfs.
Hardware
- MacBook Pro (15-inch, Late 2011)
- Vanilla HDD Seagate Momentus 5400.6 ST9500325ASG
- SSD 250GB Samsung 840 EVO series SATA III (MZ-7TE250BW)
Software
- OS X Mavericks 10.9.4
- bonnie++: Host 1.97, VM 1.96. Available through MacPorts, Homebrew, apt-get.
- vagrant-nfs_guest 0.1.1: NFS reverse-mount Vagrant plugin
- Vagrant 1.5.4
- OSXFUSE: library versions FUSE 2.7.3 / OSXFUSE 2.6.4
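For reference, the tooling can be installed roughly like this (a sketch; package names are assumed for Homebrew and apt, and the plugin name matches the version listed above):

```shell
# Host (OS X): bonnie++ via Homebrew (MacPorts also works)
brew install bonnie++

# Host: the reverse-NFS-mount Vagrant plugin
vagrant plugin install vagrant-nfs_guest

# Guest (Ubuntu VM): bonnie++ via apt
sudo apt-get install bonnie++
```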
Results
SSHfs is on average 10 times slower than the plain HDD and 100 times slower than the SSD. The reverse NFS mount, however, turned out to be even slower: it is somewhat better on reads, but random read is 20 times worse.
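The slowdown factors can be checked directly against the K/sec figures in the tables below. A quick sanity check with shell arithmetic, using the sequential block-write numbers (note that writes are hit harder than the averages quoted above):

```shell
# Sequential block-write throughput in K/sec, copied from the bonnie++ tables below
hdd_write=50569   # Host HDD
ssd_write=446777  # Host SSD
sshfs_write=3088  # SSHfs reverse mount

echo "SSHfs vs HDD: $((hdd_write / sshfs_write))x slower"   # 16x
echo "SSHfs vs SSD: $((ssd_write / sshfs_write))x slower"   # 144x
```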
(Charts: Write, Read, Random Seek)
Data
Host HDD
bonnie++ -d. -s128M:16k -f0 -mHostHDD
Version 1.97 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size:chnk K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
HostHDD 128M:16k 50569 7 54890 5 4556956 99 3436 27
Latency 226ms 30431us 25us 135us
Version 1.97 ------Sequential Create------ --------Random Create--------
HostHDD -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 9682 49 457081 100 10944 47 545 3 541013 100 224 1
Latency 185ms 35us 139ms 415ms 14us 446ms
Host SSD
bonnie++ -d. -s128M:16k -f0 -mHostSSD
Version 1.97 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size:chnk K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
HostSSD 128M:16k 446777 32 446567 22 5454954 100 175009 1047
Latency 3916us 4203us 17us 328us
Version 1.97 ------Sequential Create------ --------Random Create--------
HostSSD -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 19405 98 395767 100 22621 98 6385 44 527564 99 3140 24
Latency 291us 75us 416us 24080us 13us 28925us
Host SSHfs Mount
bonnie++ -d. -s128M:16k -f0 -mSSHFS
Version 1.97 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size:chnk K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
SSHFS 128M:16k 3088 0 2357 0 24384 3 1070 32
Latency 2024ms 1766ms 129ms 1723ms
Version 1.97 ------Sequential Create------ --------Random Create--------
SSHFS -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 357 11 32491 42 351 2 341 10 2307 3 369 2
Latency 11669us 293us 1508ms 1511ms 41414us 44409us
Host NFS Mount
An experiment with the reverse mount driven by NFS instead of SSHfs.
bonnie++ -d. -s128M:16k -f0 -mNFS
Version 1.97 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size:chnk K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
NFS 1M:512k 21216 7 16714 7 55907 6 51.5 16
Latency 18409us 25449us 13265us 3785ms
Version 1.97 ------Sequential Create------ --------Random Create--------
NFS -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 179 14 5183 13 273 4 172 15 5087 12 257 3
Latency 406ms 1176us 402ms 404ms 1314us 761ms
VM Disc
Benchmark settings on the VM are different, because the VM syncs with the real disc in a different manner. The default file size of 2 x RAM is used. Also, -D (direct IO, O_DIRECT) and -b (fsync() after every write) are set.
sudo apt-get install bonnie++
cd /home/dosomething/
bonnie++ -bD -d. -f0 -mVM
Version 1.96 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
VM 6G 37580 28 144116 32 673620 93 6767 389
Latency 1021ms 315ms 15988us 18648us
Version 1.96 ------Sequential Create------ --------Random Create--------
VM -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 781 13 +++++ +++ 1359 13 807 14 +++++ +++ 1383 13
Latency 65078us 1084us 3839us 6751us 433us 3318us
VM Host Folder
There's also a difference in setup. There's no need for a 2 x RAM write size: data is synced to disc instantly. The creation tests are also turned off; they don't work because VirtualBox synced-folder operations are not atomic.
cd /vagrant/
bonnie++ -r0 -n0 -s128M:16k -b -d. -f0 -mVMHostMount
Version 1.96 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size:chnk K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
VMHostMoun 128M:16k 111108 52 71007 55 165312 60 9362 758
Latency 20687us 1068us 932us 6929us
VM NFS Synced Folder Experiment
In addition, I experimented with the /vagrant folder mounted from the host to the VM (not reverse) using Vagrant NFS instead of the VirtualBox synced folder. It turned out to be slower than the current setup, so I didn't include it in the charts. I also tried mounting it over TCP instead of UDP, and it became even slower.
Version 1.96 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size:chnk K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
VM 128M:16k 20824 3 20774 3 +++++ +++ 10340 96
Latency 819us 454us 473us 22679us
Version 1.96 ------Sequential Create------ --------Random Create--------
VM -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 2055 47 +++++ +++ 2440 48 2017 47 7792 68 3036 38
Latency 6237us 19101us 211ms 5481us 1385us 15970us
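For the record, the TCP-vs-UDP comparison only needs a transport mount option on the guest. A minimal sketch, assuming an NFSv3 export; the host IP and export path are placeholders:

```shell
# UDP transport (the Vagrant NFS default in these runs)
sudo mount -t nfs -o vers=3,udp 192.168.33.1:/exported/project /vagrant

# TCP transport (measured even slower here)
sudo mount -t nfs -o vers=3,tcp 192.168.33.1:/exported/project /vagrant
```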