Using virtual machines to replicate a production environment on a development machine is an important part of our development workflow at work. On my own projects, it likewise lets me do the same for different clients without cluttering my host. Vagrant has become an indispensable tool for managing those VMs and their configuration. There are, however, downsides to spinning up VMs to serve your website. Sure, CPU- and memory-intensive tasks likely don’t perform as well on a VM as they do on the host, but that’s not a downside most people are concerned with. If you’re like me, it’s disk performance that you’re concerned with, and in particular sharing files between the host and the guest VM.
Research and VirtFS
If you search the internet for information on file sharing performance with virtual machines, or Vagrant in general, you’ll find articles, opinions, and a few benchmarks. You might find an article by Mitchell Hashimoto, the creator of Vagrant, comparing filesystem performance in virtual machines. He compares Virtualbox shares (using vboxsf), VMware shares (using vmhgfs), NFS with Virtualbox, and native performance on both Virtualbox and VMware virtual machines, which would be representative of an rsync approach to file sharing with the guest. In your search, you may also find sources claiming that VirtFS is the fastest sharing mechanism available. This page describes VirtFS as “a new paravirtualized filesystem interface designed for improving passthrough technologies in the KVM environment.” VirtFS uses a protocol called 9P, developed for the Plan 9 operating system, and is built on the VirtIO framework. If you want to use 9P file sharing with your virtual machine, you’re going to have to switch from Virtualbox to libvirt. There is a Vagrant plugin called vagrant-libvirt which lets you use the libvirt provider instead of Virtualbox.
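Switching providers is mostly a matter of installing the plugin and telling Vagrant which provider to use. A minimal sketch, assuming a Fedora/RHEL-family host; package names will differ on other distros:

```shell
# Install libvirt plus the headers the vagrant-libvirt plugin compiles
# against, then install the plugin and boot the machine under libvirt.
sudo dnf install -y libvirt libvirt-devel ruby-devel gcc make
sudo systemctl enable --now libvirtd
vagrant plugin install vagrant-libvirt
vagrant up --provider=libvirt
```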
Entering the VirtFS Rabbit Hole
The rsync approach to sharing files with a guest VM gives the best performance in my experience, and the comparison article above supports that. Like the other sharing approaches, though, rsync has its downsides. First, it relies on watching for changes on the host. Second, it can take time for files to sync from the host to the guest. These may not seem like serious issues, but with a large web application like we have at my work, we’ve run into file changes not being detected, the rsync watcher breaking, and files actually getting out of sync. In hopes of eliminating those issues, I did some research and found hope in VirtFS. So I installed vagrant-libvirt, made some changes to our Vagrantfile, and decided to give it a try.
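The Vagrantfile changes were roughly of this shape. This is a hedged sketch rather than our actual file: the box name and paths are placeholders, and the synced folder options follow the vagrant-libvirt plugin’s documented 9p support.

```ruby
# Sketch of a Vagrantfile using the libvirt provider with a 9p share.
# "generic/centos6", "./src", and "/var/www/app" are placeholder values.
Vagrant.configure("2") do |config|
  config.vm.box = "generic/centos6"

  # type: "9p" is provided by vagrant-libvirt; accessmode "mapped" stores
  # ownership metadata on the host instead of requiring matching UIDs.
  config.vm.synced_folder "./src", "/var/www/app",
    type: "9p", accessmode: "mapped"

  config.vm.provider :libvirt do |lv|
    lv.memory = 2048
    lv.cpus = 2
  end
end
```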
Before diving into my testing with 9p, a quick note about how I tested it. I used the same tool Mitchell used in the article above, Iozone. You can download the latest tarball and compile the tool for your platform; all of this is on 64-bit Linux, so the build target was linux-AMD64. And before you continue: I’m not posting any nice graphs like Mitchell did, mostly because I couldn’t open the document Iozone creates. Instead, I’ll give the speeds I observed and how they compared across options. I also want to clarify that I state the speeds as reported by Iozone and use the values only for comparison; real-world bandwidth will likely differ.
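For reference, downloading and building Iozone looks something like this. The version in the tarball name is a placeholder; use whatever is current on iozone.org when you download it:

```shell
# Download, unpack, and build Iozone for 64-bit Linux, then run the
# automatic test suite (-a), capping file size (-g) to keep runs short.
curl -LO http://www.iozone.org/src/current/iozone3_434.tgz
tar xzf iozone3_434.tgz
cd iozone3_434/src/current
make linux-AMD64
./iozone -a -g 1G > iozone-results.txt
```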
The first setup:
- Fedora 23 host
- CentOS 6.8 guest
- Lenovo T540p, Core i5, 16 GB RAM, physical HDD
The second setup:
- Ubuntu 16.04 host
- Ubuntu 14.04 guest
- Dell XPS 15, Core i7, 16 GB RAM, SSD
Testing Setup 1
The first thing you should be aware of is that while most Linux flavors ship kernels with 9p compiled in and enabled, CentOS does not. I’m not positive of the reason, but it has something to do with the RHEL team deciding to purposely leave it out, which is not ideal for setting up a libvirt CentOS guest. It is possible, though, to get 9p working in a CentOS guest; you can follow the recommendations in this forum post.

After getting a CentOS libvirt guest up and running, I ran Iozone on it. On the “native” filesystem of the guest, it performed similarly to the results in Mitchell’s article. After that, I tried out 9p. This is where my hopes and dreams were destroyed. According to Iozone, bandwidth stayed between roughly 100-200 MBps across its tests for both reads and writes. This was fairly disappointing, since I was expecting performance closer to the “native” values. After 9p, I tested NFS. The NFS read results were very good, much closer to “native,” but the write performance was terrible: much worse than both “native” and 9p.

For reference, Iozone on a Virtualbox share gave read/write performance of ~300 MBps, so it was clear that 9p just wasn’t fast enough. Undeterred, I decided to test real-world performance with our web application. Our home page load time on a Virtualbox machine with rsync sharing was around 400 ms. Over NFS, the home page loaded in about 1 second, and over 9p it took around 3.5 seconds. In summary, 9p and NFS just didn’t give the same performance on libvirt as our Virtualbox+rsync setup.
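Checking whether a guest kernel was built with 9p support can be scripted. This is a small sketch that inspects the kernel build config; the path under /boot is the usual location on RHEL-family and Debian-family systems, but your distro may differ:

```shell
# Return success only if both the 9P network protocol and the 9p
# filesystem are built in (=y) or available as modules (=m).
has_9p() {
  config_file="$1"
  grep -Eq '^CONFIG_NET_9P=(y|m)' "$config_file" &&
  grep -Eq '^CONFIG_9P_FS=(y|m)' "$config_file"
}
```

Typical invocation on the guest would be `has_9p "/boot/config-$(uname -r)" && echo "9p available"`; on a stock CentOS 6 kernel the check fails.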
Testing Setup 2
Since 9p wasn’t enabled (or built in) in Setup 1, I decided to give 9p one more chance, given the good opinions of it online, just to be sure. In short, nothing was different. In fact, despite the different hardware, Iozone reported similar results of around 100-200 MBps for 9p.
Whatever you do, don’t commit to converting to 9p up front. Test it out first, given the time to do so, and ensure the performance meets your needs. For the large web application we have at my work, it didn’t perform as well as our rsync setup. Going from 400 ms to almost 4 seconds for our home page load time is not something we can swallow. I also want to applaud Virtualbox for vboxsf performing rather well despite not being as “built-in” as VirtFS. If VirtFS gets faster, it will certainly be worth trying again in the future, especially given its simplicity and built-in nature in Linux environments. Going forward, I think I might stick with libvirt in place of Virtualbox, since the rsync implementation of file sharing is really no different.