
VMware ESXi nesting in Proxmox

So I’ll try to keep this brief. There’s one big thing that caught me off guard, though. The last time I had ESXi running on the server it was my only hypervisor, and I moved to 6.0 only to find that they had disabled the web GUI on the free license. This really left a sour taste in my mouth, as it did for many others. We live in the era of HTML5, for Pete’s sake; there’s no reason to require a standalone application just to manage a server. This was the main driving factor in my migrating all my VMs to Proxmox. At the time it wasn’t that difficult, as my servers were pretty simplistic: Turnkey solutions or pre-built distros with a couple of config files to save. Nonetheless, at the time it felt like a big move.

Things are a little different now. I have 200+ gigabytes just on my cloud server, two LAMP stacks, and a LEMP stack, including the one hosting this blog. More hours went into these servers than any others I’ve ever deployed, so moving again is just about out of the question… until I saw this:

[Screenshot from 2016-11-11 18-34-44]

Well, this just changes everything, doesn’t it? The last thing I want to do is migrate my hypervisor again, but I have to at least consider it now. The one thing I really missed from ESXi was the system health information. I’ve had mixed success getting that detail and those notifications out of Proxmox, or loading HP’s hp-utils packages and running their web service locally. The fact of the matter is, if a disk on my host were to fail, I wouldn’t know unless I logged into the HP web service and manually checked it. That’s pathetic; I should really have a better solution. I know that solution is in VMware, and it comes with little configuration and administrative overhead. It looks like I’ll be testing ESXi for a couple of weeks, and if all goes well I’ll be migrating back over. That’s all for a later date.


So obviously, ESXi is nested.

The whole process was straightforward, keep-it-simple-stupid style. There were only a couple of tweaks that had to take place:

echo "options kvm ignore_msrs=y" > /etc/modprobe.d/kvm-intel.conf
echo "options kvm-intel nested=Y ept=Y" >> /etc/modprobe.d/kvm-intel.conf

was required on the Proxmox host (the kvm-intel.conf file didn’t exist beforehand). Other than that, append

args: -machine vmport=off

to the file within /etc/pve/qemu-server/ that matches your VM ID. A sketch of both host-side steps is below.
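To tie the host-side changes together, here’s a minimal sketch, assuming a hypothetical VM ID of 100 (substitute whatever ID Proxmox gave your ESXi VM):

# hypothetical VM ID 100; use your own
echo 'args: -machine vmport=off' >> /etc/pve/qemu-server/100.conf

# the modprobe options take effect after a reboot (or a module reload);
# confirm nesting is actually on:
cat /sys/module/kvm_intel/parameters/nested   # should print Y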


Other than that, the only important part of setting up the ESXi VM was that the CPU type had to be set to host, the network card to VMware vmxnet3, and the OS type to Other. All in all, little to do here; a sketch of the resulting config follows.
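For reference, the VM’s config file ends up looking roughly like this (the MAC address and bridge below are made up for illustration):

# /etc/pve/qemu-server/100.conf (illustrative excerpt; ID 100 is hypothetical)
args: -machine vmport=off
cpu: host
net0: vmxnet3=DE:AD:BE:EF:00:01,bridge=vmbr0
ostype: other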
Even less to do on the ESXi host. Once installed, enable SSH and run

echo 'vmx.allowNested = "TRUE"' >> /etc/vmware/config

And that’s it: reboot and all should be well.
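As an aside, my understanding is that the same flag can be set for a single guest instead of host-wide by appending it to that guest’s .vmx on the nested ESXi host; the datastore path and VM name here are invented for illustration:

# alternative to the global setting: allow nesting for one guest only
# (datastore path and VM name are hypothetical)
echo 'vmx.allowNested = "TRUE"' >> /vmfs/volumes/datastore1/myvm/myvm.vmx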
[Screenshot from 2016-11-11 20-23-22]

There’s something about a host within a host that feels like rules are being broken. And well… they kind of are. Performance here is crap. Part of the reason why is how ESXi is having its storage delivered. I didn’t want to provision a large chunk of my Proxmox host to it, so instead I just pushed its RAID array to ESXi via NFS. I hate setting up NFS on Debian systems (which Proxmox is), but it’s still pretty straightforward minus the different package names.
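For the curious, a minimal sketch of such an export; the path, subnet, and addresses are assumptions for illustration, not what I actually used:

# on the Proxmox (Debian) host: install the server and export the array
apt-get install nfs-kernel-server
echo '/mnt/raid 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# then mount it from the ESXi side (addresses and names invented):
esxcli storage nfs add --host=192.168.1.10 --share=/mnt/raid --volume-name=raid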

[Screenshot from 2016-11-11 19-11-41]

ESXi associated the share no problem, except for the speed. Physically, the disk is in the same machine as the ESXi instance, but logically it’s using the TCP/IP stack to read and write to the volume. This is one of those quirks of virtualization that makes it great and terrible at the same time. So much improvement could be had by omitting some protocols when multiple machines are working on the same host, but anything like a cluster would fall on its face without following the rules of the physical world.

[Screenshot from 2016-11-11 20-24-50]

So this one was pretty simple. Okay, very simple. The moral of the story? This setup is slow, and VMware may have won my heart back with their truly excellent web GUI. Depending on how the testing goes, there may be more to follow.