Follow the steps shown below if you want to be able to run nested virtual machines on RHEL 7 via KVM.
In this particular situation I have a physical Supermicro server that I want to use to host OpenStack.
Note that my home server has Intel Xeon processors, so I first enable nested virtualization via the kvm_intel module. AMD processors use the kvm_amd module instead.
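Here is a minimal sketch of what that looks like on an Intel box (the file name under /etc/modprobe.d is my own choice, and reloading the module only works when no VMs are running):

# echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
# modprobe -r kvm_intel
# modprobe kvm_intel
# cat /sys/module/kvm_intel/parameters/nested
Y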
Also, in order to communicate with your nested VMs you will need to disable reverse path filtering, otherwise RHEL will discard any network packets whose source address does not match the route back to the sender, in order to prevent asymmetric routing. See below.
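Disabling it persistently looks something like this (the sysctl.d file name is arbitrary, and you may prefer to set it only on the specific bridge interface rather than globally):

# echo "net.ipv4.conf.all.rp_filter = 0" >> /etc/sysctl.d/98-rp-filter.conf
# echo "net.ipv4.conf.default.rp_filter = 0" >> /etc/sysctl.d/98-rp-filter.conf
# sysctl -p /etc/sysctl.d/98-rp-filter.conf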
After a few hours of poking around a newly deployed UCS cluster trying to get some basic service profiles created and assigned, I realized that I had no idea how KVM access is actually supposed to work inside a UCS cluster. Which is funny, as this was a subject that we touched on during my DCUDC class. Apparently we did not touch on it enough.
Anyway, before I get ahead of myself, let's review the gear in this cluster.
Now in my network all lights-out management IPs (iLOs, IPMI, etc.) are on one particular VLAN, which for the purpose of this post we will call VLAN 100. Non application related infrastructure equipment (servers, virtual hosts) is on another VLAN, which we will call VLAN 200. So when the Fabric Interconnects were deployed, I gave them each an IP address on VLAN 200. And once UCS Manager was up and running, I created a KVM IP address pool of unused IP addresses on VLAN 100. Well guess what, this is wrong.
Routing for the KVM addresses is done through the management interfaces on the Fabric Interconnects, so unless you are using VLAN tagging, your KVM pool must be on the same VLAN as the IP addresses assigned to your Fabric Interconnects.
But wait, why is this?
I thought that I could even assign private 192.168.x.x IP addresses to the KVMs, as they were only supposed to be reached via UCS Manager, but this is also incorrect.
Navigate to one of your working KVM IP addresses in a web browser and you can access the KVM of the blade outside of UCS Manager. Nice. This is actually how I expected it to work in the first place.
Note that as a rule I find it rather dumb to have my KVM management IPs and Fabric Interconnects on the same VLAN, however since this is how it's supposed to work I am going to have to let that one go.
Now, the fact that you can navigate to a specific KVM IP address via a web browser also makes the idea of using a pool of IP addresses silly. Would you not want to hard code the KVM IP address in the service profile so that you always know which server's console you are logging into? Dunno, I am still working on figuring that one out.
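For the record, once I knew the pool had to live on VLAN 200 alongside the Fabric Interconnects, re-creating the default management pool from the UCS Manager CLI looked roughly like the following. The addresses here are made up, and you should double check the create block argument order (first address, last address, default gateway, netmask) against the CLI configuration guide for your UCSM version:

UCS-A# scope org /
UCS-A /org # scope ip-pool ext-mgmt
UCS-A /org/ip-pool # create block 192.168.200.50 192.168.200.100 192.168.200.1 255.255.255.0
UCS-A /org/ip-pool/block* # commit-buffer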
This is my second post focused on forcibly removing something from XenCenter, as apparently XenCenter has a hard time knowing when to "forget" a device or host on its own. Now while this is annoying, it can be resolved rather easily from the command line.
Now, let's take a step back and reflect for a moment on the first post in this series… you can find it here. Anyway, that first post showed us how to remove the orphaned device via the Windows CLI; however, I find the process of using the CLI in Windows to be painful, so this post is going to outline a similar process via the CLI on the XenServer host itself.
Anyway, in the previous post I had to force remove a host from XenCenter via the CLI, and in this post I need to remove the "Local Storage", "Removable Storage", and "DVD Drive" that were associated with the orphaned host. These are the items in red below.
By clicking on each one of the items in XenCenter I can see that they are all in a detached state, plus I can see the UUID of each item. You will need the UUID for the next step.
Now SSH into your master node and run the following:
# xe sr-list
Look for the entry with the matching UUID:
uuid ( RO)                : 64e69b7d-ae97-4d42-c4d8-c260553b31d1
          name-label ( RW): Local storage
    name-description ( RW):
                host ( RO): <not in database>
                type ( RO): lvm
        content-type ( RO): user
Then run sr-forget to remove it from the XenCenter inventory. This drops the SR's record from the database without touching the underlying disk.
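Using the UUID from the output above, that looks like this:

# xe sr-forget uuid=64e69b7d-ae97-4d42-c4d8-c260553b31d1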
KVM (for Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). More information here.
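A quick way to check whether your processor exposes these extensions is to grep the CPU flags (vmx is Intel VT, svm is AMD-V); a non-zero count means you are good to go:

# egrep -c '(vmx|svm)' /proc/cpuinfo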
KVM is now native to RHEL/CentOS and replaced Xen as of RHEL 5.4.
What i have seen so far has been pretty interesting, but I have barely scratched the surface of KVM’s inner workings.
Anyway, to clone a virtual machine you need to run the virt-clone command to clone the virtual disk. This automatically creates the virtual machine for you and registers it in virt-manager.
Note that you do not need to tell virt-clone the location of the existing virtual disk. It figures this out on its own based on the name of the existing virtual machine.
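A minimal sketch of a clone, assuming an existing guest named rhel7-base (the names here are made up) that has been shut down first:

# virsh shutdown rhel7-base
# virt-clone --original rhel7-base --name rhel7-clone --auto-clone

The --auto-clone flag tells virt-clone to generate the paths for the new virtual disks on its own, so you only have to supply the source and the new name.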
What exactly is going on nowadays in the world of Red Hat Linux virtualization, anyway, since Xen has gone the way of the Dodo? I stumbled across this nice overview on another virtualization blog. There are actually several articles on that blog on the subject.