VMware vCenter Virtual Appliance – Death to Windows… I think.

I am, and have always been, a Linux/Unix administrator, until VMware forced me to learn a bit of Windows.

This, above all, has been the very WORST part about VMware. Just to run vCenter, I had to BUY Windows, I had to install Windows, and I had to try to learn Windows. Oh, and then I had to slap some sort of virus protection on it, and figure out how to patch it. And then, worst of all, I had to figure out how to authenticate to it, as I am not administering Active Directory. No, I use OpenLDAP, as any Linux/Unix admin would.

Then on top of that I needed Windows just to run the vCenter Client and connect to my vCenter Server. Well hell, I don't run Windows, not even on my desktop, not even on my laptop… not at home and not in the office. It's been this way for 5 years, and VMware is not going to make me change it.

Now, fast forward to vSphere 5.1, and lo and behold, what is this? A real, fully functioning web client for vCenter. OK, now we are getting somewhere. I have not had a ton of experience with it yet, but my first impressions are pretty good. It's flashy, it's fast, and it appears to be fully functional.

To further make my day, the VMware vCenter Appliance is now no longer beta (this is probably old news to most). For those who are not in the know, it's Linux (albeit SUSE). So I am downloading it now and am going to give it a spin. How fully functional is it? I have no idea. Do I need a database somewhere, or does it have one built in? I do know it can connect to Oracle, which is really neat, but not as neat as Postgres or MySQL. Also, how awesome would it be if I could configure the thing to use OpenLDAP for auth?

Dunno all the answers yet, but I should have more insight on the topic soon, so stand by.

 


XenServer: How To Create A Custom Kickstart Template via the CLI

If you are reading this post, then you should know that I have been spending a lot of time as of late trying to learn XenServer, and I am doing my best to get XenServer to do my evil bidding.

When I first took a look at XenServer I was disappointed to find that you cannot PXE boot a VM unless you use the "Other Install Media" template. However, when you use this template you are not building a fully paravirtualized VM, and you lose some functionality on your VM (like hot-adding a virtual disk).

So let's say you want to kickstart a CentOS 5 64-bit VM. Traditionally in XenServer, you need to create a new VM based on the "CentOS 5 (64-bit)" template and then point your VM at your kickstart media and ks config file. Being that this is a manual process, and I am trying to automate building virtual machines, I started searching for a better way to make XenServer do what I wanted… I think I may have accomplished my goal.

So the first thing I did was create a new VM via the CLI. This step spits out a UUID for your new VM.

#xe vm-install template=CentOS\ 5\ \(64-bit\) new-name-label=Centos5.4_Kickstart
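Since every step after this needs that UUID, when scripting this it is handy to capture it in a shell variable instead of pasting it around (just a convenience on my part, not required):

#VM_UUID=$(xe vm-install template=CentOS\ 5\ \(64-bit\) new-name-label=Centos5.4_Kickstart)

Then you can pass uuid=$VM_UUID to the vm-param-set and vif-create commands that follow.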

Now set up your boot params to point your new VM at your kickstart config file.

#xe vm-param-set uuid=0415bc6c-6129-9bc2-26d7-e15625cf85a1 PV-args="ks=http://<my_kickstart_server>/kickstart/ks/centos5-u4_x86_64.cfg ksdevice=eth0"

Then tell your new VM where to find its install DVD.

#xe vm-param-set uuid=0415bc6c-6129-9bc2-26d7-e15625cf85a1 other-config:install-repository=http://<my_kickstart_server>/centos5-u4-x86_64/disc1/

Now figure out the UUID of your kickstart VLAN.

#xe network-list
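If you already know the name-label of your kickstart network, you can skip eyeballing the full list and ask xe for just the UUID (kickstart-vlan below is a made-up name, substitute your own):

#xe network-list name-label=kickstart-vlan --minimal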

Create a VIF (virtual interface) on your kickstart VLAN.

#xe vif-create vm-uuid=0415bc6c-6129-9bc2-26d7-e15625cf85a1 network-uuid=f5a61f5b-f17c-ac40-0995-c41c3a5a3ea3 device=0

Now on the next step I cheated: I used XenCenter to quick create a VM based on my new template.
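For the record, I suspect you could skip XenCenter here too. The missing piece is flipping the configured VM into a template, after which vm-install and vm-start should do the rest. A rough, untested sketch, reusing the UUID and names from above:

#xe vm-param-set uuid=0415bc6c-6129-9bc2-26d7-e15625cf85a1 is-a-template=true

#xe vm-install template=Centos5.4_Kickstart new-name-label=newvm01

#xe vm-start uuid=<new_vm_uuid>

The vm-install prints the UUID of the new VM, which is what you feed to vm-start; newvm01 is just a placeholder name.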

Now when I create a new VM from this template, it starts to kickstart on boot. My next steps are to create multiple templates, each based on my different kickstart images/configs. Then I need to figure out how to set their IPs and hostnames.

Hopefully that post is coming soon.

 

Cloning A Virtual Machine in KVM

KVM (for Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V).

KVM is now native to RHEL/CentOS, and has replaced Xen as of RHEL 5.4.

What I have seen so far has been pretty interesting, but I have barely scratched the surface of KVM's inner workings.

Anyway, to clone a virtual machine you need to run the virt-clone command to clone the virtual disk. This automatically creates the virtual machine for you and registers it in virt-manager.

#virt-clone --original vm01 --name vm02 --file /vol0/vm02.img

Note that you do not need to tell virt-clone the location of the existing virtual disk. It figures this out on its own based on the name of the existing virtual machine.
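To double check that the clone is registered, virsh (which ships with the same libvirt tooling as virt-clone) will list it:

#virsh list --all

The new vm02 should show up in the list, shut off and ready to be started.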

ESXi 4.1 – Three Ways to Kill a Virtual Machine from the Command Line.

You can log into an ESXi 4.1 host and issue the following command to get a list of all running and registered VMs.

# vm-support -x

The vm-support command above will give you the World ID of the VM that you are looking to kill. Pass that World ID to the -X flag to kill the VM; this also collects debugging data, and the -w option sets the directory the output is written to.

# vm-support -X <World_ID> -w <dir>

Another way to kill or shut down a VM is with the vim-cmd command. Use the command below to find your vmid.

vim-cmd vmsvc/getallvms

Then kill it with…

vim-cmd vmsvc/power.off <vmid>
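Note that power.off is a hard power-off. If the VM is healthy enough to respond, and VMware Tools is running in the guest, a cleaner alternative is a guest shutdown:

vim-cmd vmsvc/power.shutdown <vmid>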

The nastiest and least clean method is to actually kill the Unix PPID (parent process ID) of the VM. Take a look at the command below and substitute your VM's name. The PPID is the second number on the mks and vcpu lines.

# ps | grep  <my_vm_name>
10071899      vmm0:<my_vm_name>
10051474 10071896 mks:<my_vm_name>   /bin/vmx
10084243 10071896 vcpu-0:<my_vm_name> /bin/vmx
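In this example the parent process is 10071896, so killing it tears the whole VM down. Try a plain kill first, and escalate to -9 only if the process ignores it:

# kill 10071896
# kill -9 10071896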


Red Hat I/O Scheduler Configuration in a Virtual Machine.

The Linux kernel controls disk I/O scheduling and is responsible for its optimization. One of the ways it does this is via an I/O elevator, which reorders and schedules pending I/O requests to minimize the time spent moving the disk head. This reduction in movement results in a reduction in disk seek time, which in turn maximizes hard disk throughput.

However, because ESX has its own built-in elevator, the Linux kernel's elevator is not needed, and in fact it can hurt disk performance. So it's best to disable the elevator inside your Linux VMs.

One way to do this is to edit your /etc/grub.conf and add “elevator=noop” to the kernel line, as seen below. This will require a reboot to take effect.

title CentOS (2.6.18-53.el5)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.18-53.el5 ro root=LABEL=/ noapic acpi=off rhgb quiet notsc elevator=noop
        initrd /boot/initrd-2.6.18-53.el5.img

To make the change on the fly on a running Linux VM, run the command below, where DEVICE is your boot disk (sda, for example).

echo noop > /sys/block/DEVICE/queue/scheduler
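To verify the change, cat the same file; the active scheduler is the one shown in square brackets, so on a stock RHEL/CentOS 5 kernel you should see something like:

cat /sys/block/DEVICE/queue/scheduler
[noop] anticipatory deadline cfq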

Additional Information:

http://www.redhat.com/magazine/008jun05/features/schedulers/

http://kbase.redhat.com/faq/docs/DOC-7539

http://lonesysadmin.net/2008/02/21/elevatornoop/

http://www.redhat.com/docs/wp/performancetuning/iotuning/iosubsystem-scheduler-selection.html

http://www.linuxjournal.com/article/6931