I needed a way to find and remove all snapshots quickly and easily from the ESXi 4 command line. Note that I am not a Windows user and would rather punch myself square in the face than use PowerShell.
So here it is… and let me warn you, it's really nasty.
for vm in `vim-cmd vmsvc/getallvms | awk '{ print $1 }' | grep -v Vmid`
do
    vim-cmd vmsvc/snapshot.removeall $vm
done
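If you are curious what that pipeline is actually grabbing, here is the filter run against some canned getallvms output (the VM names and IDs below are made up for illustration):

```shell
# Sample output shaped like `vim-cmd vmsvc/getallvms` (values are made up)
getallvms_output='Vmid Name  File                            Guest OS      Version Annotation
16   web01 [datastore1] web01/web01.vmx    rhel5_64Guest vmx-07
32   db01  [datastore1] db01/db01.vmx      rhel5_64Guest vmx-07'

# Same filter as the loop above: take column 1, drop the Vmid header row
vmids=$(echo "$getallvms_output" | awk '{ print $1 }' | grep -v Vmid)
echo "$vmids"
# Prints 16 and 32, one per line
```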
Note that this could actually take a really long time to run depending on the size of your snapshots.
Basically it finds the vmid of each and every VM on the box and attempts to remove any snapshots (if they exist). Note that you will need to do this on every node in your cluster. I plan to write a better version of this script at some point that will search the datastores for snapshots and then derive the vmid from that, but not today.
The first thing you need to know about ESXi is that it rolls its logs after a reboot, meaning that if your ESXi server crashes there will be no logs to review and no way of knowing what went to hell and where.
For this reason it is imperative that you set up a remote syslog server and send your logs there. VMware support will tell you this, and shame you if you have not set up remote syslogging.
Fortunately the first hit on Google for "ESX syslog how to" will take you directly to the VMware KB article. You can find it here. However, that page does not contain directions on how to do this via the Linux RCLI; it only contains directions for the vMA (management appliance) and for Windows PowerCLI. Really, VMware?
Follow the directions below if you run a real operating system and are not a Windows Admin.
First configure your remote ESXi host to forward syslog to your syslog server.
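On ESXi 4 you can do this from the vSphere CLI on your Linux box with vicfg-syslog. A sketch, assuming a host named esx04 and a syslog server at 192.168.1.50 (both are examples; substitute your own, and double-check the option names against your vSphere CLI version's docs):

```shell
# Point the ESXi host's syslog at the remote collector
# (host name, syslog IP, and port are examples)
vicfg-syslog --server esx04 --username root --setserver 192.168.1.50 --setport 514

# Verify the setting took
vicfg-syslog --server esx04 --username root --show
```

You will be prompted for the root password on each invocation unless you set up a session or credential store.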
Great Balls of Frustration… could the documentation be any more confusing and convoluted regarding the process of patching an ESX server via the Linux remote CLI?
No, I am not running Windows; no, I am not on the local ESX console; no, I am not running the VMware Management Appliance; and most of all, no, I do not want outdated instructions from 2008. I just want to download some patches to my local Linux desktop and patch my ESX servers. Specifically, I want to install the latest and greatest QLogic drivers to troubleshoot a SAN connectivity issue that I will probably blog about at a later date.
Anyway, here is how I did what I needed to do without resorting to using a Windows box.
First, download the driver ISOs to your local Linux desktop and mount them up locally:
mount -o loop vmware-esx-drivers-scsi-fnic_400.1.4.0.201-1vmw.2.17.00000.491446.iso /mnt
Second, locate the offline-bundle directory. I moved mine off to another directory that I created specifically for the QLogic driver. This way I don't have to unmount and remount the ISO when I patch my other boxes.
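For reference, this is roughly what that step looks like (the destination directory is my own choice; adjust the paths to your setup):

```shell
# Stash the offline bundle somewhere permanent so the ISO can be unmounted
mkdir -p ~/qlogic-driver
cp -r /mnt/offline-bundle ~/qlogic-driver/
umount /mnt
```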
Next, put the host in maintenance mode.
vicfg-hostops --username root --server esx04 -o enter
Then navigate to the offline-bundle directory and run the following command, replacing the bundle name that I used with the bundle name in your directory.
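The tool in question is vihostupdate from the vSphere CLI; something along these lines, with my host name and a placeholder bundle name (swap in the actual zip from your offline-bundle directory):

```shell
# Install the offline bundle on the host (bundle name is a placeholder)
vihostupdate --server esx04 --username root --install --bundle offline-bundle.zip
```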
Please wait patch installation is in progress …
The update completed successfully, but the system needs to be rebooted for the changes to be effective.
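Since the update wants a reboot, you can finish up with the same vicfg-hostops tool used to enter maintenance mode: reboot the host and, once it is back up, take it out of maintenance mode.

```shell
# Reboot the host (it is already in maintenance mode)
vicfg-hostops --username root --server esx04 -o reboot

# After it comes back up, exit maintenance mode
vicfg-hostops --username root --server esx04 -o exit
```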
I would actually be surprised that I was not aware of this site previously if I had not ever been to vmware.com and attempted to find a specific piece of software in the past. Needless to say, it's rather difficult to find exactly what you are looking for there.
So… apparently VMware calls the latest VMware Tools packages OSPs, or "Operating System Specific Packages".
The site is below… Navigate to the OS specific to your needs and go to town.
Interested in doing some serious virtualization on Cisco's new UCS C-Series line of servers? Seriously, why wouldn't you be. Cisco's UCS C460 M2 server supports up to 4 Intel E7-4800 processors (these are 10-core procs, people) and up to 1 TB of memory. That's 40 cores in 4 RU!
I am eager to get one of these servers, or even one of the two-proc models, into our lab and test it out. I have been hearing great things about them and think they would be perfect virtualization servers for our non-production infrastructure, where we seem to have at least 6 or 7 different QA environments. Plus, VMware and Cisco partnered together on these things.
But slow your roll, cowboy: with VMware's recent licensing announcement regarding vSphere 5 and vRAM limitations, it would take 11 Enterprise Plus licenses to utilize 1 TB of memory. At 3 to 4k per license, you are talking about 40k in licensing for one server. Nasty!
Enter the alternatives in the virtualization world: Citrix XenServer and Red Hat's RHEV. Hell, I'll even mention Microsoft's Hyper-V. However, to move to a different technology you need to be able to get your VMware VMs off of ESX and onto your new virtualization flavor of choice.
Here is how to do it in XenServer. Note that I have not tried this in the lab yet, but trust me, I will. I will also do the same with RHEV, and it's only $750 per socket per year.
Sure, the alternatives are not as advanced as VMware ESX right now, but they are making improvements slowly but surely, and I am sure that the recent VMware licensing announcements are only going to bring more potential customers to other virtualization vendors, which will only help to further mature their products.
Find out more about the Cisco UCS C-Series via the link below.
When you right-click on a VM via the vSphere client and click Edit Settings, you will see your hard disks enumerated as follows:
Hard disk 1 - Virtual Disk
Hard disk 2 - Virtual Disk
However, CentOS/RHEL enumerates your disks using a different nomenclature. See below:
Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        1044     8385898+  83  Linux
/dev/sda2            1045        2349    10482412+  83  Linux
/dev/sda3            2350        2610     2096482+  82  Linux swap

Disk /dev/sdb: 268.4 GB, 268435456000 bytes
255 heads, 63 sectors/track, 32635 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       32635   262140606   83  Linux
[[truncated]]
While you can assume that the first disk listed in ESX is the first disk listed by the OS, this is not always the case. You can attempt to compare disk sizes to further determine which disk is which; however, the sizes do not always match up.
The best way to identify disks is via the Virtual Device Node, which is visible via the virtual disk properties in ESX, and via the lsscsi command in CentOS/RHEL.
More than likely lsscsi is not installed by default, so you are going to need to either install it via yum or download an RPM and install it. Once installed, run it as seen below.
>lsscsi
[0:0:0:0]  disk  VMware  Virtual disk  1.0  /dev/sda
[0:0:1:0]  disk  VMware  Virtual disk  1.0  /dev/sdb
[0:0:2:0]  disk  VMware  Virtual disk  1.0  /dev/sdc
[0:0:3:0]  disk  VMware  Virtual disk  1.0  /dev/sdd
In each bracketed tuple, the first two numbers identify the SCSI host adapter and channel, the third number is the SCSI target, which matches the disk number shown in the virtual disk properties in the vSphere client, and the fourth is the LUN.
For example, in VMware, Hard disk 4 has a Virtual Device Node of "SCSI (0:3) Hard Disk 4", and by running lsscsi I know for sure that this disk is /dev/sdd. Now I can remove the disk from ESX with confidence and not have to worry that I removed the wrong disk.
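To spell the mapping out, here is a little awk that turns lsscsi output into controller:target -> device pairs. The lsscsi text is hard-coded below so the logic can be shown without a live box; note that with more than one virtual SCSI controller you would also want to match on the host adapter number, not just the channel.

```shell
# Sample lsscsi output, hard-coded for illustration
lsscsi_output='[0:0:0:0]  disk  VMware  Virtual disk  1.0  /dev/sda
[0:0:1:0]  disk  VMware  Virtual disk  1.0  /dev/sdb
[0:0:2:0]  disk  VMware  Virtual disk  1.0  /dev/sdc
[0:0:3:0]  disk  VMware  Virtual disk  1.0  /dev/sdd'

mapping=$(echo "$lsscsi_output" | awk '{
  gsub(/\[|\]/, "", $1)   # strip the brackets from [host:channel:target:lun]
  split($1, a, ":")       # a[2] = channel, a[3] = target
  printf "SCSI (%s:%s) -> %s\n", a[2], a[3], $NF
}')
echo "$mapping"
# Prints SCSI (0:0) -> /dev/sda through SCSI (0:3) -> /dev/sdd
```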
In early February VMware released the first update to ESXi 4.1. ESXi 4.1 Update 1 includes additional drivers and guest OS support for RHEL 6 (among others). There are also a number of bug fixes delivered, like random failures on bnx2x NICs (yikes). See the entire list below.