OpenStack: Deleting Zombie Cinder Volumes and VMs

[Image: the new Cinder logo]

First off, let me say that the new Cinder logo is wonderful. Nothing helps me think of backend storage better than the backend of a horse.

In an environment I am working in, we have a large number of Cinder volumes in an error state because their backend storage was ripped out. The volumes were never deleted, nor were they detached from their VMs.

End result: you cannot delete the zombie VM (as it has an attached volume), and you cannot delete the zombie/orphaned volume (as it is attached to a VM).

The following process allows you to work around the chicken-and-egg scenario above.

First we get a list of all volumes in error state.

# openstack volume list --all | grep -i error
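
If there are many volumes to clean up, you can pull just the IDs out for scripting. A minimal sketch, assuming the default table output (where the ID is the second column):

# openstack volume list --all | grep -i error | awk '{print $2}'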

Then we take a closer look at the volume to confirm that it lived on the backend that was removed.

# openstack volume show 05b372ef-ee45-499b-9676-72cc4170e1b3

We check the host attribute to verify that this is the affected backend. In this case, it is.

| os-vol-host-attr:host | hostgroup@dellsc#openstack_dellsc

We also check for any current attachments. Below we see that this volume is attached to a VM with the UUID shown.

| attachments | [{u'server_id': u'd142eb4b-823d-4abd-95a0-3b02a3194c9f',
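
If you only need the attachment data rather than the full table, the openstack client's output filters can pull it out directly. A quick sketch, assuming a reasonably recent python-openstackclient:

# openstack volume show -f value -c attachments 05b372ef-ee45-499b-9676-72cc4170e1b3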

Now we reset the state of the volume so that it is no longer in an error state.

# cinder reset-state --state available 05b372ef-ee45-499b-9676-72cc4170e1b3

Now we reset the attachment status via Cinder, marking the volume as detached.

# cinder reset-state --attach-status detached 05b372ef-ee45-499b-9676-72cc4170e1b3
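
Depending on your cinderclient version, both resets can likely be combined into a single call. A sketch:

# cinder reset-state --state available --attach-status detached 05b372ef-ee45-499b-9676-72cc4170e1b3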

Now we are free to delete the volume.

# openstack volume delete 05b372ef-ee45-499b-9676-72cc4170e1b3

Confirm volume deletion

# openstack volume show 05b372ef-ee45-499b-9676-72cc4170e1b3
No volume with a name or ID of '05b372ef-ee45-499b-9676-72cc4170e1b3' exists.
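
If you have dozens of these, the per-volume steps above can be wrapped in a shell loop. A rough sketch, assuming every volume matched by the grep really should be destroyed (sanity-check the list before running anything like this):

# for vol in $(openstack volume list --all | grep -i error | awk '{print $2}'); do cinder reset-state --state available --attach-status detached $vol; openstack volume delete $vol; done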

Now we can delete the VM.

# openstack server delete d142eb4b-823d-4abd-95a0-3b02a3194c9f

And now we confirm its deletion.

# openstack server show d142eb4b-823d-4abd-95a0-3b02a3194c9f
No server with a name or ID of 'd142eb4b-823d-4abd-95a0-3b02a3194c9f' exists.

Forcibly Remove a Host From XenCenter via CLI

Welcome back to another XenServer adventure. Today we are going to review the process of removing an orphaned XenServer host from XenCenter via the command line.

A few days ago one of my Xen hosts died due to an issue with its RAID controller. To revive it, I had to rebuild it and add it back into the pool. However, there was a problem: the original entry for this host in XenCenter appeared in red, indicating that it was disconnected, and a second entry for the very same host, with the same hostname, appeared next to it in the pool's host list. I was unable to remove the orphaned host via XenCenter, as there was no remove option.

I was in a pickle.

However I was able to find a solution.

First and foremost, it appears there are actually two ways to do this: from the Windows command line on the machine running XenCenter, or directly on the pool master.

This example is from the Windows CLI. Note that the UUID printed by the first command is the UUID of the unreachable, orphaned host.


C:\Program Files\Citrix\XenCenter>xe -s <Hostname_of_master> -u root -pw <root_password> pool-sync-database
You attempted an operation which involves a host which could not be contacted.
host: 560e233b-0e8b-4c2f-a641-cf1876630a6b (virt04.atlc1)

C:\Program Files\Citrix\XenCenter>xe -s <Hostname_of_master> -u root -pw <root_password> host-forget uuid=560e233b-0e8b-4c2f-a641-cf1876630a6b
WARNING: A host should only be forgotten if it is physically unrecoverable;
WARNING: if possible, Hosts should be 'ejected' from the Pool instead.
WARNING: Once a host has been forgotten it will have to be re-installed.
WARNING: This operation is irreversible.
Type 'yes' to continue
yes
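
Alternatively, the same cleanup should work directly on the pool master. A sketch, assuming you can SSH to the master as root: first find the orphaned host's UUID, then forget it.

# xe host-list params=uuid,name-label
# xe host-forget uuid=560e233b-0e8b-4c2f-a641-cf1876630a6b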

Returning to XenCenter I found that the orphaned host entry had been removed.