OpenStack: Deleting Zombie Cinder Volumes and VMs


First off let me start by saying that the new Cinder logo is wonderful. Nothing helps me think of backend storage better than the backend of a horse.

In an environment I am working in, we have a large number of Cinder volumes in error state because their backend storage was ripped out. The volumes were never deleted, nor were they detached from their VMs.

End result: you cannot delete the zombie VM (as it has an attached volume), and you cannot delete the zombie/orphaned volume (as it is attached to a VM).

The following process allows you to work around the chicken-and-egg scenario above.

First we get a list of all volumes in error state.

# openstack volume list --all | grep -i error
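When there are many zombies, it helps to capture just the IDs. A minimal sketch of pulling IDs out of the table output (the sample line and column layout here are assumptions for illustration; in practice, pipe the real command's output through the same filter):

```shell
# Sample line mimicking the table output of `openstack volume list --all`
# (an assumption for illustration; feed the real output in practice).
sample='| 05b372ef-ee45-499b-9676-72cc4170e1b3 | zombie-vol | error | 10 | |'

# Keep only rows in error state, then print the ID column with
# surrounding spaces stripped.
ids=$(printf '%s\n' "$sample" | grep -i error | awk -F'|' '{gsub(/ /, "", $2); print $2}')
printf '%s\n' "$ids"
```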

Then we take a closer look at the volume to see if it exists/existed on the backend that was removed.

# openstack volume show 05b372ef-ee45-499b-9676-72cc4170e1b3

We check the host attribute to confirm this volume lived on the affected backend; in this case it did.

| os-vol-host-attr:host | hostgroup@dellsc#openstack_dellsc

We also check for any current attachments. Here we see that this volume is attached to a VM with the UUID shown in the server_id field.

| attachments | [{u'server_id': u'd142eb4b-823d-4abd-95a0-3b02a3194c9f',

Now we reset the state of the volume so that it is no longer in error state.

# cinder reset-state --state available 05b372ef-ee45-499b-9676-72cc4170e1b3

Now we detach the volume via cinder.

# cinder reset-state --attach-status detached 05b372ef-ee45-499b-9676-72cc4170e1b3

Now we are free to delete the volume.

# openstack volume delete 05b372ef-ee45-499b-9676-72cc4170e1b3

Confirm volume deletion

# openstack volume show 05b372ef-ee45-499b-9676-72cc4170e1b3
No volume with a name or ID of '05b372ef-ee45-499b-9676-72cc4170e1b3' exists.

Now we can delete the VM.

# openstack server delete d142eb4b-823d-4abd-95a0-3b02a3194c9f

And now we confirm its deletion.

# openstack server show d142eb4b-823d-4abd-95a0-3b02a3194c9f
No server with a name or ID of 'd142eb4b-823d-4abd-95a0-3b02a3194c9f' exists.
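With many affected volumes, the reset/detach/delete sequence above is worth scripting. A sketch, assuming the same cinder and openstack CLIs used throughout:

```shell
# One function wrapping the per-volume cleanup steps from this section.
# Call it once per zombie volume ID.
cleanup_volume() {
  local vol="$1"
  # Clear the error state so Cinder will act on the volume again
  cinder reset-state --state available "$vol"
  # Break the stale attachment to the zombie VM
  cinder reset-state --attach-status detached "$vol"
  # With state and attachment cleared, deletion goes through
  openstack volume delete "$vol"
}

# Example call (needs a live cloud, so commented out):
# cleanup_volume 05b372ef-ee45-499b-9676-72cc4170e1b3
```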

PackStack: How to Create OpenStack Cinder-Volumes Manually

The PackStack installer for OpenStack is largely considered an installer for test and proof-of-concept OpenStack environments. As such, PackStack can automatically create a loopback device for Cinder to use if you allow it to. In my lab, I would rather create a separate volume for Cinder. Below are the steps needed to do so.

First you need to ensure that the PackStack installer will not try to create its loopback device. To accomplish this, modify the PackStack answer file. The example below shows where I have specified that PackStack should not create a volume on its own. Note that the default size for this volume is 20 GB; keep this in mind should you want PackStack to create the Cinder volume, as you will need a bit of free space.

# Create Cinder's volumes group. This should only be done for testing
# on a proof-of-concept installation of Cinder. This will create a
# file-backed volume group and is not suitable for production usage.
CONFIG_CINDER_VOLUMES_CREATE=n

# Cinder's volumes group size. Note that actual volume size will be
# extended with 3% more space for VG metadata.
CONFIG_CINDER_VOLUMES_SIZE=20G

In my lab environment I have attached a secondary disk to a RHEL 7 VM. The OS has named this disk /dev/sdb.

Note that other instructions I have seen on this topic do not show, or even recommend, creating a partition on your Cinder volume disk before running pvcreate. I find it best practice to always create a partition on a disk, as this helps indicate to other root users that the disk is in use.

Next, partition the disk, creating one partition of type Linux LVM (in fdisk: n to create the partition, t to set type 8e, then w to write).

[root@rhelosp ~]# fdisk /dev/sdb

Device Boot Start End Blocks Id System
/dev/sdb1 2048 41943039 20970496 8e Linux LVM

Then run pvcreate to put your disk under the control of LVM

[root@rhelosp ~]# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created

Now create the volume group as shown below. Note that the volume group must be named "cinder-volumes", as this is the name Cinder looks for by default.

[root@rhelosp ~]# vgcreate cinder-volumes /dev/sdb1
Volume group "cinder-volumes" successfully created
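For repeatable lab builds, the same fdisk/pvcreate/vgcreate sequence can be scripted non-interactively. A sketch using parted in place of interactive fdisk (the device name is an assumption; double-check it before pointing this at a real disk):

```shell
# Non-interactive equivalent of the steps above. The disk argument is
# an assumption for illustration -- verify the device name first!
prepare_cinder_disk() {
  local disk="$1"
  # Write an msdos label and one primary partition spanning the disk
  parted -s "$disk" mklabel msdos mkpart primary 1MiB 100%
  # Flag the partition as LVM, mirroring fdisk type 8e
  parted -s "$disk" set 1 lvm on
  # Put the partition under LVM control
  pvcreate "${disk}1"
  # Create the volume group name Cinder expects by default
  vgcreate cinder-volumes "${disk}1"
}

# Example call (needs root and a spare disk, so commented out):
# prepare_cinder_disk /dev/sdb
```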

Now you can proceed with your install.

OpenStack Cinder – Add Additional Backend Volumes


On this standalone host we have just added an additional disk, which has been enumerated as /dev/vdc. First, let's go ahead and create a partition on this disk and label it with the type LVM.

[root@packstack cinder(keystone_admin)]# fdisk /dev/vdc

Now let's view the partition that we created. Note that our new disk is approximately 21 GB in size; since we are working in a lab, we are not working with large disks.

[root@packstack cinder(keystone_admin)]# fdisk -l /dev/vdc1

Disk /dev/vdc1: 21.5 GB, 21473787904 bytes, 41940992 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Currently we have an existing Cinder volume group on this node. This volume group uses the default name of "cinder-volumes".

[root@packstack cinder(keystone_admin)]# vgscan | grep cinder
Found volume group "cinder-volumes" using metadata type lvm2

Now let's run pvcreate.

[root@packstack cinder(keystone_admin)]# pvcreate /dev/vdc1
Physical volume "/dev/vdc1" successfully created

And let's create a new volume group. Remember, "cinder-volumes" is already in use, so let's make sure that we choose a unique name for our new VG.

[root@packstack cinder(keystone_admin)]# vgcreate cinder-volumes-1 /dev/vdc1
Volume group "cinder-volumes-1" successfully created

Now we can add the following to the end of our cinder.conf
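The stanza itself did not survive the original post. A typical multi-backend section, assuming the stock LVM iSCSI driver and the volume group created above, would look something like this (option and driver names vary by release, so verify against your Cinder documentation):

```ini
# Assumed cinder.conf fragment -- adjust driver and option names
# for your Cinder release.
[DEFAULT]
enabled_backends = lvm,lvm1

[lvm1]
volume_group = cinder-volumes-1
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = cinder-volumes-1
```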


Now let's restart the Cinder services.

[root@packstack cinder(keystone_admin)]# systemctl | grep cinder | awk '{print $1}' | xargs -tn1 systemctl restart
systemctl restart openstack-cinder-api.service
systemctl restart openstack-cinder-backup.service
systemctl restart openstack-cinder-scheduler.service
systemctl restart openstack-cinder-volume.service

Now let's make sure that the Cinder services are up and running.

[root@packstack cinder(keystone_admin)]# systemctl | grep cinder | awk '{print $1}' | xargs -tn1 systemctl status | grep Active
systemctl status openstack-cinder-api.service
Active: active (running) since Tue 2015-04-28 17:02:29 EDT; 1min 21s ago
systemctl status openstack-cinder-backup.service
Active: active (running) since Tue 2015-04-28 17:02:29 EDT; 1min 21s ago
systemctl status openstack-cinder-scheduler.service
Active: active (running) since Tue 2015-04-28 17:02:29 EDT; 1min 21s ago
systemctl status openstack-cinder-volume.service
Active: active (running) since Tue 2015-04-28 17:02:30 EDT; 1min 20s ago

Now we create a new volume type for the new backend using the commands below.

[root@packstack cinder(keystone_admin)]# cinder type-create lvm1
| ID | Name |
| 082fc208-9d38-4ad4-ab48-110d056a2856 | lvm1 |

Now let's link the backend to the newly created volume type using a type key.

[root@packstack cinder(keystone_admin)]# cinder type-key lvm1 set volume_backend_name=cinder-volumes-1

Below you can see our new volume type and its backend name.

[root@packstack cinder(keystone_admin)]# cinder extra-specs-list
| ID | Name | extra_specs |
| 082fc208-9d38-4ad4-ab48-110d056a2856 | lvm1 | {u'volume_backend_name': u'cinder-volumes-1'} |
| b775069f-7ecc-42e3-954a-553d5a5244a2 | iscsi | {u'volume_backend_name': u'lvm'} |

Now let's create a new volume on the new backend.

[root@packstack cinder(keystone_admin)]# cinder create --volume_type lvm1 --display_name test_multi_backend 1
| Property | Value |
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2015-04-28T21:15:57.552565 |
| display_description | None |
| display_name | test_multi_backend |
| encrypted | False |
| id | 57b7c382-caa6-4c21-8a79-00c661140e3c |
| metadata | {} |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| volume_type | lvm1 |