OpenStack Staging-Ovirt Driver: global name 'sdk' is not defined


Getting Started

The staging-ovirt driver allows OpenStack to easily use oVirt/RHV virtual machines as overcloud nodes. For those of us running virtualized OpenStack labs, it’s a huge step forward, as we previously had to hack our way around pxe_ssh or vbmc (VirtualBMC). Neither was a great solution.

In order to use the staging-ovirt driver, I first needed to configure the undercloud to use it. See my undercloud.conf below.


[DEFAULT]
local_ip = 10.1.98.2/24
undercloud_public_vip = 10.1.98.3
undercloud_admin_vip = 10.1.98.4
local_interface = eth1
masquerade_network = 10.1.98.0/24
dhcp_start = 10.1.98.100
dhcp_end = 10.1.98.120
network_cidr = 10.1.98.0/24
network_gateway = 10.1.98.2
inspection_iprange = 10.1.98.130,10.1.98.150
inspection_runbench = false
undercloud_debug = false
store_events = false
enabled_hardware_types = staging-ovirt
inspection_enable_uefi = false

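After updating undercloud.conf, run (or re-run) the undercloud installation so that the staging-ovirt hardware type is enabled:

[simterm]
$ openstack undercloud install
[/simterm]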

Then create an instackenv.json. In the example below, pm_addr is the IP of my local RHV manager, and pm_vm_name is the name of the VM as it appears in RHV.


{
  "nodes": [
    {
      "arch": "x86_64",
      "cpu": "1",
      "disk": "10",
      "mac": [
        "00:1a:4a:16:01:5a"
      ],
      "memory": "1024",
      "name": "ospd13-ctrl01",
      "pm_addr": "10.1.99.10",
      "pm_password": "redhat",
      "pm_type": "staging-ovirt",
      "pm_user": "admin@internal",
      "pm_vm_name": "ospd13-ctrl01",
      "capabilities": "profile:control,boot_option:local"
    }
  ]
}


You should then be able to import your nodes.

[simterm]
$ openstack overcloud node import instackenv.json
[/simterm]
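If the import succeeds, the nodes should appear in ironic:

[simterm]
$ openstack baremetal node list
[/simterm]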

Troubleshooting

Note that I ran into an error importing my nodes, shown below.

[{u'result': u'Node 09dfefec-e5c3-42c4-93d0-45fb44ce37a8 did not reach state "manageable", the state is "enroll", error: Failed to get power state for node 09dfefec-e5c3-42c4-93d0-45fb44ce37a8. Error: global name \'sdk\' is not defined'}, {u'result': u'Node 59dce2eb-3aea-41f9-aec2-3f13deece30b did not reach state "manageable", the state is "enroll", error: Failed to get power state for node 59dce2eb-3aea-41f9-aec2-3f13deece30b. Error: global name \'sdk\' is not defined'}, {u'result': u'Node 0895a6d0-f934-44d0-9c26-25e61b6679cb did not reach state "manageable", the state is "enroll", error: Failed to get power state for node 0895a6d0-f934-44d0-9c26-25e61b6679cb. Error: global name \'sdk\' is not defined'}, {u'result': u'Node 68bdf1cb-fe1f-48ab-b96d-fb5edaf17154 did not reach state "manageable", the state is "enroll", error: Failed to get power state for node 68bdf1cb-fe1f-48ab-b96d-fb5edaf17154. Error: global name \'sdk\' is not defined'}]


Apparently I was missing a package. I needed to yum install the package shown below and restart ironic-conductor:

[simterm]
$ sudo yum -y install python-ovirt-engine-sdk4.x86_64
$ sudo systemctl restart openstack-ironic-conductor.service
[/simterm]
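To confirm the SDK is now importable (the missing ‘sdk’ global refers to this module), a quick sanity check:

[simterm]
$ python -c 'import ovirtsdk4'
[/simterm]

If that returns without an ImportError, the driver should be able to query power state.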

OpenStack: Mapping Ironic Hostnames to Nova Hostnames


The Hostname Problem

When deploying OpenStack via Red Hat OSP director, you configure the hostname of your baremetal (ironic) nodes at time of import. This is done via a JSON file, by default named instackenv.json (but often renamed, e.g. to nodes.json). Below is an excerpt from that file.

{
  "nodes": [
    {
      "arch": "x86_64",
      "cpu": "4",
      "disk": "40",
      "mac": [
        "58:8a:5a:e6:c0:40"
      ],
      "memory": "6144",
      "name": "fatmin-ctrl0",
      "pm_addr": "10.10.1.100",
      "pm_password": "Mix-A-Lot",
      "pm_type": "pxe_ipmitool",
      "pm_user": "sir"
    }
  ]
}

In the sample above, I am importing a node named “fatmin-ctrl0”. This will be the server name as it appears in Ironic. When heat deploys the overcloud, this node will by default be renamed overcloud-controller-0, and any additional controller nodes will iterate the index by 1. Same situation for compute nodes.

What is preferable is to configure what is referred to as “Predictable Hostnames”. Using predictable hostnames, we can do one of two things:

  1. Specify the hostname format to use and allow nova to iterate through nodes on its own.
  2. Specify the exact hostname for nova to use for each baremetal node.

Nova Scheduler Hints

Before we can use either of the two options above, we must first update each baremetal node with a nova scheduler hint. In the examples below we are tagging one node to build as controller-0 (overcloud-controller-0) and one node to build as compute-0 (overcloud-compute-0).

For Controllers: Repeat for each controller

# ironic node-update <id> replace properties/capabilities="node:controller-0,boot_option:local"

For Compute Node: Repeat for each compute node

# ironic node-update <id> replace properties/capabilities="node:compute-0,boot_option:local"
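To confirm the capability stuck, inspect the node’s properties (plain node-show works; --fields narrows the output if your client version supports it):

# ironic node-show <id> --fields properties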

You will then need to set your nova scheduler hints in a Heat environment file:

parameter_defaults:
  ControllerSchedulerHints:
    'capabilities:node': 'controller-%index%'
  ComputeSchedulerHints:
    'capabilities:node': 'compute-%index%'
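Assuming you saved the above as ~/templates/scheduler-hints.yaml (the path is my own choice), pass it to the deploy command with -e:

$ openstack overcloud deploy --templates -e ~/templates/scheduler-hints.yaml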

FYI, the same process can be used for the following scheduler hint types:

  • ControllerSchedulerHints
  • ComputeSchedulerHints
  • BlockStorageSchedulerHints
  • ObjectStorageSchedulerHints
  • CephStorageSchedulerHints

Custom Nova Hostname Format

Referring to option 1 shown above, we can set a specific format to be used for hostnames instead of the default.

ControllerHostnameFormat: 'fatmin-controller-%index%'
ComputeHostnameFormat: 'fatmin-compute-%index%'

Using the method above, the first controller node will be named fatmin-controller-0, and the first compute node will be named fatmin-compute-0. Additional nodes will iterate the index.

While this is nice, as it allows us to set a customized hostname format for each type of node, it does not allow us to specify the exact hostname to be used for a specific ironic node. We can do that with the HostnameMap.

HostnameMap

Now you may want to take this a bit further. You may want to use a custom nova name for each compute/controller node. You can accomplish this using a HostnameMap as shown below.

parameter_defaults:
  HostnameMap:
    overcloud-controller-0: fatmin-controller-0
    overcloud-controller-1: fatmin-controller-1
    overcloud-controller-2: fatmin-controller-2
    overcloud-compute-0: fatmin-compute-0

Note: when specifying the flavor profiles in the deploy command for preassigned nodes, they should be specified as ‘baremetal’ instead of ‘control’ and ‘compute’. This means that you will not have to assign a profile to each host; you will let the nova scheduler hints handle the decision.

--control-flavor baremetal \
--compute-flavor baremetal \
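Put together, a deploy command using scheduler hints and a hostname map might look like the sketch below (file names are my own; adjust to your environment):

$ openstack overcloud deploy --templates \
  -e ~/templates/scheduler-hints.yaml \
  -e ~/templates/hostname-map.yaml \
  --control-flavor baremetal \
  --compute-flavor baremetal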

So at this point we are able to align the compute or controller index in ironic with the index in nova. For example, you can now map your ironic node name fatmin-ctrl0 to the nova name fatmin-controller-0.

Special Notes for Special People

  1. I do not suggest setting the nova name to exactly the same name that you defined for the ironic name. While the indexes should match, the name formats should vary enough that you can easily tell whether you are looking at a nova name or an ironic name.
  2. The use of HostnameMap will easily facilitate the replacement of a failed node, allowing you to provision the new node with the same nova name that was used by the original node before its premature death. Otherwise, nova will blacklist the nova name of the failed node. For example, if controller0 dies and you need to replace and redeploy it, it will end up being named controller4, since this is the next number in the index.

OpenStack: instackenv.json Format Example


Here is a quick and dirty example of the format of your instackenv.json. This is the file that Ironic uses to import nodes.

Enter your IPMI user ID under “pm_user” and your IPMI password under “pm_password”.
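Before importing, it can save time to verify each node’s IPMI credentials by hand. A quick check against the first node in the example below (standard ipmitool usage; -I lanplus is typical for modern BMCs):

# ipmitool -I lanplus -H 10.75.99.120 -U admin -P admin power status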


[code language="javascript"]
{
  "nodes": [
    {
      "mac": [
        "74:E6:E2:FB:71:B0"
      ],
      "cpu": "4",
      "memory": "6144",
      "disk": "40",
      "arch": "x86_64",
      "name": "control01",
      "pm_type": "pxe_ipmitool",
      "pm_user": "admin",
      "pm_password": "admin",
      "pm_addr": "10.75.99.120"
    },
    {
      "mac": [
        "74:E6:E2:FB:71:D6"
      ],
      "cpu": "4",
      "memory": "6144",
      "disk": "40",
      "arch": "x86_64",
      "name": "control02",
      "pm_type": "pxe_ipmitool",
      "pm_user": "admin",
      "pm_password": "admin",
      "pm_addr": "10.75.99.119"
    },
    {
      "mac": [
        "74:E6:E2:FB:73:D0"
      ],
      "cpu": "4",
      "memory": "6144",
      "disk": "40",
      "arch": "x86_64",
      "name": "control03",
      "pm_type": "pxe_ipmitool",
      "pm_user": "admin",
      "pm_password": "admin",
      "pm_addr": "10.75.99.118"
    },
    {
      "mac": [
        "74:E6:E2:FB:27:D4"
      ],
      "cpu": "4",
      "memory": "6144",
      "disk": "40",
      "arch": "x86_64",
      "name": "compute01",
      "pm_type": "pxe_ipmitool",
      "pm_user": "admin",
      "pm_password": "admin",
      "pm_addr": "10.75.99.117"
    }
  ]
}
[/code]
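Once the file is in place, import it into ironic. On newer director releases:

$ openstack overcloud node import instackenv.json

On older releases, the equivalent was openstack baremetal import --json instackenv.json.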

OpenStack Ironic: Manual Node Tagging and Mapping


Some of the OpenStack director guides that I have read suggest using AHC (Automated Health Check) tools to perform role matching, which matches Ironic nodes to the most appropriate roles (flavors) based on their CPU, memory, and disk configurations.

However, if you prefer to manually assign a role to a node, follow the steps outlined below.

Creating Flavors

Before we can manually assign a node to a role, we must first create the flavor for that role. In this example we are going to create 4 flavors.

First, let’s create the flavor that will be used for our OpenStack overcloud controllers.

# openstack flavor create --id auto --ram 8192 --disk 26 --vcpus 4 control
+----------------------------+--------------------------------------+
| Field                      | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| disk                       | 26                                   |
| id                         | 4007f8a4-6ea9-400e-9142-6926f0538b29 |
| name                       | control                              |
| os-flavor-access:is_public | True                                 |
| ram                        | 8192                                 |
| rxtx_factor                | 1.0                                  |
| swap                       |                                      |
| vcpus                      | 4                                    |
+----------------------------+--------------------------------------+

Now let’s create the flavor that will be used for our OpenStack overcloud compute nodes.

# openstack flavor create --id auto --ram 6144 --disk 18 --vcpus 2 compute
+----------------------------+--------------------------------------+
| Field                      | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| disk                       | 18                                   |
| id                         | 60d05417-d2ba-4f62-a37c-1bafbfaec30a |
| name                       | compute                              |
| os-flavor-access:is_public | True                                 |
| ram                        | 6144                                 |
| rxtx_factor                | 1.0                                  |
| swap                       |                                      |
| vcpus                      | 2                                    |
+----------------------------+--------------------------------------+

Since our POC deployment will also include Ceph nodes, we also need to create a ceph flavor.

# openstack flavor create --id auto --ram 4096 --disk 18 --vcpus 2 ceph
+----------------------------+--------------------------------------+
| Field                      | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| disk                       | 18                                   |
| id                         | 5480e8e6-f895-4870-b4dc-e5fff5a32911 |
| name                       | ceph                                 |
| os-flavor-access:is_public | True                                 |
| ram                        | 4096                                 |
| rxtx_factor                | 1.0                                  |
| swap                       |                                      |
| vcpus                      | 2                                    |
+----------------------------+--------------------------------------+

Important Note: You need to have a baremetal flavor created, otherwise your deployment will fail. Use the command below to create it.

# openstack flavor create --id auto --ram 8192 --disk 58 --vcpus 4 baremetal

Now let’s check our work with the command below. You should see your 4 flavors listed (baremetal, ceph, control, and compute).

# openstack flavor list
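From here, the manual mapping itself is two steps: tag each flavor with a profile capability, then tag each ironic node with the matching profile. Below is a sketch of the controller case, following the standard director workflow (repeat with the compute and ceph profiles for the other roles):

# openstack flavor set --property "capabilities:boot_option"="local" --property "capabilities:profile"="control" control
# ironic node-update <id> add properties/capabilities="profile:control,boot_option:local"

With both tags in place, the deployment will schedule tagged nodes onto the matching roles.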


OpenStack Ironic – CLI Cheat Sheet


Ironic is the baremetal provisioning component of OpenStack. Ironic interacts with hardware through a set of plug-ins, leveraging technologies such as PXE and IPMI, as well as a few vendor-specific plug-ins.

Here are a few OpenStack Ironic commands that I have found useful when troubleshooting.

List nodes registered with Ironic.

$ ironic node-list

Show details of a specific node using the node UUID – your node UUID will differ.

$ ironic node-show 5492ee96-218f-4e0e-b5b3-4931ba2c9ede

Disable maintenance mode for a specific node.

$ ironic node-set-maintenance 5492ee96-218f-4e0e-b5b3-4931ba2c9ede off

The command below is useful for watching logs. Note:

  • -u is used to specify the unit
  • -f is used to tail logs

$ sudo journalctl -u openstack-ironic-conductor -u openstack-ironic-api -f

Make a node available for scheduling by nova. Below is an example using a UUID in my environment.

$ ironic node-set-provision-state 5492ee96-218f-4e0e-b5b3-4931ba2c9ede provide

Show the MAC addresses for a particular ironic node, using its UUID.

$ ironic node-port-list 5492ee96-218f-4e0e-b5b3-4931ba2c9ede


Delete a node, using its UUID.

$ ironic node-set-provision-state f7f58a13-1ac8-4d3a-b1b8-e7a02a07a323 deleted

Force delete a node from the database. Use this option when your node is stuck in the “deleting” provision state.

Log into the MySQL database:

[stack@undercloud]$ sudo mysql -u root

Switch to the ironic database:

MariaDB [(none)]> use ironic;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Force the state change using the UUID of the ironic node:

MariaDB [ironic]> UPDATE nodes SET provision_state="available", target_provision_state=NULL WHERE uuid="f7f58a13-1ac8-4d3a-b1b8-e7a02a07a323";
Query OK, 1 row affected (0.01 sec)
Rows matched: 1 Changed: 1 Warnings: 0

MariaDB [ironic]> exit
Bye
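For reference, newer releases fold all of the above into the unified openstack client. The rough equivalents (double-check the flags on your version):

$ openstack baremetal node list
$ openstack baremetal node show 5492ee96-218f-4e0e-b5b3-4931ba2c9ede
$ openstack baremetal node maintenance unset 5492ee96-218f-4e0e-b5b3-4931ba2c9ede
$ openstack baremetal node provide 5492ee96-218f-4e0e-b5b3-4931ba2c9ede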


OpenStack Ironic: JSON Validator


This handy tool, developed by an engineer at Red Hat, checks your Ironic JSON file for errors such as missing passwords or non-unique MAC addresses. The script also tests connections to your bare metal nodes (or virtual nodes, in my test lab) to ensure that they are accessible.

Learn more via the link below:

https://github.com/rthallisey/clapper

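If you would rather pull down the whole repo, a plain clone works (the script’s exact location within the repo may have changed since this was written):

$ git clone https://github.com/rthallisey/clapper.git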

Usage, at least in my case, is shown below. Here, I am running the script against my instackenv.json file.

[code language="text"]
# python instackenv-validator.py -f instackenv.json
INFO:__main__:Checking node 192.168.122.1
DEBUG:__main__:Identified virtual node
INFO:__main__:Checking node 192.168.122.1
DEBUG:__main__:Identified virtual node
INFO:__main__:Checking node 192.168.122.1
DEBUG:__main__:Identified virtual node
INFO:__main__:Checking node 192.168.122.1
DEBUG:__main__:Identified virtual node
INFO:__main__:Checking node 192.168.122.1
DEBUG:__main__:Identified virtual node
DEBUG:__main__:Baremetal IPs are all unique.
DEBUG:__main__:MAC addresses are all unique.

——————–
SUCCESS: instackenv validator found 0 errors
[/code]