OpenStack: Mapping Ironic Hostnames to Nova Hostnames


The Hostname Problem

When deploying OpenStack via Red Hat OSP director, you configure the hostname of your baremetal (Ironic) nodes at import time. This is done via a JSON file, by default named instackenv.json (but often renamed to nodes.json). Below is an excerpt from that file.

{
  "nodes": [
    {
      "arch": "x86_64",
      "cpu": "4",
      "disk": "40",
      "mac": [
        "58:8a:5a:e6:c0:40"
      ],
      "memory": "6144",
      "name": "fatmin-ctrl0",
      "pm_addr": "10.10.1.100",
      "pm_password": "Mix-A-Lot",
      "pm_type": "pxe_ipmitool",
      "pm_user": "sir"
    }
  ]
}

 

In the sample above, I am importing a node named "fatmin-ctrl0". This will be the server name as it appears in Ironic. When Heat deploys the overcloud, this node will by default be renamed overcloud-controller-0, and any additional controller nodes will increment the index by one. The same applies to compute nodes.
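For reference, the import and introspection on the undercloud typically look something like the following. The exact client commands vary by director release (newer releases use openstack overcloud node import / introspect instead), so treat this as a sketch rather than the exact syntax for your version:

# openstack baremetal import --json instackenv.json
# openstack baremetal configure boot
# openstack baremetal introspection bulk start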

What is preferable is to configure what are referred to as "Predictable Hostnames". Using Predictable Hostnames we can do one of two things:

  1. Specify the hostname format to use and allow Nova to iterate through nodes on its own.
  2. Specify the exact hostname for Nova to use for each baremetal node.

Nova Scheduler Hints

Before we can use either of the two options above, we must first update each baremetal node with a Nova scheduler hint. In the examples below we are tagging one node to build as overcloud-controller-0 and one node to build as overcloud-compute-0.

For controllers (repeat for each controller, incrementing the index each time):

# ironic node-update <id> replace properties/capabilities="node:controller-0,boot_option:local"

For compute nodes (repeat for each compute node, incrementing the index each time):

# ironic node-update <id> replace properties/capabilities="node:compute-0,boot_option:local"
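To confirm that a capability took effect, you can inspect the node's properties afterwards, for example:

# ironic node-show <id> --fields properties

The capabilities string should now include the node:<role>-<index> tag alongside boot_option:local.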

You will then need to set your Nova scheduler hints in a Heat environment file (included in your deploy command with -e):

parameter_defaults:
  ControllerSchedulerHints:
    'capabilities:node': 'controller-%index%'
  ComputeSchedulerHints:
    'capabilities:node': 'compute-%index%'

FYI, the same process can be used with the scheduler hint parameters for the other roles (see the sketch after this list):

  • ControllerSchedulerHints
  • ComputeSchedulerHints
  • BlockStorageSchedulerHints
  • ObjectStorageSchedulerHints
  • CephStorageSchedulerHints
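For example, a Ceph storage node follows exactly the same pattern; the "ceph-0" / 'ceph-%index%' values below are just an illustrative naming convention I am assuming, not required values:

# ironic node-update <id> replace properties/capabilities="node:ceph-0,boot_option:local"

parameter_defaults:
  CephStorageSchedulerHints:
    'capabilities:node': 'ceph-%index%'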

Custom Nova Hostname Format

Referring to option 1 shown above, we can set a specific format to be used for hostnames instead of the default.

parameter_defaults:
  ControllerHostnameFormat: 'fatmin-controller-%index%'
  ComputeHostnameFormat: 'fatmin-compute-%index%'

Using the method above, the first controller node will be named fatmin-controller-0 and the first compute node will be named fatmin-compute-0. Additional nodes will increment the index.

While this is nice, as it allows us to set a customized hostname format for each type of node, it does not allow us to specify the exact hostname to be used for a specific Ironic node. We can do that with the HostnameMap.

HostnameMap

Now you may want to take this a bit further. You may want to use a custom Nova name for each compute/controller node. You can accomplish this using a HostnameMap, as shown below.

parameter_defaults:
  HostnameMap:
    overcloud-controller-0: fatmin-controller-0
    overcloud-controller-1: fatmin-controller-1
    overcloud-controller-2: fatmin-controller-2
    overcloud-compute-0: fatmin-compute-0

 

Note: when specifying the flavors in the deploy command for pre-assigned nodes, they should be specified as 'baremetal' instead of 'control' and 'compute'. This means that you do not have to assign a profile to each host; the Nova scheduler hints handle the placement decision instead.

--control-flavor baremetal \
--compute-flavor baremetal \
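Put together, the deploy command ends up looking something like the sketch below; the hostname-hints.yaml environment file name and the scale counts are assumptions on my part, so substitute whatever files and node counts your deployment actually uses:

# openstack overcloud deploy --templates \
    --control-scale 3 \
    --compute-scale 1 \
    --control-flavor baremetal \
    --compute-flavor baremetal \
    -e /home/stack/templates/hostname-hints.yaml \
    [any other environment files your deployment requires]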

So at this point we are able to align the compute or controller index in Ironic with the index in Nova. For example, you can now map your Ironic node name fatmin-ctrl0 to the Nova name fatmin-controller-0.

Special Notes for Special People

  1. I do not suggest setting the Nova name to exactly the same name that you defined for the Ironic name. While the indexes should match, the name formats should vary enough that you can easily tell whether you are looking at a Nova name or an Ironic name.
  2. The use of HostnameMap also makes it easy to replace a failed node, so that you can provision the new node with the same Nova name that was used by the original node before its premature death. Otherwise, Nova will not reuse the name of the failed node; for example, if controller-0 dies and you need to replace and redeploy it, the replacement will end up with the next free number in the index (controller-3 in a three-controller deployment).

 

 
