In this lab, we are going to deploy a functional, yet simple, Overcloud via the TripleO WebUI using virtual machines. Our test deployment will consist of three Overcloud Controller Nodes (configured for HA) and one Overcloud Compute Node.
Hypervisor Details
| Interface | IP Address |
|-----------|------------|
| em1 | 10.13.32.31 |
| virbr0 | 192.168.122.1 |
| virbr1 | 192.168.122.253 |
Undercloud VM Details
| Interface | IP Address |
|-----------|------------|
| br-ctlplane | 172.16.0.1 |
| eth1 | 192.168.122.253 |
Prerequisites
Note that we have installed squid on our hypervisor node and are using that node to proxy all web traffic to the undercloud controller. This also requires configuring your web browser to use the hypervisor as its proxy.
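For reference, a minimal squid configuration for this proxying setup might look like the sketch below (the port and source ranges are assumptions based on the addresses above; adjust to your environment):

```
# /etc/squid/squid.conf (minimal sketch)
http_port 3128
acl labnet src 192.168.122.0/24 172.16.0.0/24
http_access allow labnet
http_access deny all
```

Your browser would then be pointed at the hypervisor's IP on port 3128.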
Logging into the WebUI
We first need to determine the admin password for the TripleO WebUI. We can retrieve it with the following command.
[stack@undercloud ~]$ sudo hiera admin_password
Now we navigate to the WebUUI and log in as admin using the password above.
Once we log in we are greeted by the screen below.
Create a Plan
Select “Create New Plan”. Enter the “Plan Name”, select “Tar Archive (tar.gz)”, and choose the tarball you created of your templates directory. Then select “Upload Files and Create Plan”.
Note that you can also upload an entire directory (Chrome only).
Once complete, your new plan will appear as shown below.
Register Nodes
Now we need to register our nodes.
Select the “Nodes” tab, and then select the blue “Register Nodes” button.
Select “Upload From File” and select your instackenv.json. The Node Details should pre-fill as shown in the image below.
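For reference, an instackenv.json for a virtual lab like this one might look roughly like the sketch below (the MAC address, power-management address, and credentials are placeholders; `pm_type` is shown as "pxe_ipmitool", though virtual labs often use "pxe_ssh" instead):

```json
{
  "nodes": [
    {
      "name": "overcloud-ctrl01",
      "pm_type": "pxe_ipmitool",
      "pm_user": "admin",
      "pm_password": "password",
      "pm_addr": "192.168.122.10",
      "mac": ["52:54:00:aa:bb:01"],
      "cpu": "2",
      "memory": "8192",
      "disk": "40",
      "arch": "x86_64"
    }
  ]
}
```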
Now, select “Register Nodes”.
Wait while the nodes are registered. Once the process has completed, select the nodes to be used for your deployment.
In this lab we will deploy using the following nodes:
- overcloud-ctrl01
- overcloud-ctrl02
- overcloud-ctrl03
- overcloud-compute01
Select the nodes by clicking on the check-box on the left, then select “Introspect Nodes”. See screenshot below.
See the post-introspection screenshot below. Now select your introspected nodes and select “Provide Nodes” to set them to a provisionable state.
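If you prefer the CLI, the equivalent steps on the undercloud in OSP 10 look roughly like this (an illustrative sketch; the node name comes from the list above):

```
source ~/stackrc
# Introspect all nodes in the "manageable" state
openstack baremetal introspection bulk start
# Move an introspected node to the "available" (provisionable) state
openstack baremetal node provide overcloud-ctrl01
```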
Specify Deployment Configuration
From the “Deployment Plan” tab, select “Edit Configuration” under “Specify Deployment Configuration”. See screenshot below.
A new window labeled “Deployment Configuration” will appear. This window has two tabs: “Overall Settings” and “Parameters”.
Overall Settings Tab
Under the “Overall Settings” tab you can configure items such as network isolation, storage, Fluentd logging, high availability, and Neutron plugins (BSN, Cisco N1KV, OpenDaylight).
As previously mentioned, we are going to deploy three Overcloud Controller Nodes (using HA).
Under “Overall Settings”, select “Deployment Options” in the left sidebar. Check the “Pacemaker” box under “High Availability”. See screenshot below.
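For reference, checking the “Pacemaker” box corresponds roughly to including the Pacemaker environment file in a CLI-driven deployment (a sketch; the path shown is the default location of the TripleO Heat templates in OSP 10):

```
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml
```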
Parameters Tab
This tab allows you to set parameters specific to the deployment of your overcloud. Set “ComputeCount” to 1 and “ControllerCount” to 3. See screenshot below.
Once complete, select “Save Changes”.
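The same counts can also be expressed in a Heat environment file using the standard TripleO `parameter_defaults` syntax, which is useful if you later move to CLI-driven deployments:

```yaml
parameter_defaults:
  ControllerCount: 3
  ComputeCount: 1
```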
Configure Flavors
We now need to assign a flavor to each role. Note that in this lab we are using only two flavors: control and compute.
Configure Roles and Assign Nodes
First, we will associate the Controller role with the control flavor. Click on the blue pencil icon to edit the Controller role. Under “Controller Role”, scroll down until you see the field labeled “OvercloudControlFlavor”. Change “baremetal” to “control”, then select “Save Changes”. See below.
Now assign the “compute” flavor to the Compute role, following the same process you used to assign the Controller flavor. Then select “Save Changes”.
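For reference, these two flavor assignments map to the following TripleO parameters, which could equally be set in a Heat environment file:

```yaml
parameter_defaults:
  OvercloudControlFlavor: control
  OvercloudComputeFlavor: compute
```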
Assigning Nodes
Now we need to assign nodes to each role.
Starting with the Controller role, click on “Assign Nodes”. Select the three controller nodes, select “Assign/Unassign Selected Nodes”, then select “Done”. See screenshot below.
Now on to the Compute role. Click on “Assign Nodes”. Select the compute node, select “Assign/Unassign Selected Nodes”, then select “Done”. See screenshot below.
Deploying
Once you have assigned three nodes to the “Controller” role and one node to the “Compute” role, it’s time to start the deployment.
Click on “Validate and Deploy”.
You may see a screen similar to what is shown below. Since this is a lab deployment, and our failed validations are minor, we will proceed.
A progress screen will appear while the deployment is ongoing. Note that there is a search box you can use to find resources with a status of “CREATE_IN_PROGRESS” or “FAILED”.
See example output below.
Note that the deploy can take a while, so please be patient.
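While you wait, you can also watch progress from the undercloud CLI (illustrative commands; the `-n` flag sets the nested-stack depth to search):

```
source ~/stackrc
openstack stack list
openstack stack resource list -n 5 overcloud | grep -v COMPLETE
```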
Troubleshooting Deployment Failures
Note that if your deployment fails, troubleshooting it in the TripleO UI/OSP 10 is much easier than you may be used to.
If your deployment fails, you should see a screen similar to what is shown below.
You can use the TripleO UI to search for the specific failed resource, as shown below.
For more detail on your failure, run the following command, which is new in Red Hat OSP 10, on the undercloud node.
$ openstack stack failures list overcloud
The output indicates an issue with Galera/MySQL. Now I just need to troubleshoot.
Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/
Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.16.0.34]/Mysql_grant[neutron@172.16.0.34/ovs_neutron.*]: Dependency Exec[galera-ready] has failures: true
Notice: /Stage[main]/Neutron::Deps/Anchor[neutron::db::end]: Dependency Exec[galera-ready] has failures: true
Notice: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::begin]: Dependency Exec[galera-ready] has failures: true
Notice: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::end]: Dependency Exec[galera-ready] has failures: true
Notice: /Stage[main]/Neutron::Deps/Anchor[neutron::service::begin]: Dependency Exec[galera-ready] has failures: true
Notice: /Firewall[998 log all]: Dependency Exec[galera-ready] has failures: true
Notice: /Firewall[999 drop all]: Dependency Exec[galera-ready] has failures: true
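Every one of the Notice lines above is a downstream casualty of the single failing resource, Exec[galera-ready]. When scanning long puppet logs like this, a small helper can pull out the distinct failed dependencies; the following is an illustrative sketch (not part of TripleO):

```python
import re

def failed_dependencies(log_lines):
    """Extract the distinct Exec resources reported as failed dependencies."""
    deps = set()
    for line in log_lines:
        m = re.search(r"Dependency Exec\[([^\]]+)\] has failures: true", line)
        if m:
            deps.add(m.group(1))
    return sorted(deps)

sample = [
    "Notice: /Stage[main]/Neutron::Deps/Anchor[neutron::db::end]: "
    "Dependency Exec[galera-ready] has failures: true",
    "Notice: /Firewall[999 drop all]: Dependency Exec[galera-ready] has failures: true",
]
print(failed_dependencies(sample))  # ['galera-ready']
```

Feeding in the eight Notice lines above yields a single entry, galera-ready, which is where the actual troubleshooting should start.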
Hi!
Did you solve that problem? I’m facing a very similar one in the same scenario (I’m using RH Director, based on TripleO).
Thanks!
Seb
Same here, please, post your solution if you have resolved this.
After deployment, I assume you will have to ssh heat-admin@api-node into the overcloud controller nodes to configure cinder and swift to use an iSCSI disk/block devices?
What happens after all your changes when you then add another node, or upgrade to the next OpenStack major release? Will the current and new nodes retain the changes you made manually?
Should we keep ‘nic1, nic2’ unchanged, or should we replace them with eth0, p1p1, etc.?
My problem is about the network. I have 3 NICs: 1 for public, 1 for DMZ, and 1 for the iSCSI/private/heartbeat network. After deploying successfully from the UI, I’m lost… now what?
Thanks
Eric.
You should be able to configure cinder and swift by customizing your Heat templates before you upload them into the UI.
Manual changes are not considered idempotent. You will need to figure out how to include your changes in a custom Heat template that runs every time you scale or upgrade.
I suggest changing the NIC names to their actual names. See the example below.
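As a sketch of what that substitution might look like in a network-interface template (using the os-net-config `network_config` schema; the interface names em1/em2 are assumptions based on this lab's hardware):

```yaml
network_config:
  - type: interface
    name: em1        # was nic1
    use_dhcp: false
  - type: ovs_bridge
    name: br-ex
    members:
      - type: interface
        name: em2    # was nic2
```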