What’s Coming in OpenStack Networking for the Kilo Release

Originally posted on Red Hat Stack:

OpenStack Kilo, the 11th release of the open source project, was officially released in April, and now is a good time to review some of the changes we saw in the OpenStack Networking (Neutron) community during this cycle, as well as some of the key new networking features introduced in the project.

Scaling the Neutron development community

The Kilo cycle brings two major efforts which are meant to better expand and scale the Neutron development community: core plugin decomposition and advanced services split. These changes should not directly impact OpenStack users but are expected to reduce code footprint, improve feature velocity, and ultimately bring faster innovation speed. Let’s take a look at each individually:

Neutron core plugin decomposition

Neutron, by design, has a pluggable architecture which offers a custom backend implementation of the Networking API. The plugin is a core piece of the deployment and acts as the “glue”…


Introduction to the New RHEL OSP 7 Installer – RHEL OSP director


The downstream version of RDO-Manager, the installer that ships with the current release of RDO, will be called RHEL OSP director when it is released as part of RHEL OSP 7 (OpenStack Kilo).

OSP director (note the lowercase d in director) will not only install RHEL OSP 7, it will also handle updates, upgrades, and lifecycle management.

These three additional functions are not currently available in the RHEL OSP 6 Foreman/Staypuft installer.

Below is a nice demo of the upstream version of the installer.

How to Recover an Out of Sync Mariadb Galera OpenStack Database cluster


Introduction

This process can be used whenever your databases are out of sync, for example when someone reboots all the nodes in a cluster without first shutting down the databases.

Resolution

Place all cluster hosts into standby mode and clean up any failed resources.

I suggest making a backup of MariaDB on each controller node, just in case.

root@controller3 # mysqldump --all-databases > mariadb_dump_06152015
root@controller3 # pcs cluster standby --all
root@controller3 # pcs status
root@controller3 # pcs resource cleanup ${resource}

Then, on each controller node, verify that MariaDB (mysql) has stopped. If any instance was not stopped properly by PCS, as shown below, stop it manually.

# systemctl status mariadb.service
mariadb.service - MariaDB database server
   Loaded: loaded (/usr/lib/systemd/system/mariadb.service; disabled)
   Active: inactive (dead)
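
If MariaDB is still active on any node, stopping it manually should look something like this (a minimal sketch, assuming the unit name shown above):

# systemctl stop mariadb.service
# systemctl status mariadb.service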

Find the latest (i.e., largest) version of the DB (seqno), or choose a node if all have the same version. In my case, controller3 had the highest version number, so we will be doing most of our recovery work on that node.

root@controller3 # cat /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid:    2fb3bbe0-eed6-11e4-ac79-4b77a337d598
seqno:   12175047
cert_index:
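
To compare the seqno across all of your controllers in one pass, a small loop such as the one below can help (a sketch; the hostnames are examples, substitute your own):

# for h in controller1 controller2 controller3; do echo -n "$h: "; ssh $h "grep seqno /var/lib/mysql/grastate.dat"; done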

Stop puppet if it is running, then edit the /etc/my.cnf.d/galera.cnf file. If you are not running puppet, or if puppet is not managing your Galera cluster, skip the puppet step and go straight to editing the file.

root@controller3 # systemctl stop puppet

Set wsrep_cluster_address manually in /etc/my.cnf.d/galera.cnf, making a note of the original value, and restart MariaDB manually.

For example, your default configuration should look something like what is shown below. Each IP address listed is the IP address of a mariadb instance in your cluster.

# Group communication system handle
wsrep_cluster_address="gcomm://172.17.9.23,172.17.9.24,172.17.9.22"

Modify it by commenting out the default string and adding the string shown in the example below. Note that in this example we are working on controller3.

# Group communication system handle
#wsrep_cluster_address="gcomm://172.17.9.23,172.17.9.24,172.17.9.22"
wsrep_cluster_address="gcomm://"

Now restart MariaDB manually on controller3.

root@controller3 # systemctl start mariadb

Now start MariaDB on one of the remaining controllers.

root@controller2 # systemctl start mariadb

Below you can see the newly started MariaDB instance as it recovers its position and syncs.

Jun 15 10:08:02 controller2 mysqld_safe[24999]: 150615 10:08:02 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.
Jun 15 10:08:02 controller2 mysqld_safe[24999]: 150615 10:08:02 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
Jun 15 10:08:02 controller2 mysqld_safe[24999]: 150615 10:08:02 mysqld_safe WSREP: Running position recovery with --log_error='/var/lib/mysql/wsr...ver.pid'
Jun 15 10:08:04 controller2 mysqld_safe[24999]: 150615 10:08:04 mysqld_safe WSREP: Recovered position 2fb3bbe0-eed6-11e4-ac79-4b77a337d598:12175047
Jun 15 10:08:06 controller2 systemd[1]: Started MariaDB database server.


Once MariaDB is running, you can unstandby the nodes and monitor the status as the remaining resources come back online.

root@controller3 # pcs cluster unstandby --all
root@node # pcs status

Finally, set wsrep_cluster_address back to its original value in /etc/my.cnf.d/galera.cnf and restart the service, monitoring that the resource remains active.

root@controller3 # vi /etc/my.cnf.d/galera.cnf
root@controller3 # systemctl restart mariadb
root@controller3 # pcs status

Now check your databases to make sure that they are in sync. The following file should be the same on each controller.

[root@lppcldiuctl01 ~(openstack_admin)]# cat /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid:    2fb3bbe0-eed6-11e4-ac79-4b77a337d598
seqno:   -1
cert_index:
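
As an additional sanity check, you can also query Galera's own status variables on each controller; something along these lines should report the expected cluster size and a Synced state:

# mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
# mysql -e "SHOW STATUS LIKE 'wsrep_local_state_comment';"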

Reference:
https://access.redhat.com/solutions/1283463

OpenStack: How to Remove RabbitMQ Durable Queues

Introduction

You can see this error in multiple places, one being the RabbitMQ logs and another being /var/log/cinder/volume.log.

The tail statement below is very helpful for finding errors in OpenStack.

tail -fn0 /var/log/{nova,cinder,glance}/*.log | egrep 'ERROR|TRACE|WARNING'

Documented Errors:

EXAMPLE 1:

oslo.messaging._drivers.impl_rabbit PreconditionFailed: Exchange.declare: (406) PRECONDITION_FAILED - cannot redeclare exchange 'openstack' in vhost '/' with different type, durable, internal or autodelete value

EXAMPLE 2:

2015-06-08 09:52:17.367 8437 ERROR oslo.messaging._drivers.impl_rabbit [-] Failed to declare consumer for topic 'cinder-scheduler': Queue.declare: (406) PRECONDITION_FAILED - parameters for queue 'cinder-scheduler' in vhost '/' not equivalent

Warning

Note that you need to make sure that you are only working with OpenStack-specific queues.

Cause

This is caused by a mismatch in the durable queues setting in your configuration files on the controller nodes. You cannot simply change the setting to correct the issue; you must also manually delete the affected queues.
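
The option involved is typically amqp_durable_queues in each service's configuration (the exact section it lives in varies by release). A quick way to compare the setting across services might look something like this (a sketch, assuming default config file locations):

# grep amqp_durable_queues /etc/nova/nova.conf /etc/cinder/cinder.conf /etc/glance/glance-api.conf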

Resolution

# pcs resource unmanage mysqld
# pcs resource unmanage rabbitmq-server
# pcs cluster standby --all
# curl http://localhost:15672/cli/rabbitmqadmin > rabbitmqadmin
# chmod +x rabbitmqadmin
# ./rabbitmqadmin help subcommands
# rabbitmqctl list_queues

The command above will give you a list of all the queues, so now you can search for the affected queue and delete it as shown. In this example I am deleting the queue "notifications.info".

[root@lppcldiuctl01 ~(openstack_admin)]# ./rabbitmqadmin --username=openstack --password=81d86697132a45a55 delete queue name=notifications.info
queue deleted

If you have multiple queues affected, you can run through several at a time as shown below (edit the queues file so that it contains only the queues you want to delete before running the loop).

# rabbitmqctl list_queues | awk '{print $1}' > queues
# vim queues
# cat /etc/rabbitmq/rabbitmq.config
# ./rabbitmqadmin --username=openstack \
    --password=81d86697132a45d9832d7fb35d168a55 delete queue \
    name=reply_ea13ffaf100de1baca
# for q in $(<queues); do ./rabbitmqadmin --username=openstack \
    --password=81d8669713 delete queue name=$q; done

Now bring everything back online.

# pcs cluster unstandby --all

Now re-manage the services that you unmanaged above.

# pcs resource manage mysqld
# pcs resource manage rabbitmq-server

Note that the documentation above can be modified to delete exchanges as well as queues. More information on exchanges below.
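
For example, deleting an exchange with rabbitmqadmin looks much the same as deleting a queue (a sketch; substitute your own credentials and the exchange you need to remove):

# ./rabbitmqadmin --username=openstack --password=<password> delete exchange name=openstack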

https://www.rabbitmq.com/tutorials/amqp-concepts.html

A Simple How To Guide for TMUX

TMUX is a terminal multiplexer, a command line tool for enabling time travel, and a utility that can be utilized for safe trans-dimensional travel. Note, however, that only the first part of that last sentence is actually true.

The purpose of this post is not to teach you (or me) how to become a TMUX expert; rather, we are just going to learn how to do a few simple yet very useful things that I feel are the best features of TMUX.

Creating your TMUX Config File

First you need to install tmux via your favorite package manager. Next you can drop the .tmux.conf profile below into your home directory. Don't forget to source it (tmux source-file ~/.tmux.conf) if tmux is already running. This file was given to me by a co-worker and seems to make tmux pretty and functional.

This .tmux.conf sets your bind-key to "Ctrl-a". From here on out we will refer to this as our bind-key.


#.tmux.conf in home directory

set -g prefix C-a

#bind C-c run "tmux show-buffer | xclip -i -selection clipboard"
bind C-c run "tmux show-buffer | xclip -i -selection primary"

bind-key C-a last-window
bind-key a send-key C-a
#bind-key M-a send-prefix

bind-key h select-pane -L
bind-key ^H select-pane -L
bind-key k select-pane -U
bind-key ^K select-pane -U
bind-key j select-pane -D
bind-key ^J select-pane -D
bind-key l select-pane -R
bind-key ^L select-pane -R

bind-key J resize-pane -D 5
bind-key K resize-pane -U 5
bind-key H resize-pane -L 5
bind-key L resize-pane -R 5

bind-key S setw synchronize-panes

#set-option -g lock-command '/usr/bin/vlock'
set-option -g lock-after-time 300
bind-key X lock-client

# vi ftw
set-window-option -g mode-keys vi

# bind-key N new-session
# Enhanced new-session: set session name
bind-key N command-prompt -p name: "new-session -s %1"

#### COLOR (Solarized dark)
# default statusbar colors
set-option -g status-bg black #base02
set-option -g status-fg yellow #yellow
set-option -g status-attr default

# default window title colors
#set-window-option -g window-status-fg brightblue #base0
#set-window-option -g window-status-bg default
#set-window-option -g window-status-attr dim

# active window title colors
#set-window-option -g window-status-current-fg brightred #orange
#set-window-option -g window-status-current-bg default
#set-window-option -g window-status-current-attr bright

# pane border
#set-option -g pane-border-fg black #base02
#set-option -g pane-active-border-fg brightgreen #base01

# message text
#set-option -g message-bg black #base02
#set-option -g message-fg brightred #orange

# pane number display
#set-option -g display-panes-active-colour blue #blue
#set-option -g display-panes-colour brightred #orange

# clock
#set-window-option -g clock-mode-colour green #green

Split Window Panes

This is my favorite feature of TMUX. Need to monitor several logs at once across multiple machines without having to switch back and forth between tabs or separate terminal windows? Then you are in luck. See the screenshot below for an example. Here I have three separate panes.


[Screenshot: a tmux window split into three panes]

To create a split, use the key bindings below.

Run the command below to split the screen vertically (two panes side by side).

<bind-key> %

Run the command below to split the screen horizontally (one pane above the other).

<bind-key> "

Now that you have created a bunch of panes, you need to move between them. Use the commands below to accomplish this.

bind-key <UP ARROW>
bind-key <DOWN ARROW>
bind-key <RIGHT ARROW>
bind-key <LEFT ARROW>
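
The .tmux.conf above also binds capital H, J, K, and L to resize the active pane (left, down, up, and right respectively) in 5-cell steps. For example, to grow the active pane downward:

bind-key <J>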

To synchronize panes, run the command below. For example, let's say I jump between the three panes in my example image above and ssh to separate servers. I then want to run "tail -f /var/log/secure" on each server, but I do not want to type the command three times. Instead I sync my panes and run the command once. The command below accomplishes this.

bind-key <S>

To unsynchronize, run the same command again; synchronize-panes is a toggle, so bind-key <S> turns it off as well.

Session Management

TMUX operates very similarly to Screen when it comes to session management.

The command below will create a new session called test_session.

$ tmux new -s test_session

To detach from the test session that you just created, use the command below.

$ tmux detach

Now let's re-attach to our test session.

$ tmux attach -t test_session

OK, now after re-attaching to our test session, we create another session called test_session_2. See below.

$ tmux new -s test_session_2

Now let's detach from our currently active session, using the same detach command as shown a few steps above.

$ tmux detach

Now let's list all our open TMUX sessions.

$ tmux list-sessions
test_session: 1 windows (created Wed May 20 13:28:34 2015) [125×33]
test_session_2: 1 windows (created Wed May 20 13:33:35 2015) [125×33]

Finally, we can switch between sessions with the switch command.

$ tmux switch -t test_session

Oh, let's not forget: we can kill a session with the command below.

$ tmux kill-session -t  test_session

Also note that you can close an active session by exiting its shell with <ctrl> d, or detach from it with <bind-key> d.

OpenStack Nova: Get NoVNC Connection Info From the Command Line

To access a NoVNC console in OpenStack without logging into the Horizon WebUI, follow the steps below.

First and foremost you need to find your virtual instance. In this example I am looking for one of my test instances.

# nova list | grep test

The command above spews out a lot of information, as I have many test instances. However, this is the one that I am looking for specifically.

| 2a5481e6-9210-4353-af8f-db49e052d5c6 | fatmin-test-1 | ACTIVE | - | Running | fatmin_network=10.0.10.1 |

Now grab the UUID from the first field above.

# nova get-vnc-console 2a5481e6-9210-4353-af8f-db49e052d5c6 novnc

This will spit out a URL as shown below.

+-------+------------------------------------------------------------------------------------+
| Type  | Url                                                                                |
+-------+------------------------------------------------------------------------------------+
| novnc | http://10.21.148.95:6080/vnc_auto.html?token=f27af916-94a0-4a96-85fd-7b74b0748ce1 |
+-------+------------------------------------------------------------------------------------+

Now copy and paste this into a browser window to access the console for your instance.
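
If you would rather not copy the UUID by hand, something like the one-liner below should also work (a sketch; it assumes the instance name is unique in the nova list output):

# nova get-vnc-console $(nova list | awk '/fatmin-test-1/ {print $2}') novnc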