OpenStack Fundamentals Tutorial for Beginners

Per the publisher…

“The OpenStack tutorial for beginners begins with fundamentals and concepts, including the definition of cloud computing. In this OpenStack demo, the presenter provides a clear picture of the cloud service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). As you browse through OpenStack, you can come to understand its operating features. The software is built with a modular, web-based user interface for all the OpenStack services. Since OpenStack is equipped with a web GUI, you can perform most cloud operations, such as launching an instance, assigning IP addresses, and setting access controls.

The project has opened up many services, such as OpenStack Quantum, OpenStack Cinder, and OpenStack Swift. The presenter also provides the necessary information on instance lifecycle management (run, reboot, suspend, resize, and terminate instances), management of compute resources, management of local area networks, the distributed and asynchronous architecture for massively scalable and highly available systems, virtual machine (VM) image management, floating IP addresses, security groups, Role Based Access Control (RBAC), and projects and quotas.”
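For reference, the instance lifecycle operations mentioned above map onto the nova CLI roughly as follows. This is a sketch using Kilo-era python-novaclient commands; the flavor, image, and instance names are hypothetical placeholders.

nova boot --flavor m1.small --image rhel7 myinstance    # run
nova reboot myinstance                                  # reboot
nova suspend myinstance                                 # suspend
nova resume myinstance                                  # resume
nova resize myinstance m1.medium                        # resize...
nova resize-confirm myinstance                          # ...then confirm once complete
nova delete myinstance                                  # terminate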

PackStack: How to Create OpenStack Cinder-Volumes Manually

The PackStack installer for OpenStack is largely considered an installer for test and proof-of-concept OpenStack environments. As such, PackStack can automatically create a loopback device for use with Cinder if you allow it to. In my lab, I would rather create a separate volume for Cinder to use. Below are the steps needed to do so.

First, you need to ensure that the PackStack installer is not going to try to create its loopback device. To accomplish this, you must modify the PackStack answers file. The example below shows where I have specified that PackStack should not create a volume on its own. Note that the default size for this volume would be 20 GB; keep this in mind should you want PackStack to create the Cinder volume, as you are going to need a bit of free space.

# Create Cinder's volumes group. This should only be done for testing
# on a proof-of-concept installation of Cinder. This will create a
# file-backed volume group and is not suitable for production usage.
CONFIG_CINDER_VOLUMES_CREATE=n

# Cinder's volumes group size. Note that actual volume size will be
# extended with 3% more space for VG metadata.
CONFIG_CINDER_VOLUMES_SIZE=20G
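If you generated your answers file with packstack --gen-answer-file, one quick way to flip this flag is with sed. The /root/answers.txt path below is just an example; substitute your own.

# Disable PackStack's file-backed Cinder volume group
sed -i 's/^CONFIG_CINDER_VOLUMES_CREATE=.*/CONFIG_CINDER_VOLUMES_CREATE=n/' /root/answers.txt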

In my lab environment, I have attached a secondary disk to my RHEL 7 VM. The OS has named this disk /dev/sdb.

Note that other instructions I have seen on this topic do not show, or even recommend, creating a partition on your Cinder volume disk before running pvcreate. I find it best practice to always create a partition on a disk, as this helps indicate to other root users that the disk is in use.

Next, partition the disk, creating a single partition of type Linux LVM (8e):

[root@rhelosp ~]# fdisk /dev/sdb

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    41943039    20970496   8e  Linux LVM
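If you would rather script this step than walk through fdisk interactively, something like the following parted commands should produce an equivalent single LVM-flagged partition. This is a sketch; double-check the device name against your own environment before running it.

parted -s /dev/sdb mklabel msdos                 # new MBR partition table
parted -s /dev/sdb mkpart primary 2048s 100%     # one partition spanning the disk
parted -s /dev/sdb set 1 lvm on                  # flag partition 1 as LVM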

Then run pvcreate to put your disk under the control of LVM

[root@rhelosp ~]# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created

Now create the volume group as shown below. Note that the volume group must be named “cinder-volumes”, as that is the default name Cinder looks for.

[root@rhelosp ~]# vgcreate cinder-volumes /dev/sdb1
Volume group "cinder-volumes" successfully created
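Before proceeding, a quick sanity check with the standard LVM reporting commands will confirm that the physical volume and the volume group both exist:

pvs /dev/sdb1
vgs cinder-volumes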

Now you can proceed with your install.
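For example, assuming your modified answers file was saved as /root/answers.txt (an example path; use your own), the install is kicked off with:

packstack --answer-file=/root/answers.txt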

OpenStack Post Install Validation Checklist


Below is a “Post Install Validation Checklist” that I am working on for validating an OpenStack deployment. This is a work in progress and is sure to be missing some items; I plan to treat this post as a living document and update it as I find new items to add to the checklist. A sampling of CLI commands that exercise several of these checks follows the table.

Component  Task
All        On each node (compute, controller), apply post-install configurations. Ensure SSH keys for root and nova exist on all hosts.
All        If deploying with the Staypuft installer, ensure that puppet is stopped and disabled so that it does not overwrite post-install configurations.
Glance     Ensure that you can upload and configure OS images.
Nova       Ensure that you can create flavors.
Nova       Ensure that you can successfully provision test instances. Test with each flavor and OS image.
Cinder     Provision and assign a block device from Cinder. Also ensure that you can detach and delete a block device.
Neutron    Create and test tenant networks.
Neutron    Provision and assign a floating IP from an external network for each tenant network. Ensure that connectivity is successful.
Nova       Define and assign a security group to an instance, enabling at least SSH and ICMP access.
All        Reboot all nodes to ensure configuration persistence; document any configuration changes and push any fixes out to nodes when necessary.
Nova       Generate and test keypairs to be used for access to instances.
Nova       Ensure you are able to create and delete instance snapshots.
Swift      If using Swift, verify that you can create an object store container and upload or create files.
Nova       Test live migration by migrating instances across each compute node.
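To make a few of these checks concrete, here is a rough sampling of Kilo-era CLI commands that exercise several of the rows above. Names such as cirros, testkey, test-sg, test-net, ext-net, and myinstance are placeholders for your own environment, and the volume IDs must be filled in from cinder list output.

# Glance: upload an OS image
glance image-create --name cirros --disk-format qcow2 \
  --container-format bare --file cirros-0.3.4-x86_64-disk.img

# Nova: flavor, keypair, and a security group allowing SSH and ICMP
nova flavor-create m1.tiny-test auto 512 5 1
nova keypair-add testkey > testkey.pem && chmod 600 testkey.pem
nova secgroup-create test-sg "validation security group"
nova secgroup-add-rule test-sg tcp 22 22 0.0.0.0/0
nova secgroup-add-rule test-sg icmp -1 -1 0.0.0.0/0

# Cinder: provision, attach, detach, and delete a 1 GB block device
cinder create --display-name test-vol 1
nova volume-attach myinstance <volume-id>
nova volume-detach myinstance <volume-id>
cinder delete <volume-id>

# Neutron: tenant network plus a floating IP from the external network
neutron net-create test-net
neutron subnet-create test-net 192.168.100.0/24 --name test-subnet
neutron floatingip-create ext-net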

Stack_Stat: A Simple OpenStack Service Checking Function


Looking for a simple way to see which OpenStack services are enabled, running, or dead on an OpenStack compute or controller node, and to see when those services were started? Then check out this simple function that you can add to root’s .bashrc. Note that I do not take credit for writing this (I did modify it a tiny bit, however); rather, it is something a friend of mine passed on to me, and I found it too handy not to document and share.

First append the following to /root/.bashrc.

## Stack Stat Function
function stack_stat {
    # Collect the names of all installed OpenStack and Neutron unit
    # files, then print each unit's systemd "Active" status line.
    services=$(systemctl list-unit-files | egrep '(openstack|neutron)' | awk '{print $1}')
    for service in $services; do
        echo -n "$service: "
        systemctl status $service | grep Active
    done
}
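Then source the file so the function is available in your current session:

source /root/.bashrc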

What this function does is show you each installed OpenStack-related service along with its systemd “Active” status line, including when the service started. Just run the following…

# stack_stat

You will see something similar to the output below, which shows you each enabled OpenStack service and its running status.

# stack_stat
neutron-dhcp-agent.service: Active: active (running) since Fri 2015-07-10 09:35:19 EDT; 3 days ago
neutron-l3-agent.service: Active: active (running) since Fri 2015-07-10 09:35:19 EDT; 3 days ago
neutron-lbaas-agent.service: Active: active (running) since Fri 2015-07-10 09:35:17 EDT; 3 days ago
neutron-metadata-agent.service: Active: active (running) since Fri 2015-07-10 09:35:17 EDT; 3 days ago
neutron-metering-agent.service: Active: active (running) since Fri 2015-07-10 09:35:17 EDT; 3 days ago
neutron-netns-cleanup.service: Active: inactive (dead)
neutron-openvswitch-agent.service: Active: active (running) since Fri 2015-07-10 09:35:19 EDT; 3 days ago
neutron-ovs-cleanup.service: Active: active (exited) since Fri 2015-07-10 09:35:19 EDT; 3 days ago
neutron-server.service: Active: active (running) since Fri 2015-07-10 09:35:29 EDT; 3 days ago
openstack-ceilometer-alarm-evaluator.service: Active: active (running) since Fri 2015-07-10 09:35:17 EDT; 3 days ago
openstack-ceilometer-alarm-notifier.service: Active: active (running) since Fri 2015-07-10 09:35:17 EDT; 3 days ago
openstack-ceilometer-api.service: Active: active (running) since Fri 2015-07-10 09:35:17 EDT; 3 days ago
openstack-ceilometer-central.service: Active: active (running) since Fri 2015-07-10 09:35:17 EDT; 3 days ago

Now compare the output above to the output of “openstack-service status” as shown below. Note that you do not get time and date stamps.

# openstack-service status
neutron-dhcp-agent (pid 6725) is active
neutron-l3-agent (pid 6724) is active
neutron-lbaas-agent (pid 3884) is active
neutron-metadata-agent (pid 3886) is active
neutron-metering-agent (pid 3882) is active
neutron-openvswitch-agent (pid 6726) is active
neutron-server (pid 3885) is active
openstack-ceilometer-alarm-evaluator (pid 3899) is active
openstack-ceilometer-alarm-notifier (pid 3897) is active
openstack-ceilometer-api (pid 3898) is active
openstack-ceilometer-central (pid 3895) is active
openstack-ceilometer-collector (pid 3893) is active
openstack-ceilometer-notification (pid 3890) is active

I know it’s not a huge difference, but it’s still somewhat handy.

What’s Coming in OpenStack Networking for the Kilo Release

Originally posted on Red Hat Stack:

OpenStack Kilo, the 11th release of the open source project, was officially released in April, and now is a good time to review some of the changes we saw in the OpenStack Networking (Neutron) community during this cycle, as well as some of the key new networking features introduced in the project.

Scaling the Neutron development community

The Kilo cycle brings two major efforts that are meant to better expand and scale the Neutron development community: the core plugin decomposition and the advanced services split. These changes should not directly impact OpenStack users but are expected to reduce the code footprint, improve feature velocity, and ultimately bring faster innovation. Let’s take a look at each individually:

Neutron core plugin decomposition

Neutron, by design, has a pluggable architecture which offers a custom backend implementation of the Networking API. The plugin is a core piece of the deployment and acts as the “glue”…


Introduction to the New RHEL OSP 7 Installer – RHEL OSP director


The downstream version of RDO-Manager, the installer that currently ships with RDO, will be called RHEL OSP director when it is released as part of RHEL OSP 7 (OpenStack Kilo).

OSP director (note the lowercase d in director) will not only install RHEL OSP 7; it will also handle updating and upgrading, as well as lifecycle management.

These three additional functionalities are not currently available in the RHEL OSP 6 Foreman/Staypuft installer.

Below is a nice demo of the upstream version of the installer.