Asus RT-AC66U – Installing the ipkg Command


Note: The RT-AC66U and the RT-AC66R are identical other than their product number. The RT-AC66R is the model sold through Best Buy, and the RT-AC66U is the model sold directly by ASUS.

ipkg is a CLI utility used for package management. You will need ipkg installed if you want to configure your router for SNMP, install BIND, or view I/O to a USB disk, since those tasks require software packages that are not included in the stock firmware.

Note: I am running the ASUSWRT-Merlin firmware, which you can find here. The stock firmware may be a bit different, so you might not see exactly the same screens as shown below.

Prerequisite: USB Drive

To install ipkg you will need to install Download Master. To install Download Master you will need a USB drive plugged into the router and mounted. In the image below you can see my USB disk, labeled as “SMI USB DISK”.

[Screenshot: USB Applications page showing the mounted USB disk, “SMI USB DISK”]

Troubleshooting USB Drives

I had a bit of an issue here as I was trying to mount a drive that was formatted as EXT4. Apparently this was not supported. See error below.

EXT3-fs: sda1: couldn't mount because of unsupported optional features (240).
EXT2-fs: sda1: couldn't mount because of unsupported optional features (240).

I pulled the drive, reformatted it as EXT3, and was off and running.

usb 1-1.1: USB disconnect, address 4
usb 1-1.2: new high speed USB device using ehci_hcd and address 5
usb 1-1.2: configuration #1 chosen from 1 choice
scsi2 : SCSI emulation for USB Mass Storage devices
scsi 2:0:0:0: Direct-Access SMI USB DISK 1100 PQ: 0 ANSI: 4
sd 2:0:0:0: [sda] 8196096 512-byte hardware sectors (4196 MB)
sd 2:0:0:0: [sda] Write Protect is off
sd 2:0:0:0: [sda] Mode Sense: 43 00 00 00
sd 2:0:0:0: [sda] Assuming drive cache: write through
sd 2:0:0:0: [sda] Assuming drive cache: write through
sda: sda1
sd 2:0:0:0: [sda] Attached SCSI removable disk
sd 2:0:0:0: Attached scsi generic sg0 type 0

You can find the list of supported formats and partition sizes here. I would suggest EXT3. NTFS and FAT32 are supported, but I have heard that you might run into issues.
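If you need to reformat the drive as EXT3 from a Linux box, a minimal sketch looks like the commands below. The device name /dev/sdb1 and the label are only examples; double-check the device with lsblk or fdisk -l first, since mkfs wipes the partition.

# umount /dev/sdb1
# mkfs.ext3 -L ASUSDISK /dev/sdb1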

Installing Download Master

Now that you have your USB disk mounted you can install Download Master.

Select USB Applications on the left pane. Then click on Download Master. In the example below, you can see that I have already performed the install.

[Screenshot: USB Application page with the Download Master install option]

You should see the screen below once Download Master is installed.

[Screenshot: Download Master installed]

Accessing the Router via the CLI

Now you can either telnet or SSH to your router using its LAN IP address. Note that telnet is the default protocol; however, you can enable SSH by clicking on “Administration” in the lower left pane and then clicking the “System” tab. Under “SSH Daemon”, select “Enable SSH”.

[Screenshot: Administration > System tab with the SSH Daemon setting]
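For example, assuming the router is still at its default LAN address of 192.168.1.1 and the default admin user (substitute your own):

# ssh admin@192.168.1.1

Or, if you are sticking with telnet:

# telnet 192.168.1.1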

The ipkg binary installs to the path shown below.

# which ipkg
/opt/bin/ipkg

A guide to using ipkg can be found here.
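As a quick sanity check, the usual workflow is to refresh the package lists and then search for and install what you need. The package name below is just an example:

# ipkg update
# ipkg list | grep snmp
# ipkg install net-snmp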


Introduction to Managing OVS Bridges


Open vSwitch is an open source software switch designed specifically for use in virtualized environments such as OpenStack or RHEV-H. OVS (Open vSwitch) makes it easier to manage, configure, and monitor virtual traffic within a virtualized environment.

Below is Part 1 of what I suspect will be a multi-part series on configuring, viewing, and managing your virtualized network via OVS.

Viewing OVS Bridges

To view the OVS bridges configured on a system, use the ovs-vsctl command as shown below. On my test system, there are three configured bridges: br-ex, br-int, and br-tun.

# ovs-vsctl show
b5aa3cf4-d962-4cb2-b3b6-20f0b4858f59
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth0"
            Interface "eth0"
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    Bridge br-int
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.4.0"

The output from the command above is rather verbose. If you prefer more terse output, try the command below. In this instance the only output is the names of our bridges.
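For reference, ovs-vsctl list-br produces exactly that sort of terse output; on the test system above it prints just the bridge names.

# ovs-vsctl list-br
br-ex
br-int
br-tun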

Continue reading

Red Hat Openstack & Big Cloud Fabric Introduction



Big Cloud Fabric is an SDN solution from Big Switch Networks designed to integrate with OpenStack (or VMware).

On the OpenStack side, BCF (Big Cloud Fabric) integrates directly into OpenStack Neutron by way of a plugin. BCF supports L2/L3 networking and L4-7 service insertion, and runs on whitebox or britebox hardware.

Below are a couple of videos that will give you a high-level view of the solution and let you see it in action.

Red Hat & Big Switch: Integrated OpenStack Solution for Simplified Cloud Deployment:

Webinar – Unified P+V Networking for OpenStack Clouds:


You can also access the Big Switch Labs and play around with the technology via the link below. Note that you will have to provide an email address.

http://labs.bigswitch.com/home


RHEV 3.5: Recovering from a Catastrophic Host Failure



A while ago, I needed to tear down one of the nodes in my RHEV cluster, as I wanted to re-purpose it as a RHEL7 host. A few hours before I planned to shut it down, I logged into my RHEV-M console and put the node in “Maintenance” status.

Later, when I came home, I powered off the node and rebuilt it, not once checking to ensure that the VMs running on the host had migrated properly. They had not, and this was a problem. I had unknowingly fubared my cluster.

When I logged back into RHEV-M, I found several VMs with a status of “?” or “Unknown State”, and a physical host that I was unable to remove from the cluster, despite the fact that it had already been rebuilt.

So now what….

Fix the VMs first…

There were several VMs that were running on the now retired RHEV-H host when the server was powered off. I needed to fix these before I could remove the host. I tried powering them off and on, and migrating them from the WebUI, but none of this worked, so I had to delete them manually from the RHEV-M database. Note that it's possible I could have figured out how to save the VMs instead of deleting them, but this was not a priority since this is my home lab environment.

First, let's SSH to the RHEV-M server and log into the database.

# source /etc/ovirt-engine/engine.conf.d/10-setup-database.conf
# export PGPASSWORD=$ENGINE_DB_PASSWORD
# psql -h localhost -U engine engine

Now we figure out the vm_guid for each VM. Below I am starting with my logging server (log.lab.localdomain).

The first query looks up the VM's GUID; the update that follows tells RHEV to mark the VM as powered off (status 0). Nothing here will delete your VM.

engine=> select vm_guid from vm_static where vm_name = 'log.lab.localdomain';
               vm_guid
--------------------------------------
 f2c43e33-fd02-4b74-b86c-9e9ff9b8c51b
(1 row)

engine=> update vm_dynamic set status = 0 where vm_guid = 'f2c43e33-fd02-4b74-b86c-9e9ff9b8c51b';
UPDATE 1

I needed to run through this process a total of four times, once for each VM that was orphaned, scared, and alone.
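If you have several VMs to clean up, the GUID lookup and the status update can also be combined into a single statement per VM (same tables as above; substitute your own vm_name):

engine=> update vm_dynamic set status = 0 where vm_guid in (select vm_guid from vm_static where vm_name = 'log.lab.localdomain');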

Now put the node in Maintenance Mode…

First, open up another SSH session to your RHEV-M host and stop JBoss.

# service jbossas stop

Now back in your other window, which should still be connected to the engine database….

My node is named titan.lab.localdomain – we need to get its vds_id.

engine=# select vds_id, storage_pool_name from vds where vds_name = 'titan.lab.localdomain';
                vds_id                | storage_pool_name
--------------------------------------+-------------------
 e59198b0-fc75-4c5d-b31e-1ab639a1f708 | Auburn_Datacenter
(1 row)

Now let's put it in maintenance mode.

engine=# update vds_dynamic set status = 4 where vds_id = 'e59198b0-fc75-4c5d-b31e-1ab639a1f708';
UPDATE 1

Now let's start JBoss in our other window.

# service jbossas start

At this point I was able to log back into the WebUI and remove the host.

HOWTO: OpenStack Deployment using TripleO and the Red Hat OpenStack Director

Keith Tenzer


Overview

In this article we will look at how to deploy an OpenStack cloud using TripleO, the upstream project from the Red Hat OpenStack Director. Regardless of what OpenStack distribution you are using OpenStack is essentially OpenStack. Everyone has the same code-base to work with. The main differences between distributions are around what OpenStack projects are part of distribution, how it is supported and the deployment of the distribution. Every distribution has their own OpenStack deployment tool. Clearly deployments differ as they are based on support decisions each distribution makes. However many distributions have created their own proprietary installers. Shouldn’t the OpenStack community unite around a common installer? What would be better than using OpenStack to deploy OpenStack? Why should OpenStack administrators have to learn separate proprietary tooling? Why should we be creating unnecessary vendor lock-in for OpenStack’s deployment tooling? Installing OpenStack is one thing but what about upgrade and…

View original post 2,368 more words

OpenStack Ironic: Manual Node Tagging and Mapping


Some of the OpenStack director guides that I have seen and read suggest using the AHC (Automated Health Check) tools to perform role matching, which matches Ironic nodes to the most appropriate roles (flavors) based on their CPU, memory, and disk configurations.

However, if you prefer to manually assign a role to a node, follow the steps outlined below.

Creating Flavors

Before we can manually assign a node to a role, we must first create the corresponding flavor. In this example we are going to create 4 flavors.

First, let's create the flavor that will be used for our OpenStack Overcloud Controllers.

# openstack flavor create --id auto --ram 8192 --disk 26 --vcpus 4 control
+----------------------------+--------------------------------------+
| Field                      | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| disk                       | 26                                   |
| id                         | 4007f8a4-6ea9-400e-9142-6926f0538b29 |
| name                       | control                              |
| os-flavor-access:is_public | True                                 |
| ram                        | 8192                                 |
| rxtx_factor                | 1.0                                  |
| swap                       |                                      |
| vcpus                      | 4                                    |
+----------------------------+--------------------------------------+

Now let's create the flavor that will be used for our OpenStack Overcloud Compute Nodes.

# openstack flavor create --id auto --ram 6144 --disk 18 --vcpus 2 compute
+----------------------------+--------------------------------------+
| Field                      | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| disk                       | 18                                   |
| id                         | 60d05417-d2ba-4f62-a37c-1bafbfaec30a |
| name                       | compute                              |
| os-flavor-access:is_public | True                                 |
| ram                        | 6144                                 |
| rxtx_factor                | 1.0                                  |
| swap                       |                                      |
| vcpus                      | 2                                    |
+----------------------------+--------------------------------------+

Since our POC deployment will also include Ceph nodes, we also need to create a Ceph role.

# openstack flavor create --id auto --ram 4096 --disk 18 --vcpus 2 ceph
+----------------------------+--------------------------------------+
| Field                      | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| disk                       | 18                                   |
| id                         | 5480e8e6-f895-4870-b4dc-e5fff5a32911 |
| name                       | ceph                                 |
| os-flavor-access:is_public | True                                 |
| ram                        | 4096                                 |
| rxtx_factor                | 1.0                                  |
| swap                       |                                      |
| vcpus                      | 2                                    |
+----------------------------+--------------------------------------+

Important Note: You need to have a baremetal role created; otherwise, your deployment will fail. Use the command below to create it.

# openstack flavor create --id auto --ram 8192 --disk 58 --vcpus 4 baremetal

Now let's check our work with the command below. You should see your 4 flavors listed (baremetal, ceph, compute, and control).

# openstack flavor list
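From here, the manual mapping itself boils down to tagging each flavor and each Ironic node with a matching profile capability. A rough sketch is below; the node UUID and the control profile name are placeholders tied to the flavor example above:

# openstack flavor set --property "capabilities:boot_option"="local" --property "capabilities:profile"="control" control
# ironic node-update <node-uuid> add properties/capabilities='profile:control,boot_option:local'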

Continue reading

OpenStack director Overcloud Image Build Using the RHN Portal


Most of the documentation that I see on the subject of building your own Overcloud images uses repos that are not available to most users. Today, I am going to document two additional scenarios: one using a local Satellite server, and the other using the Red Hat (RHN) Portal.

RHEL Portal Registration

Here are the steps to use if you want your Overcloud machines to register to the Red Hat portal.

First, export a bunch of environment variables.

export NODE_DIST=rhel7
export DIB_LOCAL_IMAGE=rhel7-guest.qcow2
export USE_DELOREAN_TRUNK=0
export REG_METHOD=portal
export REG_USER="RHN-USERID"
export REG_PASSWORD='<RHN-PASSWORD>'
# Find this with `sudo subscription-manager list --available`
export REG_POOL_ID="[pool id]"
export REG_REPOS="rhel-7-server-rpms \
rhel-7-server-extras-rpms \
rhel-ha-for-rhel-7-server-rpms \
rhel-7-server-optional-rpms \
rhel-7-server-openstack-7.0-rpms"

Now start your build. It should take about 30 minutes, so be patient.

# time openstack overcloud image build --all 2>&1 | tee openstack_image_build.log
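When the build completes, the images should land in your working directory. Assuming the standard overcloud-full naming, a quick check looks like this:

# ls -lh overcloud-full.*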


RHEL Satellite Registration

When registering to a local Satellite server, use the format below. Note that you must use an activation key, as user ID and password authentication is not supported for security reasons.

export REG_METHOD=satellite
# REG_SAT_URL should be in the format of:
# http://<satellite-hostname>
export REG_SAT_URL="[satellite url]"
export REG_ORG="[satellite org]"
# Activation key must enable these repos:
# rhel-7-server-rpms
# rhel-7-server-optional-rpms
# rhel-7-server-extras-rpms
# rhel-7-server-openstack-7.0-rpms
export REG_ACTIVATION_KEY="[activation key]"

Now start your build. Remember to be patient.

# time openstack overcloud image build --all 2>&1 | tee openstack_image_build.log