Asus RT-AC66: Cannot Login Unless Logout Another User First


When attempting to log in to your router’s WebUI, you may run into the error below.

cannot Login unless logout another user first.

This error is usually caused by improperly closing the router’s WebUI, which can occur if you lose connectivity while logged in or if your browser crashes.

Fortunately, the fix is simple, assuming you have SSH enabled.

First, SSH into your router and run the two commands shown below.

admin@RT-AC66U:/tmp/home/root# nvram unset login_ip
admin@RT-AC66U:/tmp/home/root# nvram commit
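
The login_ip nvram variable appears to store the IP address of the client that the router believes is still logged in, so unsetting it clears the stale session. If you want to double-check, you can query the variable as shown below; after the unset it should return nothing.

admin@RT-AC66U:/tmp/home/root# nvram get login_ip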


Now, attempt to log in via the WebUI. You should be able to log in without issue.

If you do not have SSH or telnet enabled, you can just power cycle your router.


Home Assistant: Checking configuration.yaml for errors


See instructions below on how to verify your configuration.yaml.

SSH into your Home Assistant host. In my environment, I am running Home Assistant on a Raspberry Pi 2.

First, I SSH in as root. I have manually set the root password and enabled root logins via /etc/ssh/sshd_config. I have also added a static DNS entry (ha) for my Raspberry Pi 2.

$ ssh root@ha

Next, su to the homeassistant user.

root@ha:/var/log# su - homeassistant

Then run the command below.

homeassistant@ha:~ $ /srv/homeassistant/bin/hass --script check_config
Testing configuration at /home/homeassistant/.homeassistant

If you run into an error, you should see output similar to the example below.

homeassistant@ha:~ $ /srv/homeassistant/bin/hass --script check_config
Testing configuration at /home/homeassistant/.homeassistant
17-02-27 18:20:43 ERROR (MainThread) [homeassistant.bootstrap] Invalid config for [media_player.directv]: required key not provided @ data['host']. Got None. (See ?, line ?). Please check the docs at https://home-assistant.io/components/media_player.directv/
Failed config
media_player.directv:
platform: directv

In the example above, there is an issue with my directv config: the required host key was not provided. I will need to troubleshoot.
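
The error message itself points to the fix: the directv platform requires a host entry. Below is a minimal sketch of the corrected block in configuration.yaml, assuming a hypothetical receiver address of 192.168.1.50.

media_player:
  - platform: directv
    host: 192.168.1.50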


OpenStack: RabbitMQ Cannot Join Cluster, Already a Member

rabbitmq-sh-600x600

You can run into this error when attempting to join a node into a RabbitMQ cluster that believes the node is already a member. I have run into this issue a few times; it is usually seen when attempting to recover from the crash of an OpenStack controller.

Below are the steps to resolve the issue.

The error below is seen when attempting to add a node back into the cluster.

INFO REPORT==== 27-Jan-2017::16:57:22 ===
Already member of cluster: [rabbit@nodectrl2,rabbit@nodectrl1,
rabbit@nodectrl0]

We check the cluster status for confirmation.

[root@nodectrl1 rabbitmq]# rabbitmqctl cluster_status
Cluster status of node rabbit@nodectrl1 ...
[{nodes,[{disc,[rabbit@nodectrl0,rabbit@nodectrl1,
rabbit@nodectrl2]}]},
{running_nodes,[rabbit@nodectrl2,rabbit@nodectrl1]},
{cluster_name,<<"rabbit@nodectrl0.localdomain">>},
{partitions,[]},
{alarms,[{rabbit@nodectrl2,[]},{rabbit@nodectrl1,[]}]}]

Note that rabbit@nodectrl0 is listed as a disc node but is missing from running_nodes. Now we force the cluster to forget the affected node.

[root@nodectrl1 rabbitmq]# rabbitmqctl forget_cluster_node rabbit@nodectrl0
Removing node rabbit@nodectrl0 from cluster ...

We then check the cluster status to ensure that the node has been removed from the cluster.

[root@nodectrl1 rabbitmq]# rabbitmqctl cluster_status

Cluster status of node rabbit@nodectrl1 ...
[{nodes,[{disc,[rabbit@nodectrl1,rabbit@nodectrl2]}]},
{running_nodes,[rabbit@nodectrl2,rabbit@nodectrl1]},
{cluster_name,<<"rabbit@nodectrl0.localdomain">>},
{partitions,[]},
{alarms,[{rabbit@nodectrl2,[]},{rabbit@nodectrl1,[]}]}]

We can now add our node back into the cluster. Note that join_cluster must be run on the node being re-added (nodectrl0 in this case), pointing at a surviving cluster member.

[root@nodectrl0 ~]# rabbitmqctl join_cluster rabbit@nodectrl1
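
RabbitMQ will refuse the join if the application is still running on the node, so the join is typically wrapped with stop_app and start_app. A sketch of the full sequence on the recovered node, assuming the same hostnames as above:

# Stop the RabbitMQ application (the Erlang node keeps running)
[root@nodectrl0 ~]# rabbitmqctl stop_app
# Join the cluster via any surviving member
[root@nodectrl0 ~]# rabbitmqctl join_cluster rabbit@nodectrl1
# Start the application back up and verify cluster membership
[root@nodectrl0 ~]# rabbitmqctl start_app
[root@nodectrl0 ~]# rabbitmqctl cluster_status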

OpenStack: instackenv.json Format Example


Here is a quick and dirty example of the format of your instackenv.json. This is the file that Ironic uses to import nodes.

Enter your IPMI user ID under "pm_user".

Enter your IPMI password under "pm_password".


{
"nodes":[
    {
        "mac":[
            "74:E6:E2:FB:71:B0"
        ],
        "cpu":"4",
        "memory":"6144",
        "disk":"40",
        "arch":"x86_64",
        "name":"control01",
        "pm_type":"pxe_ipmitool",
        "pm_user":"admin",
        "pm_password":"admin",
        "pm_addr":"10.75.99.120"
    },
    {
        "mac":[
            "74:E6:E2:FB:71:D6"
        ],
        "cpu":"4",
        "memory":"6144",
        "disk":"40",
        "arch":"x86_64",
        "name":"control02",
        "pm_type":"pxe_ipmitool",
        "pm_user":"admin",
        "pm_password":"admin",
        "pm_addr":"10.75.99.119"
    },
    {
        "mac":[
            "74:E6:E2:FB:73:D0"
        ],
        "cpu":"4",
        "memory":"6144",
        "disk":"40",
        "arch":"x86_64",
        "name":"control03",
        "pm_type":"pxe_ipmitool",
        "pm_user":"admin",
        "pm_password":"admin",
        "pm_addr":"10.75.99.118"
    },
    {
        "mac":[
            "74:E6:E2:FB:27:D4"
        ],
        "cpu":"4",
        "memory":"6144",
        "disk":"40",
        "arch":"x86_64",
        "name":"compute01",
        "pm_type":"pxe_ipmitool",
        "pm_user":"admin",
        "pm_password":"admin",
        "pm_addr":"10.75.99.117"
    }
]
}
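
Once the file is in place, the nodes can be imported on the undercloud. On an OSP-director deployment of this vintage, the import looks something like the sketch below, run as the stack user:

$ source ~/stackrc
$ openstack baremetal import --json instackenv.json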

OpenStack: How to Rebuild Ironic’s Database


Move the current ironic-inspector database to a backup location for safekeeping.

# mv /var/lib/ironic-inspector/inspector.sqlite /root/

Restart the Ironic services. Here we are restarting all of them to be safe.

# for service in `systemctl list-unit-files | grep ironic | awk '{print $1}'`; do systemctl restart $service; done

Now rebuild the database.

# ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
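
The upgrade step recreates the sqlite database from scratch. To confirm, check that a fresh inspector.sqlite now exists:

# ls -l /var/lib/ironic-inspector/inspector.sqlite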

How to Resize a qcow2 Image and Filesystem with Virt-Resize


In this post, I am going to quickly document the process I used to grow a qcow2 disk image and increase the size of the underlying filesystem.

In this instance, I ran out of disk space on my virtual disk named undercloud.qcow2. Apparently, the underlying disk image that I was using was too small for my needs.

Below, I have used qemu-img to inspect the disk size. This disk is only 10G in size.

# qemu-img info undercloud.qcow2
image: undercloud.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 9.8G
cluster_size: 65536


Let’s add 20G to the disk. Note, the VM must be powered down before proceeding.

# qemu-img resize undercloud.qcow2 +20G
Image resized.

Now we need to resize the underlying filesystem using virt-resize. Note, however, that virt-resize cannot resize disk images in place, so we make a backup copy, use the backup copy of the qcow as input, and use the original qcow as output. See the example below.

First, we make a backup copy of the disk as shown below.

# cp undercloud.qcow2 undercloud-orig.qcow2


Then we run the command below to grow /dev/sda1.

# virt-resize --expand /dev/sda1 undercloud-orig.qcow2 undercloud.qcow2

Output shown below:

virt-resize: warning: unknown/unavailable method for expanding the xfs
filesystem on /dev/sda1
/dev/sda1: This partition will be resized from 10.0G to 30.0G.
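
Per the warning above, virt-resize grew the partition but could not expand the xfs filesystem itself. If the filesystem still reports its old size after booting the VM, it can be grown from inside the guest. A quick sketch, assuming the root filesystem is mounted at /:

# xfs_growfs /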

We now inspect the new disk.

# qemu-img info undercloud.qcow2
image: undercloud.qcow2
file format: qcow2
virtual size: 30G (32212254720 bytes)
disk size: 9.4G
cluster_size: 65536
Format specific information:
compat: 0.10

Finally, we verify that the filesystems have grown.

# virt-filesystems --long -h --all -a undercloud.qcow2
Name       Type        VFS  Label  MBR  Size  Parent
/dev/sda1  filesystem  xfs  -      -    30G   -
/dev/sda1  partition   -    -      83   30G   /dev/sda
/dev/sda   device      -    -      -    30G   -

We can now boot our Virtual Machine.

Red Hat OpenStack 8: Making your Undercloud Immutable


Introduction

This article will show you how to block the overcloud from being deleted.

Blocking Users from Deleting the Overcloud Stack

First, make a backup copy of /etc/heat/policy.json.

$ sudo cp /etc/heat/policy.json /etc/heat/policy.json.orig

Run the command below to see the default stacks:delete policy.

$ sudo grep -m1 stacks:delete /etc/heat/policy.json
"stacks:delete": "rule:deny_stack_user",

Then, change the policy so that anyone and everyone is denied from removing the stack, even an admin.

Note that this means the policy will have to be reverted to its original configuration before the stack can be deleted in the future. See the sed command below.

$ sudo sed -i '/stacks:delete/ {s/rule:.*/rule:deny_everybody",/}' /etc/heat/policy.json

Verify your changes.

$ sudo grep -m1 stacks:delete /etc/heat/policy.json
"stacks:delete": "rule:deny_everybody",

Blocking Users from Deleting Nova Instances

In addition to blocking users from accidentally deleting your overcloud stack via Heat, you should also block the accidental deletion of the overcloud nodes via Nova.

First, run the command below to make a backup of /etc/nova/policy.json.

$ sudo cp /etc/nova/policy.json /etc/nova/policy.json.orig

Run the command below to see the default compute:delete policy.

$ sudo grep compute:delete /etc/nova/policy.json
"compute:delete": "rule:admin_or_owner",

Now let’s change the policy so that anyone and everyone is blocked from deleting an instance (and with it, an overcloud node).

$ sudo sed -i '/compute:delete/ {s/rule:.*/rule:deny_everybody",/}' /etc/nova/policy.json

Now we can verify our changes.

$ sudo grep compute:delete /etc/nova/policy.json
"compute:delete": "rule:deny_everybody",
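
If you later need to make the undercloud mutable again, for example to tear down or redeploy the overcloud, simply restore the backups taken above:

$ sudo cp /etc/heat/policy.json.orig /etc/heat/policy.json
$ sudo cp /etc/nova/policy.json.orig /etc/nova/policy.json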