How to show Ethernet link speed on Asus AiMesh routers via SSH. While you can see the link speeds via the Web UI for the main AiMesh router, you cannot, as far as I can tell, find the same info for each of your AiMesh nodes. Normally on a Linux system I would use ethtool to show this kind of info, but we are running ASUSWRT and do not want to have to install any additional packages.
Enter robocfg
The command you are looking for is “robocfg”.
Below is an example of the output from one of my AiMesh nodes (not the primary). Specifically, I am looking at the connection speed on Port 0, as this is my backhaul network.
Command and output below.
admin@RT-AC68U-CA00:/tmp/home/root# robocfg show
Switch: enabled
Port 0: 1000FD enabled stp: none vlan: 2 jumbo: off mac: f8:0f:f9:98:a3:1e
Port 1: 100FD enabled stp: none vlan: 1 jumbo: off mac: 9c:8e:cd:09:f6:1b
Port 2: 100FD enabled stp: none vlan: 1 jumbo: off mac: 9c:8e:cd:13:32:5f
Port 3: 100FD enabled stp: none vlan: 1 jumbo: off mac: 9c:8e:cd:12:32:29
Port 4: DOWN enabled stp: none vlan: 1 jumbo: off mac: 00:00:00:00:00:00
Port 5: 1000FD enabled stp: none vlan: 1 jumbo: off mac: 68:1d:ef:21:d3:9b
Port 7: DOWN enabled stp: none vlan: 1 jumbo: off mac: 00:00:00:00:00:00
Port 8: DOWN enabled stp: none vlan: 1 jumbo: off mac: 00:00:00:00:00:00
VLANs: BCM5301x enabled mac_check mac_hash
1: vlan1: 1 2 3 4 5t
2: vlan2: 0 5t
1045: vlan1045: 0 2 3t 7 8t
1046: vlan1046: 1 3 4t 5 7
1047: vlan1047: 0t 4t 7
1099: vlan1099: 3t 4
1100: vlan1100: 5t 8u
1101: vlan1101: 0t 3 4 7 8u
1102: vlan1102: 1 2 3t 4t 5t 7t
1103: vlan1103: 1 3t 4 5
What else does robocfg do?
It appears that you can also use robocfg to set link speeds and enable or disable ports; a rough sketch is shown below.
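As a rough sketch only (I have not needed this myself, and the exact syntax can vary between firmware builds, so run robocfg with no arguments to see the usage text for your router), forcing a port's speed or disabling a port looks something like this:
robocfg port 1 media 100FD       # force Port 1 to 100 Mbps full duplex (assumed syntax)
robocfg port 1 state disabled    # administratively disable Port 1 (assumed syntax)
robocfg port 1 state enabled     # re-enable Port 1 (assumed syntax)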
This post will show you how to find the dynamic IP address assigned to your AiMesh node, and then test connectivity via ping.
I am currently using 3 Asus AiMesh nodes (similar to what is shown above) in my home network to provide decent wireless coverage. With so many devices using wireless these days, you are bound to run into a number of devices that just have terrible Wi-Fi radios and are therefore difficult to keep online unless they are right next to the wireless router.
I recently experienced an issue with poor connectivity between nodes. This was mostly because I was not using a wired backhaul between nodes to connect the “mesh”; rather, I was using 5 GHz Wi-Fi to connect the nodes. This is not optimal, so I purchased a couple of Powerline Ethernet adapters to “wire” the backhaul.
Note: my main router is an RT-AC68R. I am currently running firmware version 3.0.0.4.386_43129, directly from Asus; I have not explored using Merlin with my AiMesh routers.
Once you connect an existing router to your main AiMesh router as a node, it appears that any static IP address assigned to that device is removed and a new address is assigned dynamically. Finding that IP is the challenge, as you need it to test connectivity.
Finding IP addresses of AiMesh Nodes
Select Network Map, if this is not the default screen you see after login.
Click the number next to “AiMesh Node”. I have two additional nodes, so the number I see is “2”.
You will now see your AiMesh nodes in the right pane.
Now select each node individually by clicking on them. This will result in a pop-up window that will show you the IP of each node. In the example below, the IP address of my node in the basement is 192.168.0.164.
Testing Connectivity via Ping
Select Network Tools under Advanced Settings in the left pane.
Using Ping as the method, enter your AiMesh node IP address as the target and set the count; I suggest at least 10 packets. Then mash Diagnose. This will show you the time a packet and its response take to travel to the selected IP and back again.
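If you would rather test from the command line, you can run the same check over SSH from the main router or any Linux host on the LAN. A minimal example, using the basement node IP from above:
ping -c 10 192.168.0.164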
Late last year I wrote a blog post that outlined a process for injecting deprecated storage controller drivers into RHEL 8 via a Driver Update Disk. In a nutshell, this process allows you to install RHEL 8 on the R710/R610 (11th Gen).
This process worked fine, unless you wanted to run yum update on your server. Rather than attempt to find a repeatable process for injecting drivers prior to updating, I decided to upgrade my RAID controller to one that is supported.
My 11th generation R710 came with a PERC H700, identified below.
[root@virt01 ~]# lspci -knn | grep 'RAID bus controller'
03:00.0 RAID bus controller [0104]: Broadcom / LSI MegaRAID SAS 2108 [Liberator] [1000:0079] (rev 05)
Based on feedback from friends and co-workers, I purchased a PERC H330, which I believe ships with the 13th Gen Dell rackmount servers.
These cards can be found pretty cheap on eBay.
Plus, you will need 2 new cables. You are looking for SFF-8643 to SFF-8087 (mini SAS HD to mini SAS).
Output from one of my Dell R710s. Note I did not remove the original card.
[{ocptest}root@virt02 ~]# lspci -knn | grep 'RAID bus controller'
03:00.0 RAID bus controller [0104]: Broadcom / LSI MegaRAID SAS 2108 [Liberator] [1000:0079] (rev 05)
05:00.0 RAID bus controller [0104]: Broadcom / LSI MegaRAID SAS-3 3108 [Invader] [1000:005d] (rev 02)
Afterthoughts
I’ve only performed this swap on one of my R710s; I have 2 more, as well as 2 R610s. Below are a few things to consider if you are looking to perform a similar upgrade.
Take a hard look at your existing server. Do you need to purchase a full-sized PCIe card or a mini card? The mini card is likely what you need on the 1U servers.
Remove the old card. The server does not seem to like having a RAID controller installed without cables.
If you buy a full-sized PCIe card, make sure you get one with the bracket that you need (full-height or low-profile).
You may want to find a cable with 90-degree connectors for the backplane. Mine fit fine on the R710, but I have heard that the 90-degree connector is best for 1U servers.
Once the initial baremetal OpenShift 4.6 IPI install is complete, you will see output similar to what is shown below.
This output contains the URL for the OpenShift WebUI along with the “kubeadmin” password. You will be able to log into the WebUI using these credentials.
If you forget your kubeadmin login (which you will, because it’s ugly), you will be able to find it in the auth directory.
Upon initial login to the WebUI, you will see a blue bar near the top of the page, as shown below.
Adding a New User with Admin Rights
Bottom line, the OpenShift “kubeadmin” password is not user friendly. In my lab, I do not want to have to look it up each time I access the WebUI. I need simple, easy-to-remember credentials.
So I am going to create a new admin user. In the example below I am creating a user named ‘admin‘ with the password of ‘admin‘.
I am working in a homelab, so security is not exactly a priority.
{ocp}[kni@bastion ~]$ htpasswd -c -B -b users.htpasswd admin admin
Adding password for user admin
Now I define a secret which uses the HTPasswd user file as shown below.
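A sketch of that step, assuming the secret name htpass-secret used in the OpenShift documentation (adjust the name and file path to match your environment):
{ocp}[kni@bastion ~]$ oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd -n openshift-config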
Next, I create a custom resource (htpasswd.cr) that defines the use of the HTPasswd identity provider. This file is straight out of the user doc and is unmodified.
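For reference, the contents of that custom resource look like the following. The provider name my_htpasswd_provider is what you will later select on the login page, and fileData.name must match the secret created above (htpass-secret is my assumption from the sketch earlier). Apply it with oc apply.
{ocp}[kni@bastion ~]$ cat htpasswd.cr
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
{ocp}[kni@bastion ~]$ oc apply -f htpasswd.cr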
Now let’s test your new ID and password on the CLI. If the step below fails, you will need to start all over again.
{ocp}[kni@bastion ~]$ oc login -u admin
Authentication required for https://api.ocp.lab.localdomain:6443 (openshift)
Username: admin
Password:
Login successful.
You have access to 59 projects, the list has been suppressed. You can list all projects with 'oc projects'
Using project "default".
Navigate to the URL above and select my_htpasswd_provider. Enter your new credentials. At this point you should be able to log in without issue. If not, work back through the process again.
Troubleshooting
If you do run into errors, delete the files that you created and run the command below to delete the secret.
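Assuming the secret was named htpass-secret as in the sketch above, the cleanup would look like this:
{ocp}[kni@bastion ~]$ oc delete secret htpass-secret -n openshift-config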
By default, your username for the console is kubeadmin; use the command below to obtain your password. Note, your cluster directory may not be clusterconfigs, however this is the default.
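A likely form of that command, assuming the default clusterconfigs directory under the install user's home:
[kni@bastion ~]$ cat ~/clusterconfigs/auth/kubeadmin-password
Show Cluster Nodes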
[kni@bastion ~]$ oc get nodes
NAME STATUS ROLES AGE VERSION
master-0 Ready master,worker 6d3h v1.19.0+9f84db3
master-1 Ready master,worker 6d3h v1.19.0+9f84db3
master-2 Ready master,worker 6d3h v1.19.0+9f84db3
Show baremetal hardware nodes.
Oops, I have errors.
[kni@bastion ~]$ oc get bmh -n openshift-machine-api
NAME STATUS PROVISIONING STATUS CONSUMER BMC HARDWARE PROFILE ONLINE ERROR
master-0 OK externally provisioned ocp-d9z87-master-0 ipmi://10.1.99.2:6230 true
master-1 OK externally provisioned ocp-d9z87-master-1 ipmi://10.1.99.2:6231 true
master-2 OK externally provisioned ocp-d9z87-master-2 ipmi://10.1.99.2:6232 true
worker-0 error registration error ipmi://10.1.99.2:623 true Failed to get power state for node c8105b63-3697-419c-9cab-fdfa9985411a. Error: IPMI call failed: power status.
worker-1 error inspecting ipmi://10.1.99.2:6234 true Introspection timeout
[kni@bastion ~]$ oc project
Using project "openshift-machine-api" on server "https://api.ocp.lab.localdomain:6443".
Create a New Project
[kni@bastion ~]$ oc new-project testproject
Now using project "testproject" on server "https://api.ocp.lab.localdomain:6443".
Change Current Project
If the project does not already exist, you will get an error.
[kni@bastion ~]$ oc project testproject
Now using project "testproject" on server "https://api.ocp.lab.localdomain:6443".
Show Status of Current Project
[kni@bastion ~]$ oc status
In project testproject on server https://api.ocp.lab.localdomain:6443
You have no services, deployment configs, or build configs.
Run 'oc new-app' to create an application.
[kni@bastion ~]$ oc get projects
NAME DISPLAY NAME STATUS
default Active
kube-node-lease Active
kube-public Active
kube-system Active
openshift Active
openshift-apiserver Active
openshift-apiserver-operator Active
--trunc--
Working with Pods
Show all Pods in a Particular Namespace
oc get pods shows you the pods in your current namespace. Use -n to specify a namespace by name. Use -A to show pods in all namespaces.
[kni@bastion ~]$ oc get pod -n openshift-dns
NAME READY STATUS RESTARTS AGE
dns-default-64gcs 3/3 Running 0 4d1h
dns-default-dttgz 3/3 Running 0 4d1h
dns-default-jwmdx 3/3 Running 0 4d1h
Wide listing of Pods in a Particular Namespace
-o wide allows you to see the pod IP address and the node where it is running.
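For example, using the same openshift-dns namespace as above:
[kni@bastion ~]$ oc get pod -n openshift-dns -o wide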
For better or worse, nmcli is pretty much the default mechanism for configuring networking on RHEL 8/CentOS 8. I cannot, as of yet, say I am a fan. However, I have been working with ifcfg files for years and held on to that method of network configuration until they were dragged from my cold and bloated hands.
Show Bridges
# nmcli con show
NAME UUID TYPE DEVICE
eno1 e875f081-1ef6-4995-bb12-0f5ff50f1095 ethernet eno1
provisioning 272457e0-8a6b-4e93-8902-f28c07f45cdc bridge provisioning
baremetal f6d0909c-9ee7-4aa6-8902-b8ce9171fa93 bridge baremetal
vlan98 1db123b9-0e43-453d-ba7b-757213ed377d vlan eno1.98
vlan99 9d5d886b-e9bd-45c3-8dc4-c958b73b232e vlan eno1.99
vnet0 73344ec4-37e4-487d-9a07-775ddab2b05e tun vnet0
vnet1 cb4622dc-7987-4410-8497-780f89195a2f tun vnet1
eno2 c1b3674d-cc63-414c-90bb-69e00759efd8 ethernet --
eno3 5f12b893-3333-4c8f-91c2-54512d7e7f09 ethernet --
eno4 4bbbcace-9ffe-43df-863a-2b19585d98ee ethernet --
Show a Bridge and Connected Interfaces
Format is as shown below.
nmcli -f GENERAL.DEVICE,GENERAL.TYPE,BRIDGE.SLAVES device show bridge_device_name
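For example, using the baremetal bridge from the output above:
# nmcli -f GENERAL.DEVICE,GENERAL.TYPE,BRIDGE.SLAVES device show baremetal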