In this article I am going to walk you through the steps necessary to configure your Asus RT-AC66U as a caching DNS server using BIND. According to Wikipedia: “Caching name servers (DNS caches) store DNS query results for a period of time determined in the configuration (time-to-live) of each domain-name record. DNS caches improve the efficiency of the DNS by reducing DNS traffic across the Internet, and by reducing load on authoritative name-servers, particularly root name-servers. Because they can answer questions more quickly, they also increase the performance of end-user applications that use the DNS. Recursive name servers resolve any query they receive, even if they are not authoritative for the question being asked, by consulting the server or servers that are authoritative for the question.”
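Once the cache is up, you can watch this time-to-live behavior yourself. Query the same name twice through the router (192.168.1.1 here is an assumed LAN address for the RT-AC66U) and compare the TTL column in the answers; the second, cached response will show a lower number counting down toward zero:
# dig @192.168.1.1 www.google.com +noall +answer
# dig @192.168.1.1 www.google.com +noall +answer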
As you may already know, the Asus RT-AC66U runs BusyBox, a very small but powerful set of Unix utilities that serves as the userland on many embedded Linux systems. Because of this, there are a lot of familiar commands available via the CLI. However, don't get too comfortable, as this is still a very foreign land.
Note that this article assumes that you have SSH or Telnet enabled and can log into your RT-AC66U via the CLI.
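Once you are logged in, you can get a quick inventory of this particular firmware's toolbox by running BusyBox with no arguments, which prints its version and the full list of applets it was built with:
# busybox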
So, I've been hacking away in my homelab as of late, building out a CentOS kickstart server, a Git server, and a Puppet server. Right now I am working out how to roll my Puppet agent installs into my kickstart process. I just started on this, so I have yet to nail it down.
So currently, when kicking a VM, I am not yet setting my new CentOS node's hostname before the install process. Sadly, I am setting it manually afterwards, as I am still building my kickstarts and they are nowhere near where I want them to be.
Well, this whole hostname mumbo-jumbo creates all sorts of issues for Puppet: the hostname is one thing initially, then Puppet installs as part of the post-install scripts, and the hostname is set manually to finalize the install. This is no good, as you are not going to be able to add your new node properly until you step in and provide a bit of manual persuasion.
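For what it's worth, the clean fix is to set the hostname in the kickstart itself, before Puppet ever runs. A minimal sketch, assuming a CentOS 6-style kickstart and a hypothetical hostname of node01.localdomain:
%post
# replace the default localhost.localdomain so the first puppet run
# generates a certificate with the correct name baked in
sed -i 's/^HOSTNAME=.*/HOSTNAME=node01.localdomain/' /etc/sysconfig/network
hostname node01.localdomain
%end
Since my kickstarts are not there yet, here is the mess I made in the meantime.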
Now, while it's not hard to find documentation on how to troubleshoot Puppet node and master certificate issues (see here and here, for example), none of it was written to help troubleshoot the mess that I had created.
Here was my specific error.
Error: Could not request certificate: The certificate retrieved from the master does not match the agent’s private key. Certificate fingerprint: BE:B6:B6:5E:AC:B8: ..truncated
And here, verbatim, is the output that you get in response to the error above.
To fix this, remove the certificate from both the master and the agent and then start a puppet run, which will automatically regenerate a certificate.
On the master:
puppet cert clean localhost.localdomain
On the agent:
rm -f /etc/puppetlabs/puppet/ssl/certs/localhost.localdomain.pem
puppet agent -t
So I tried that, and it didn't work. The next cert I generated identified my node as localhost again.
So here's how to actually fix the issue. First, blow away the node's entire SSL directory so that the keys and certs are all regenerated from scratch:
# rm -rf /etc/puppetlabs/puppet/ssl
Now, before we generate another certificate for our node, let's test what hostname a new cert would have, using the command below.
# puppet agent --verbose --configprint certname
If the command above does not spit out the correct hostname, then you, my friend, are in luck. Edit the file below:
# vi /etc/puppetlabs/puppet/puppet.conf
Now change the entry below by removing localhost.localdomain and replacing that mess with the correct hostname:
certname = correcthostname.localdomain
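For reference, the certname setting normally lives in the [main] or [agent] section of puppet.conf, so the surrounding stanza will look something like the following (the server entry and its hostname are assumptions for illustration):
[main]
certname = correcthostname.localdomain
server = puppetmaster.localdomain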
Now kick off a puppet run on the node:
# puppet agent -t
Log into the UI, or SSH into the Puppet master, and accept the new node request.
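If you would rather stay on the command line, the Puppet 3-era cert commands (the same family as the puppet cert clean used earlier) will list pending requests and sign yours:
# puppet cert list
# puppet cert sign correcthostname.localdomain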
Kick off another puppet run after you have accepted the request to seal the deal and update the new node properly.
Wow, this is an awfully complicated error for such a simple problem to resolve. Allow me to give you some background.
I am currently building my first production-ready (well, non-production really) XenServer cluster, and I ran into this issue when attempting to add my second host to the cluster. I hit Google and found out that this was actually just a DNS issue.
A quick check of /etc/resolv.conf on two of my nodes shows nothing but the following line.
; generated by /etc/sysconfig/network-scripts/ifup-post
Well, great. On a standard Linux box I would have just added my name server and been halfway to the bar, but judging by the contents of the resolv.conf, I figured that I was supposed to add it another way.
Well, after a bit of poking around in XenCenter, I found it. Click on the hostname of the XenServer, then click on the "Networking" tab, and from there click on "Configure…" below the "Management Interfaces" section. You will then be presented with a pop-up window where you can enter your nameservers.
Once you have configured DNS properly, you can then add the host to the cluster.
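Before you do, it's worth a quick sanity check from each host's console that it can now resolve its peers (the hostname below is a placeholder for one of your own nodes):
# nslookup xenserver02.localdomain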
Note that you can also do this from the command line; however, you basically have to reconfigure your entire management interface: IP, gateway, and everything that goes with it.
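If you want to go that route anyway, a rough sketch with the xe CLI follows (the UUID and addresses are placeholders, and be warned that reconfiguring the management PIF can momentarily drop your connection to the host):
# xe pif-list management=true
# xe pif-reconfigure-ip uuid=<pif-uuid> mode=static IP=10.0.0.12 netmask=255.255.255.0 gateway=10.0.0.1 DNS=10.0.0.53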
Seeing how fickle XenServer clustering is regarding DNS, it's probably not a bad idea to add /etc/hosts entries on your XenServer nodes for each server that will be in your cluster. You never know when DNS might go out to lunch, and you don't want your HA cluster affected.
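For example, something like this on every node, with addresses and names swapped for your own:
10.0.0.11   xenserver01.localdomain   xenserver01
10.0.0.12   xenserver02.localdomain   xenserver02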
For future reference, you can check all the configuration parameters of your management interface with the following commands.
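Something along these lines will do it; the first command finds the management PIF, and the second (fill in the UUID from the first command's output) dumps every parameter on it, including IP, netmask, gateway, and DNS:
# xe pif-list management=true
# xe pif-param-list uuid=<pif-uuid>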
First off, let's get this straight: all DNS servers cache. However, some DNS servers are intended to provide only the caching function, and that is what we are going to configure today.
A caching-only DNS server does not contain zone information or a zone database. Its cache contains only information based on the results of queries it has already performed; in this case, the cache takes the place of the zone database file for the lookups that you are already doing.
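To make that concrete, a minimal caching-only named.conf looks something like the following (the directory path, LAN subnet, and forwarder addresses are all assumptions to adjust for your own network). Note that there are no zone definitions at all:
options {
    directory "/var/named";                      // BIND's working directory
    recursion yes;                               // resolve recursive queries for our clients
    allow-query { localhost; 192.168.1.0/24; };  // answer only the local LAN
    forwarders { 8.8.8.8; 8.8.4.4; };            // optional upstream resolvers
};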