LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Linux - Server (http://www.linuxquestions.org/questions/linux-server-73/)
-   -   bind9.8.1 concerns after replication to an Ubuntu 12.04 LTS host (http://www.linuxquestions.org/questions/linux-server-73/bind9-8-1-concerns-after-replication-to-an-ubuntu-12-04-lts-host-4175451326/)

Habitual 02-22-2013 04:37 PM

bind9.8.1 concerns after replication to an Ubuntu 12.04 LTS host
 
Well, I told someone I would, so here it is.

My boss installed a server for a client on our grid and told me to replicate another bind9 host. Short version is I scp'd all the .hosts files over from the original server to the new one and bounced named. It seems to be doing the job. Those details are here...

and now for the good stuff...
Boss's answers are in red.
The only reply I personally have for "why x or y or z" is "because he's the Boss", so don't go there. :)
Security above all else.
He understands and shares my concerns so if Security deems a change, things could be different.

Q: Any particular reason for choosing Ubuntu LTS?
A: Because it's an LTS
Q: Did you make it a minimal OS installation?
A: It's a standard install.
Q: Any particular reason for choosing ISC BIND (PowerDNS, MaraDNS, Unbound, etc.)?
A: Because it's Bind and it's well known and it's free and it's stable
Q: Why, for deities' sake, are your NSes running Webmin? (You saw that question coming, right? ;-p) Of course
A: Because it's Webmin and it's well known and it's free and it's stable
Q: Do these machines have multiple Ethernet devices?
A: Yes. Two. eth0 is the public IP. eth1 is the non-routable IP and is used by our grid infrastructure. It should never be involved in any DNS for the domain.com.
Q: What tuning have you done so far? (Running iperf / Jperf is easy.)
A: I haven't done any myself.
Q: Same for hardening?
A: ssh-keys only!!!
Q: Are your NSes a mix of AWS instances and physical machines?
A: (You lost me on this one and I suppose some context is needed (by me) to understand the Q.) There is just the one physical machine on the new grid. ns1.dom.com is another physical host on another one of our grids. There are no AWS instances involved.
I suppose this new host is to be an alternate nsN.domain.com for future use. I am guessing now that this host will become an Authoritative name server in the future...?

/etc/bind/named.conf.options:
Code:

options {
    directory "/var/cache/bind";
    version NONE;
    recursion no;
    allow-transfer{none;};
    dnssec-validation auto;
    auth-nxdomain no;    # conform to RFC1035
    listen-on-v6 { any; };
    check-names master ignore;
    check-names slave ignore;
    check-names response ignore;
};

/etc/bind/named.conf:
Code:

include "/etc/bind/named.conf.options";
include "/etc/bind/named.conf.local";
include "/etc/bind/named.conf.default-zones";
key rndc-key {
    algorithm hmac-md5;
    secret "ou812ic";
    };
controls {
    inet 127.0.0.1 port 953 allow { 127.0.0.1; } keys { rndc-key; };
    };
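
(Side note: hmac-md5 is weak by modern standards; newer BIND / rndc-confgen builds can generate an hmac-sha256 key instead. A sketch only, with a placeholder secret, and the -A flag may not exist in older rndc-confgen versions:)
Code:

## e.g.: rndc-confgen -a -A hmac-sha256 -c /etc/bind/rndc.key
key "rndc-key" {
    algorithm hmac-sha256;
    secret "PLACEHOLDER-BASE64-KEY==";    ## generated key material goes here
};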

The only obvious difference I see on the "old" dns host is
Code:

forwarders {
                4.2.2.2;
                };

        auth-nxdomain no;    # conform to RFC1035
        listen-on-v6 { any; };
        forward first;

This is NOT present on the new host.

I started with this... (getting started/recipe-type/howto...)

My references are:
BIND9ServerHowto
Secure-and-Reliable-Authoritative-DNS-with-BIND
Name_server (wikipedia)

and I signed up at https://kb.isc.org
If any further information is needed, fire away.


Thank you for your time.

JJ

unSpawn 03-24-2013 06:57 AM

Quote:

Originally Posted by Habitual (Post 4897780)
Q: Any particular reason for choosing Ubuntu LTS?
A: Because it's an LTS


OK. In essence the choice of OS or Linux distribution only matters insofar as you require long-term support and as little software, configuration and maintenance as possible to accomplish the task.


Quote:

Originally Posted by Habitual (Post 4897780)
Q: Did you make it a minimal OS installation?
A: It's a standard install.


A name server roughly needs a stable kernel with as few modules as possible, Glibc, Syslog, iptables (you don't need UFW, for example, but I would add ipset as it will come in handy when you're forced to mass-block networks later on), ISC BIND, OpenSSH and an $EDITOR. As said before, fewer packages mean less maintenance (though rephrasing that as "less risk of re-configuration or maintenance going wrong" or "less chance of gaps in continuous usage" may be clearer).
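
The ipset part could look something like this sketch (the set name and network are illustrative, and the commands need root):
Code:

# create a hash:net set, add offending networks to it,
# and drop everything whose source matches the set
ipset create blocked hash:net
ipset add blocked 198.51.100.0/24
iptables -I INPUT -m set --match-set blocked src -j DROP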


Quote:

Originally Posted by Habitual (Post 4897780)
Q: Any particular reason for choosing ISC BIND (PowerDNS, MaraDNS, Unbound, etc.)?
A: Because it's Bind and it's well known and it's free and it's stable


OK. I was more thinking of http://en.wikipedia.org/wiki/Compari...Feature_matrix and http://www.maradns.org/DNS.security.comparison.txt


Quote:

Originally Posted by Habitual (Post 4897780)
Q: Why, for deities' sake, are your NSes running Webmin? (You saw that question coming, right? ;-p) Of course
A: Because it's Webmin and it's well known and it's free and it's stable


That's not the real reason: it's about somebody wanting a UI to manage things. IMNSHO web-based control panels should only be used by admins with a good understanding of everything involved (standards, protocols, software, configuration, etc, etc), and even then installing one should be a point of discussion. Plus installing and running Webmin comes at a price wrt dependencies (Perl, for example, which ISC BIND doesn't need IIRC); if left running all of the time it's a waste of allocated resources; it requires configuration; it opens up TCP/10000 (which you should SSL-ize immediately if not done already); and on top of that a name server is one of the machines you'll visit rarely admin-wise anyway! In short, Webmin is not required for running a name server.


Quote:

Originally Posted by Habitual (Post 4897780)
Q: What tuning have you done so far? (Running iperf / Jperf is easy.)
A: I haven't done any myself.
Q: Same for hardening?
A: ssh-keys only!!!


Do follow OS / security best practices like the Ubuntu web site or the Securing Debian handbook offer; these should be generic for any machine you expose to the 'net. Remember the only purpose of this machine is to answer as many legitimate UDP/53 and TCP/53 queries as fast as possible, so review sysctl net.ipv{4,6} (except ^.*mem settings) for fast burst traffic. Its purpose should also be reflected in firewall usage: it shouldn't respond to bogons (as in `whois -h whois.radb.net fltr-martian`, see ipset), the kernel doesn't need a sh*tload of standard modules (blacklist them), any relevant processes should have resource limits, and admin access should only come from clearly defined networks. Additionally review attack scenarios (reflection, amplification, DoS, poisoning, etc, etc) from ye aulde http://www.cert.org/archive/pdf/dns.pdf, the CYMRU templates, and, if you can get your hands on it, O'Reilly's "Grasshopper" book. When you've done all of that, consider creating a backup and turning it into a template for quick and efficient deployment of future name servers.
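
For the sysctl part, a sketch of the sort of non-mem knobs meant, e.g. in /etc/sysctl.conf (values are illustrative; verify each against your kernel's documentation):
Code:

# drop source-routed and redirected packets, enable reverse-path
# filtering, log martians, ignore broadcast pings, use syncookies
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.all.log_martians = 1
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.tcp_syncookies = 1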


Quote:

Originally Posted by Habitual (Post 4897780)
Q: Are your NSes a mix of AWS instances and physical machines?
A: (You lost me on this one and I suppose some context is needed (by me) to understand the Q.) There is just the one physical machine on the new grid. ns1.dom.com is another physical host on another one of our grids. There are no AWS instances involved.


I can think of scenarios where having instant-on, geographically dispersed slave instances comes in handy, but I can't reconstruct the original question or comment from the answer above anymore.


Quote:

Originally Posted by Habitual (Post 4897780)
I suppose this new host is to be an alternate nsN.domain.com for future use.

So it's a slave.


Quote:

Originally Posted by Habitual (Post 4897780)
I am guessing now that this host will become an Authoritative name server in the future...?

That's a question answered best by the decision maker. Procedure-wise as long as you know the configuration difference between a slave and a master, know where to register name server changes and realize the time it takes to propagate those changes you'll be fine. (Famous last words ;-p)
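
Configuration-wise the difference is roughly this (a sketch; the IPs and file paths are illustrative):
Code:

// on the master:
zone "domain.com" {
    type master;
    file "/etc/bind/db.domain.com";
    allow-transfer { 192.0.2.2; };   // the slave's IP
    notify yes;
};

// on the slave:
zone "domain.com" {
    type slave;
    file "/var/cache/bind/db.domain.com";
    masters { 192.0.2.1; };          // the master's IP
};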


Quote:

Originally Posted by Habitual (Post 4897780)
/etc/bind/named.conf.options:
Code:

options {
    directory "/var/cache/bind";
    version NONE;
    recursion no;
    allow-transfer{none;};
    dnssec-validation auto;
    auth-nxdomain no;    # conform to RFC1035
    listen-on-v6 { any; };
    check-names master ignore;
    check-names slave ignore;
    check-names response ignore;
};


I'm missing a logging section (including the "lame-servers" category), limits on data and stack size (if applicable), and http://ss.vix.com/~vixie/isc-tn-2012-1.txt. Also review 'rndc -s thisservername status' output when done.
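
A logging section could look something like this sketch (file path, version count, size and severity are illustrative):
Code:

logging {
    channel default_log {
        file "/var/log/named/named.log" versions 3 size 5m;
        severity info;
        print-time yes;
        print-category yes;
    };
    category default { default_log; };
    category lame-servers { default_log; };
};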


Quote:

Originally Posted by Habitual (Post 4897780)
The only obvious difference I see on the "old" dns host is
Code:

forwarders {
                4.2.2.2;
                };

        auth-nxdomain no;    # conform to RFC1035
        listen-on-v6 { any; };
        forward first;

This is NOT present on the new host.

That's the split-instance scenario all basic DNS docs talk about: shielded caching name servers for LAN / company / whatever use, and publicly accessible authoritative name servers (denying any recursion!), have completely separate functions and therefore should be completely separate instances. Trying to roll both functions into one instance is a common recipe for disaster.
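
In options terms the split boils down to something like this sketch (two separate named instances; the internal network range is illustrative):
Code:

// public-facing authoritative instance:
options {
    recursion no;
    allow-query { any; };
};

// internal caching/recursive instance:
options {
    recursion yes;
    allow-query { 10.0.0.0/8; };
    allow-recursion { 10.0.0.0/8; };
};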


//NTLB

