Linux - Networking
This forum is for any issue related to networks or networking. Routing, network cards, OSI, etc. Anything is fair game.
I have 3 servers. All need access to the internet via a firewall (8 ports) at 192.168.1.1. There is a 1Gbps switch (24 ports) between the servers, obviously behind the firewall.
Server A is a web server and will be accessible via the internet (still behind the firewall)
Server B is a database server and will only need to access the internet to get security patches.
Server C is an iSCSI target for Server A and will only need to access the internet to get security patches.
Server A needs to access the database on Server B. Server B really doesn't need access to Server C, but it would be an added benefit.
Servers A, B, and C each have two 1Gbps NICs. If necessary, I can get another dual-port NIC for each.
I want to keep the web traffic on Server A off of the NIC that carries the iSCSI traffic. Is this possible given that I want all of them to have access out to the internet, even if only temporarily?
Should I connect all servers directly to the firewall on the 192.168.1.x range and then connect them to the switch on 192.168.2.x?
I don't really understand what the question is... you can segregate all you want, but you haven't given an argument as to why you're interested in splitting into different subnets in the first place. Generally, for the sort of architecture you're implying, you would place the HTTP server in a DMZ, something your firewall should be more than capable of providing (PLEASE provide makes and models next time). After that, there's really nothing interesting about their need for internet access; it's just down to your firewall to permit it... I get the feeling I've missed something, though.
If you have an iSCSI connection, then you can use a separate unrouted subnet on whatever private address range you want, even just with a direct crossover cable. Does the switch support VLANs? If so, you could potentially put the iSCSI traffic back through it and leave the iSCSI server with a single NIC carrying two VLANs.
What I would probably look to do myself is to bond and trunk all ports where possible. If you have two gigabit NICs, turn them into a single 2Gbps Ethernet link with two VLANs running across it. That increases throughput and resilience in one go.
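To make the bond-and-trunk idea concrete, here is a minimal sketch using the classic `ifenslave`/`vconfig` tools. The interface names, VLAN IDs, and addresses are illustrative, not taken from this thread, and the switch ports would need to be configured as a matching LACP trunk.

```shell
# Sketch only: bond eth0+eth1 into an 802.3ad (LACP) aggregate, then run
# two tagged 802.1Q VLANs over the bond. Names/IDs/addresses are examples.
modprobe bonding mode=802.3ad miimon=100
ifconfig bond0 up
ifenslave bond0 eth0 eth1

modprobe 8021q
vconfig add bond0 10          # e.g. VLAN 10 for general/internet traffic
vconfig add bond0 20          # e.g. VLAN 20 for iSCSI traffic
ifconfig bond0.10 192.168.1.10 netmask 255.255.255.0 up
ifconfig bond0.20 192.168.2.10 netmask 255.255.255.0 up
```

Both VLANs then share the full 2Gbps aggregate, and the loss of one NIC degrades bandwidth rather than cutting connectivity.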
Last edited by acid_kewpie; 06-22-2007 at 03:13 PM.
I was trying to simplify a little bit and probably left out one or two things that are important to note. First of all, in response to your note about models, etc., the firewall is a Cisco ASA 5505. I just have the base 50 user license, which, I believe, allows for 3 systems in the DMZ. For now, that's fine, because there's only one web server, but over the next 1-2 years, we'll be adding additional servers and load balancing them, all of which will need access to the iSCSI target. I plan to make use of the Linux Virtual Server project for load balancing, etc.
The iSCSI target (Server C above) is actually two servers, using DRBD to replicate the data to the second node. One of the two 1Gbps NICs on each is used to connect the pair with a crossover cable, providing a dedicated 1Gbps pathway for DRBD.
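For reference, a dedicated DRBD replication link over the crossover cable typically looks something like the sketch below. The resource name, device/disk paths, and the 10.0.0.x addresses are all illustrative assumptions, not details from this setup.

```
# drbd.conf sketch (names, disks, and addresses are illustrative)
resource r0 {
    protocol C;
    on nodeA {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7788;   # NIC on the crossover link
        meta-disk internal;
    }
    on nodeB {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;   # other end of the crossover
        meta-disk internal;
    }
}
```

Keeping the replication addresses on their own unrouted subnet is what guarantees DRBD traffic stays on the crossover link.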
The switch I have currently is a Dell PowerConnect 2724. Its support for VLANs is minimal: you can tag ports as VLAN members, but there's no real VLAN configuration. In its admin console, I can add a new VLAN, but my 'configuration' of it is limited to naming it and tagging which ports on the switch should be part of it. The reason I got the switch in the first place is to make sure that any traffic to and from the iSCSI target runs at the full 1Gbps, as the firewall is only 10/100.
I guess where I'm stuck is that I want the iSCSI and anything that connects to it on a separate network or subnet so that it is forced to travel over NICs that are not used for any other traffic. If I do that, I'll need additional NICs, at least for the two iSCSI machines, correct? I don't see any way I can set the switch at 192.168.2.1 and have it provide access out to the net through the firewall, which is at 192.168.1.1, right?
Hopefully that makes more sense...I may have just confused myself more though...
Ooooooooooooh, ASAs. Lovely. There's no limit (that I know of) on the number of hosts in a DMZ, and the notion of a DMZ on the ASAs is pleasantly flexible. As you bought a duff switch, your scalability is limited there, though... no problem, I guess. The official Dell specs for the 2724 list full 802.1Q support, so I'd be very keen to suggest that. They also list 802.3ad. So here I'd take the web server and the iSCSI server, bond and trunk both ports, and configure two VLANs on there: 192.168.1.0/24 for the internet traffic and 192.168.2.0/24 for the iSCSI traffic.
No, scrap that, I guess... you can bond two gigabit NICs into one, so you can shift data at 2Gbps across it. You are *NEVER* going to do that, right? You have a relatively small internet pipe, so assuming iSCSI activity scales roughly with web traffic, you just plain can't generate that much iSCSI traffic. On that link, you would not be in danger of compromising network performance by using the same NICs. The other angle is data security, and again those arguments become subtler if you're using the same switch and cabling anyway, so I see no reason to have a separate iSCSI network (although here I admit I know *nothing* about iSCSI other than that it's SCSI over IP; maybe the protocol has its own technical requirements).
phew.
breathe.
better.
So both servers have two NICs connected to the gig switch, and on that same VLAN you have an uplink to the cute little 5505 (isn't the power lead on it crap???). Moving onwards, I'd assume the other server would function in a wholly similar way to the web server, but simply wouldn't be receiving any traffic.
One other angle on the internet access is that you *COULD* either make the web server route traffic itself, or install a Squid instance, to let the other servers sit right behind that box rather than beside it. Here you'd run 802.1Q on the bonded NICs to the switch, with one VLAN connecting up to the ASA and the other going behind to a VLAN containing the iSCSI box and the other server. Diagrammatically, you may well find that a more comforting architecture, even though I doubt there are really that many benefits to doing it that way.
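If the web server does route for the back-end subnet, the Linux side amounts to enabling forwarding and NATing the internal range out of the ASA-facing interface. This is a hedged sketch; `bond0.10` as the ASA-facing VLAN interface and 192.168.2.0/24 as the back-end range are assumed names, not details from the thread.

```shell
# Sketch only: Server A acting as gateway for the back-end subnet.
# bond0.10 (towards the ASA) and 192.168.2.0/24 are illustrative.
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o bond0.10 -j MASQUERADE
```

The back-end servers would then use Server A's 192.168.2.x address as their default gateway for patch downloads.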
If I bond them together, don't I have an awful lot of packets zipping around, and potentially collisions? I mean, I'll have the DRBD traffic, the iSCSI traffic, and the OCFS2 heartbeat traffic all running across those links. If I separate the networks, I at least get the DRBD traffic off onto its own network, which is generally recommended by the DRBD authors. I agree, I highly doubt I'll fill 2Gbps anytime soon, though the extra headroom would be great in the case of a large file (which is fairly unlikely).
On another note, I may be hosed if I want to take advantage of the 802.1q support on the dell switch -- with my license on the ASA 5505, trunking is disabled, making it impossible to bridge between 192.168.2.0/24 and 192.168.1.0/24, right? That was the one thing about the ASA 5505 that I really didn't like. Unless you buy the 'Security Plus' license, you really get screwed out of a lot of features. The Security Plus license costs more than the 5505 to begin with...
No, you'll have fewer collisions, not more, since you have more bandwidth and two separate links to the machines on which collisions can occur. Your busier link, compared with running separate ones, will actually be quieter because the load is spread better. At 2Gbps, collisions really aren't going to be an issue.
You can't "bridge" between two different subnets; do you mean routing? What I've suggested above would never use anything up to the 5505 other than a single untagged connection (or rather an EtherChannel on the 5505, so you can use two links): no VLANs, no knowledge of additional subnets.
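The bridging/routing distinction is the crux of the earlier 192.168.2.1 question: a layer-2 switch can't move packets between subnets, so a host on 192.168.2.0/24 only reaches 192.168.1.0/24 if some box routes between them. A sketch, with illustrative addresses (192.168.2.1 here is assumed to be an actual router, not the PowerConnect):

```shell
# Sketch: reaching another subnet always goes via a router.
ip addr add 192.168.2.20/24 dev eth0
ip route add 192.168.1.0/24 via 192.168.2.1   # 192.168.2.1 must route, the L2 switch alone can't
```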
Yeah, I meant route. As you may have noticed, networking is not exactly my strong point... Long story, but I'm a developer, turned server admin (not by choice), that is now ending up with the configuration above. As soon as the project is making a little more, I'll be hiring a good sysadmin...
Anyway, so it sounds like maybe I can just put everything on 192.168.1.0/24, use bonded/trunked connections on all of the servers to essentially create a 2Gbps network, and then link the Dell switch to the ASA with two 100Mbps connections. I do like the redundancy of bonding/trunking the NICs...
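Made persistent, that single-subnet plan would look roughly like the Debian-style stanza below. This is a sketch under assumptions: the interface names, the .10 host address, and the `bond-*` option syntax (which needs the ifenslave package) are illustrative, and other distros use different config files.

```
# /etc/network/interfaces sketch for the single-subnet, bonded layout
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1        # the ASA 5505
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
```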
I'll pick up some additional cat5e and give this a whirl. Thanks for your help!
I would probably suggest the single network: whilst my later suggestion does provide a second network, which looks good on a conceptual diagram, when it's all being managed on a single switch the practical benefits become minimal rather rapidly.