2 NICs, everything accessible over 1, "No Route to Host" on the other.
OK, I'm having a weird issue in the server farm I manage.
The Basics:
Dell PE1850
CentOS 4.4 x86
Kernel 2.6.9-42
2x e1000 network cards
eth0 is on the main network, and it works fine. I can reach all internal, DMZ, and internet resources through it without issue.
eth1 is on a second network, which is only used for iSCSI connections. It's on the 10.3.1.0 subnet. There is a routing table entry that looks right to me, yet when I try pinging anything on the 10.3.1.0 subnet, I get the following:
Code:
# ping 10.3.1.100
PING 10.3.1.100 (10.3.1.100) 56(84) bytes of data.
From 10.3.1.46 icmp_seq=0 Destination Host Unreachable
From 10.3.1.46 icmp_seq=1 Destination Host Unreachable
From 10.3.1.46 icmp_seq=2 Destination Host Unreachable
From 10.3.1.46 icmp_seq=4 Destination Host Unreachable
From 10.3.1.46 icmp_seq=5 Destination Host Unreachable
From 10.3.1.46 icmp_seq=6 Destination Host Unreachable
--- 10.3.1.100 ping statistics ---
7 packets transmitted, 0 received, +6 errors, 100% packet loss, time 6000ms
, pipe 4
I've pinged between a number of other systems on this subnet, and that works fine. I've tried different ports on the switch and different cables, so I don't think the hardware is the issue.
And even more interestingly, if I ping that IP from another system, I can see the ARP packets coming from the problem box, but no response ever comes...
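For what it's worth, one way to check whether ARP resolution is actually completing on the iSCSI interface is arping (from iputils). The interface and target IP below are the ones from this thread:

```shell
# Send ARP who-has requests for the target directly out eth1.
# If no replies come back, the problem is at or below layer 2.
arping -I eth1 -c 4 10.3.1.100

# Inspect the kernel's ARP cache for the iSCSI subnet afterwards;
# an "incomplete" entry means no ARP reply was ever received.
arp -n | grep 10.3.1.
```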
I'm going to give iproute2 a try on Monday, but I don't think that's the issue. iproute2 has more to do with balancing inbound and outbound connections via 2 gateways/interfaces.
I'm dealing with a much smaller issue, where all traffic should go out the primary interface except for a single subnet, which should always go in and out the secondary interface. A single static route should be all that's necessary, however, the route is simply not working right.
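For reference, a single static route like that would normally look something like this on CentOS 4. The subnet and interface are the ones from this thread; the persistent-file syntax is a sketch, since exact initscripts support varies by version:

```shell
# Add the route by hand (traditional net-tools style):
route add -net 10.3.1.0 netmask 255.255.255.0 dev eth1

# Or the iproute2 equivalent:
ip route add 10.3.1.0/24 dev eth1

# Note: if eth1 already has an address inside 10.3.1.0/24, the kernel
# adds this connected route automatically when the interface comes up.

# To make it persistent, RHEL/CentOS reads a per-interface route file:
#   /etc/sysconfig/network-scripts/route-eth1
# containing a line such as:
#   10.3.1.0/24 dev eth1
```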
Anyway, I'll try it monday and report back, but I'm not optimistic. Either way, thanks for the link/input.
Quote:
Originally Posted by etherag
iproute2 has more to do with balancing inbound and outbound connections via 2 gateways/interfaces.
I'm dealing with a much smaller issue, where all traffic should go out the primary interface except for a single subnet, which should always go in and out the secondary interface. A single static route should be all that's necessary, however, the route is simply not working right.
Actually, I disagree. On RHEL-family distros, using route to set up a multi-homed server simply does not work properly. Maybe you didn't look at the short script I wrote in that post, but (assuming I am understanding you correctly) source-based routing sounds like the solution in your case.
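In case it helps anyone searching later, a source-based routing setup of the kind being described looks roughly like this with iproute2. The table name/number and the src address are placeholders based on this thread, not the original script:

```shell
# Create a dedicated routing table for traffic sourced from eth1's
# address. (The number/name "100 storage" is an arbitrary placeholder.)
echo "100 storage" >> /etc/iproute2/rt_tables

# In that table, send the iSCSI subnet out eth1.
ip route add 10.3.1.0/24 dev eth1 src 10.3.1.46 table storage

# Any packet whose source address is eth1's address consults that table,
# so replies to traffic arriving on eth1 also leave via eth1.
ip rule add from 10.3.1.46 table storage

# Flush the route cache so the new rule takes effect immediately.
ip route flush cache
```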
Well... I ended up doing what I should have done in the first place... ran full updates and rebooted. And the problem disappeared. My stubborn "linux doesn't need to reboot" ethos got in the way of solving the problem I guess.
@etherag: If you're interested in following up on this thread further, I'm also curious to see what you find while analyzing traffic (i.e. using tcpdump) during active tcp sessions. I suspect you are going to see traffic to eth1 enter eth1 and then exit eth0 and/or other strange behavior.
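A minimal way to run that capture across all three interfaces (the filter expressions are just suggestions; 3260 is the standard iSCSI port):

```shell
# Watch ICMP and iSCSI traffic on each interface in separate terminals.
# -n: don't resolve names; -i: interface to capture on.
tcpdump -n -i eth0 'icmp or port 3260'
tcpdump -n -i eth1 'icmp or port 3260'
tcpdump -n -i lo   'icmp or port 3260'

# The asymmetric-routing symptom would show up as SYNs arriving on eth1
# while the SYN/ACK replies leave on eth0.
```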
Quote:
Originally Posted by anomie
Actually, I disagree. On RHEL-family distros, using route to set up a multi-homed server simply does not work properly. Maybe you didn't look at the short script I wrote in that post, but (assuming I am understanding you correctly) source-based routing sounds like the solution in your case.
One doesn't need source-based routing unless there are NATs, firewalls, or conflicting addresses involved.
The one thing that caught my eye was the 169.254 address on eth1; I don't believe it should have been in your routing table.
Now that things are working for you, I'm curious whether that entry is still showing in your routing table.
Yup, that entry is still in the routing table. I'm not sure why either, but it's not hurting anything, so I'm leaving it.
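For anyone else who finds this thread: that 169.254.0.0/16 entry is the zeroconf (link-local) route that RHEL/CentOS initscripts add to an interface at bringup. It's harmless here, and it can be suppressed if desired:

```shell
# In /etc/sysconfig/network, adding this line stops the initscripts
# from installing the 169.254.0.0/16 zeroconf route at interface bringup:
NOZEROCONF=yes

# Or remove the route from the running system without a restart:
# route del -net 169.254.0.0 netmask 255.255.0.0 dev eth1
```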
Quote:
Originally Posted by anomie
@etherag: If you're interested in following up on this thread further, I'm also curious to see what you find while analyzing traffic (i.e. using tcpdump) during active tcp sessions. I suspect you are going to see traffic to eth1 enter eth1 and then exit eth0 and/or other strange behavior.
I've seen the errors you're talking about before, but when I was having this problem, I had tcpdumps on all three interfaces (eth0, eth1, lo), and that wasn't happening. When I pinged 10.3.1.x, the packets didn't seem to go anywhere. It was extremely odd.
I think TimothyEBaldwin is correct in this case; the problem you're referring to happens when there's NAT/masquerading involved.
@etherag: I don't want to belabor the point, but the problem I have seen repeatedly occurred with RHEL4 when set up as a multi-homed server. It had nothing to do with NAT/masquerading.
In any case, even though my experiences don't seem to match yours, I'm glad you got it working.
Hmmm... My last thought as to why it worked for me but not for you is that in my specific situation, there's no gateway on the 10.3.1.0 subnet, so no more complex routes are needed. The 10.3.1.0 subnet can only be reached via the second NIC, and nothing else can be routed through it. The situations in the links you posted involved much more complicated routing because they dealt with multiple available gateways.
Either way, I'm glad it's working, and am glad for everyone's help.
Quote:
Originally Posted by etherag
The 10.3.1.0 subnet can only be accessed via the second NIC, and nothing else can be routed through there. The situation in the links you posted had much more complicated routing issues because they were dealing with having multiple gateways available.
That makes sense. The problem scenarios I described (and posted a solution for) all involved two NICs on separate subnets, with separate default routers.
Thanks for following up with those details. It's extremely dissatisfying to me to walk away from a problem without understanding what transpired, and/or why an unexpected fix worked.