Production server with static IPs on eth0 and eth0:1 - need private LAN on eth1
OK, so I have two servers on a vlan at my datacentre/colocation and previously both servers had public IPs on their eth0 interfaces.
The servers are HP ProLiant DL360s - one is a G4 and one is a G5
The newer G5 is now the LAMP server; the G4 has been retired and I want to repurpose it as an iSCSI target using Openfiler, FreeNAS, or similar.
My G5 has public/static IPs lashed to the eth0 physical interface, and eth1 is not configured to do anything yet.
The G4 will have both interfaces available - perhaps one for ssh access from one of my static public IPs and the other to be a private IP on the local vlan.
Here is what I am trying to get my head around...
G5:
eth0 - Public IP - full LAMP services on two or three virtual interfaces
eth1 - Private IP 192.168.0.1
G4:
eth0 - Public IP for ssh
eth1 - Private IP 192.168.0.2
Because the traffic between the eth1 interfaces on these boxes travels via private IPs on the local private vlan, it doesn't count against my bandwidth quota.
How do I go about configuring the routing, gateways, and other aspects of this so that I can run a private IP network between the eth1s and still serve the outside world from the eth0s?
I am afraid that if I assign the private IPs to the eth1 interfaces, the routing may either not work or interfere with access to the production internet-facing interfaces (the eth0s).
If anyone can see my immediate predicament and thinks they can help me understand how to proceed I would appreciate the opportunity to solve this so I can get my iSCSI on the G4...
You don't need to do any routing or set up any gateways. The eth1 interfaces are on the same network, so they will communicate happily with each other, assuming they have layer 2 connectivity.
Thanks, that is encouraging. I am reading on some of the associated/related/suggested threads at the moment too and have already broken things a couple of times - thank HP for iLO :) nothing broken for long...
OK so I have tried something very simple:
/sbin/ifconfig eth1 192.168.0.102 up
and now have:
eth1 Link encap:Ethernet HWaddr 00:22:64:9B:D7:AA
inet addr:192.168.0.102 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::222:64ff:fe9b:d7aa/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1 errors:0 dropped:0 overruns:0 frame:0
TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
RX bytes:64 (64.0 b) TX bytes:914 (914.0 b)
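One thing to keep in mind: an address set with ifconfig alone won't survive a reboot. A minimal sketch of making it persistent, assuming a RHEL/CentOS-family distro (the file path and keys below are the Red Hat convention; Debian-family systems use /etc/network/interfaces instead):

```shell
# Hypothetical persistent config for eth1 on a RHEL/CentOS-family system.
# Writes /etc/sysconfig/network-scripts/ifcfg-eth1, then brings eth1 up
# from that file.
cat > /etc/sysconfig/network-scripts/ifcfg-eth1 <<'EOF'
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.0.102
NETMASK=255.255.255.0
ONBOOT=yes
EOF
/sbin/ifup eth1
```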
but of course the routing is all set up to direct everything via the default route right?
So whilst I can ping 192.168.0.102 now, there is no way I can ping 192.168.0.101 (the G4).
Here is my route -n
root@jupiter [/home/stardotstar]# /sbin/route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
202.xxx.yyy.117 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
202.xxx.yyy.112 0.0.0.0 255.255.255.248 U 0 0 0 eth0
202.xxx.zzz.48 0.0.0.0 255.255.255.248 U 0 0 0 eth0
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
0.0.0.0 202.xxx.zzz.49 0.0.0.0 UG 0 0 0 eth0
So I can see that the private network route is bound to eth1, but how do I reach the other hosts on the 192.168.0.0 network (i.e. the G4 on the same vlan with its eth0 configured as 192.168.0.101) without interfering with the production IPs?
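For what it's worth, the table above already answers part of this: the kernel uses the most specific matching route, so 192.168.0.0/24 traffic goes straight out eth1 with no gateway, while everything else follows the default via eth0. A quick sketch of confirming which interface carries a given network by parsing `route -n` style output (the sample data below just mimics the table above):

```shell
# Print the interface that a 'route -n' style table uses for a given
# destination network (here 192.168.0.0). Sample mirrors the table above.
table='192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
0.0.0.0         202.xxx.zzz.49  0.0.0.0         UG    0      0        0 eth0'
iface=$(printf '%s\n' "$table" | awk '$1 == "192.168.0.0" { print $8 }')
echo "$iface"   # prints: eth1
```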
I think you're confusing yourself:
To reach a network that is not directly connected to your server and that is not accessible via the default gateway, you need to add a static route.
A static route says 'to reach network X, send the traffic to gateway Y', where one of gateway Y's interfaces is on the same IP network as one of your server's interfaces.
[192.168.0.1/24](gateway Y)[10.0.0.1/24]---[10.0.0.2/24](server)[10.0.1.2/24]---[10.0.1.1/24](default gateway)...
If your server wants to communicate with hosts on the 192.168.0.0/24 network, you will need to add a static route.
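Using the hypothetical addresses from the diagram above, that static route on the server would be a sketch like the following (the one-shot `route` command, plus the RHEL-family route-ethX file for persistence; the interface name is illustrative). Note that in this thread's actual setup the eth1s are directly connected to 192.168.0.0/24, so no such route is needed there:

```shell
# One-shot: reach 192.168.0.0/24 via gateway Y's near-side address
# 10.0.0.1 (addresses are the hypothetical ones from the diagram).
/sbin/route add -net 192.168.0.0 netmask 255.255.255.0 gw 10.0.0.1

# Persistent on a RHEL/CentOS-family box, in a hypothetical
# /etc/sysconfig/network-scripts/route-eth0:
#   192.168.0.0/24 via 10.0.0.1
```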
Hope this makes things a little clearer.
OK, thank you. Things are beginning to get clearer to me.
Here is where I am at
Each box has its internet production interface as originally configured - all working.
PLUS I have manually upped the eth1 interfaces with the private address space.
I can confirm that there is a network route for the private network bound to eth1 created as part of upping the interface.
I can also confirm that the media is linked up...
But as you can see here, on each system the only ping that returns is the one to the local IP bound to the host's own adapter...
Perhaps there is something else wrong.
What I can say beyond all this is that I can ping the public IPs from the boxes and ssh between them and so forth. But I am guessing that all goes outside the vlan due to the vlan routing table set up by the CoLo...
I have attached a pic to ensure that my desired config is beyond doubt.
I just got a sense of the problem from a static-route point of view from what you said here:
So I need to add the correct static route to the table so that all packets for 192.168.0.0 go to the appropriate ethX...
I'd suggest that you're probably lacking layer 2 connectivity. Either the eth0 and eth1 NICs are patched into different switches, or the switch ports are assigned to different vlans... you'll need the network guys to tell you / fix things.
Ideally your config would look something like:
I have lodged a ticket and will see what I can find out from them.
Are there any tests I can do from either server to determine whether this is where the problem lies?
Can I determine anything from the config of the other interfaces, which are routing to the internet?
I am supposedly on one vlan - I have been given numbers for my interfaces on the switch and a number (107) which I understood to be my private vlan.
I run two public subnets with about 6 host addresses each in total - broadcast, network, gateway + 3 usable, IIRC.
I appreciate your insight and suggestions, kpb, thank you.
The fact that you can't ping one server from the other suggests that the interfaces are not on the same vlan... that's about the extent of the testing you can do
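One more check worth sketching: arping sends ARP requests, which never pass through a router, so a reply proves the two NICs share a broadcast domain regardless of any routing or IP config. A small helper, assuming the iputils arping tool and this thread's interface/peer values:

```shell
# Returns success only if the peer answers ARP on the given interface,
# i.e. the two NICs share a layer 2 segment / vlan.
check_l2() {
    iface=$1
    peer=$2
    # -I selects the interface, -c limits the number of requests
    arping -I "$iface" -c 3 "$peer" > /dev/null 2>&1
}
# Usage on the G5:  check_l2 eth1 192.168.0.101 && echo "same vlan"
```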
The plot thickens as it takes a turn in a new direction!
I thought that perhaps an over-zealous firewall configuration might be interfering in all this, so I stopped the csf firewall and iptables on the G5, then did the same for shorewall and iptables on the G4.
Sure enough - PROGRESS.
So the layer 2 connectivity is fine (I have closed my ticket).
Now I am on to learning the finer points of my firewalls - thanks for all the help and encouragement, guys!
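Rather than leaving the firewalls stopped, a sketch of permitting just the private subnet (csf's allow file accepts CIDR entries; the iptables rule is a generic equivalent for the shorewall/iptables box, and where it sits in your rule order is up to you):

```shell
# csf (on the G5): whitelist the private subnet, then reload the rules.
echo "192.168.0.0/24" >> /etc/csf/csf.allow
csf -r

# Raw iptables (on the G4): accept anything arriving on eth1 from the
# private subnet, inserted at the top of the INPUT chain.
iptables -I INPUT -i eth1 -s 192.168.0.0/24 -j ACCEPT
```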