Can't SSH from out of subnet IP
I can SSH into my Solaris 10 box just fine from the same subnet, for instance:

server: 192.168.23.23
client: 192.168.23.46

Both are on the same side of the enterprise firewall. However, when I am outside the firewall on the company VPN, I am unable to SSH into the server. It just waits for a while and then says "unable to connect". The VPN is in an entirely different IP range, 10.40.x.x, which is the only difference I can see. I am able to SSH through the VPN to other Linux boxes inside the firewall, just not this Solaris 10 box. Is it a netmask issue? Currently it is 255.255.255.0. I installed Solaris myself, so I know I didn't set up any IP filtering.

Tron
Very little to go on, but presuming there are no other firewalls between your endpoints, and no restrictions in the VPN configuration itself, I'd be checking the return route. Maybe the default gateway on the server is wrong? Can you SSH to another local box and then hop from there to the Solaris machine?
I'd guess the route to the 10.40.0.0/16 network is not known to the Solaris machine. This also depends on the overall network architecture.

You could check what routes are present on the working machines and compare them with the faulty one. To make sure it's a route issue you could run a tcpdump session, or create a log target for traffic to and from this network with iptables (on the Linux boxes). *damn, too late*
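To verify this on the Solaris side, note the native sniffer there is snoop rather than tcpdump. A minimal sketch (the interface names igb0 and eth0 are just examples, adjust to your system):

```shell
# On the Solaris server: watch for SSH traffic on the interface.
# If SYNs from 10.40.x.x arrive but no replies go back out,
# it's a return-route problem rather than a firewall problem.
snoop -d igb0 port 22

# The tcpdump equivalent on one of the Linux boxes:
tcpdump -n -i eth0 port 22
```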
Post the output of:
Code:
netstat -rn
And on the Linux boxes, for comparison:
Code:
ip route
or:
Code:
route
From a working Linux box (I can SSH into it over the VPN):
Code:
# route
Kernel IP routing table
[table output not preserved]
And from the Solaris box:
Code:
# netstat -rnv
[table output not preserved]
Right, so the Linux box has a route for 10.0.0.0/8 via 10.0.0.10 which doesn't exist on the Solaris box.
Yes, it would. But is that a locally attached subnet? If so, there shouldn't be any route there at all, as it's already local.
As far as the usage of ip addr add 10.0.0.10/24 goes, it creates a route to the 10.0.0.0/24 network automatically, so I'd say that one does need a route. Or maybe I'm misunderstanding what "local" means here.

Anyway: create a more specific route to the SAN. Either use a very small subnet like a /30 and let the rest of 10.0.0.0/8 go through the default gateway (you could just delete that broad route), or create a route for 10.40.0.0/16 through the default gateway, and also add 10.80.0.0/16 and 10.82.0.0/16. In general I'd give the SAN a tighter subnet anyway.
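A sketch of what that might look like on the Solaris box. The gateway 192.168.23.1 is a placeholder for your real default gateway, and whether the delete is needed depends on what netstat -rn showed:

```shell
# Send the VPN ranges via the default gateway instead of the SAN interface.
# (192.168.23.1 is a placeholder; substitute your actual default gateway.)
route add -net 10.40.0.0 -netmask 255.255.0.0 192.168.23.1
route add -net 10.80.0.0 -netmask 255.255.0.0 192.168.23.1
route add -net 10.82.0.0 -netmask 255.255.0.0 192.168.23.1

# Optionally remove the overly broad classful route, if one exists:
# route delete -net 10.0.0.0 -netmask 255.0.0.0 <gateway>
```

Note these routes are not persistent across reboots; on Solaris 10 you'd make them permanent with route -p or a startup script.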
zhjim, I am sort of following you, but I'm not really clear on the /24, /16, etc. parts. You are saying to create a route to 10.40.0.0 with a mask of 255.255.0.0 (as well as 10.80.x.x and 10.82.x.x)? Would I need to delete anything, or just add those three? I think it makes sense to also delete the 10.0.0.0 255.0.0.0 route, but it's early and I'm no expert. If I were to just change the mask of the existing route to 255.255.0.0, would it route all 10.0.x.x traffic to igb1, while all other traffic (10.40.x.x) would be handled normally by igb0, the adapter the request came in on?
The /24, /16, etc. are just CIDR shorthand for the netmask: /8 is 255.0.0.0, /16 is 255.255.0.0, and /24 is 255.255.255.0.

This route (pun intended) would also make the routes for 10.40.0.0 and the like unnecessary, since the default route handles them, as you already noted. As a general rule of thumb I normally keep the netmask as small as possible; that way the default route does its job most of the time and you don't have to worry as much.
IT WORKS!!!!!! In /etc/netmasks I added the line:
Code:
10.0.0.0 255.255.255.0
Thank you zhjim and acid_kewpie for all of your help!!
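For anyone finding this later: without an /etc/netmasks entry, Solaris falls back to the classful /8 mask for a 10.x interface, so the server treats all of 10.0.0.0/8, including the VPN clients at 10.40.x.x, as directly attached and never sends replies to the default gateway. A small bash sketch of the mask arithmetic (the addresses are illustrative):

```shell
# Check whether an IP falls inside NETWORK/MASK by ANDing each octet.
in_subnet() {  # usage: in_subnet IP NETWORK MASK -> prints yes or no
  local IFS=.
  local -a ip=($1) net=($2) mask=($3)
  local i
  for i in 0 1 2 3; do
    if (( (ip[i] & mask[i]) != (net[i] & mask[i]) )); then
      echo no
      return
    fi
  done
  echo yes
}

# With the classful /8 mask, a VPN client looks "on-link" to the server:
in_subnet 10.40.1.5 10.0.0.0 255.0.0.0       # prints: yes
# With the /24 from /etc/netmasks, it doesn't, so replies use the gateway:
in_subnet 10.40.1.5 10.0.0.0 255.255.255.0   # prints: no
```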
You're welcome. Please mark the thread as solved, using the "Thread Tools" at the top of the page.