Solaris / OpenSolarisThis forum is for the discussion of Solaris, OpenSolaris, OpenIndiana, and illumos.
General Sun, SunOS and Sparc related questions also go here. Any Solaris fork or distribution is welcome.
I can SSH into my Solaris 10 box just fine from the same subnet, for instance:
server: 192.168.23.23
client: 192.168.23.46
Both are on the same side of the enterprise firewall. However, when I am on the outside of the firewall and use the company VPN, I am unable to SSH into the server. It just waits for a while and says "unable to connect". The VPN is in an entirely different IP range of:
10.40.x.x
This is the only difference I can see. I am able to SSH through the VPN to other Linux boxes inside the firewall, just not this Solaris 10 box.
Is it a netmask issue? Currently it is 255.255.255.0.
I installed Solaris myself, so I know I didn't set up any IP filtering.
Very little to go on, but presuming there are no other firewalls between your endpoints, and no restrictions on the VPN configuration itself, I'd be checking the return route. Maybe the default gateway on the server is wrong? Can you SSH to another local box and then connect to it?
I guess the route to the 10.40.0.0/16 network is not known to the Solaris machine. It also depends on the overall network architecture.
You could just check what routes are present on the other machines and compare those with the faulty one.
To make sure it's a route issue, you could run a tcpdump session or just create a log target from and to this network with iptables.
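Worth noting that tcpdump and iptables are Linux tools; on stock Solaris 10 the closest equivalents are snoop(1M) for packet capture and IP Filter for logging. A hedged sketch of the capture idea, assuming the LAN-facing interface is igb0 (the interface names come up later in the thread):

```sh
# Watch for SSH traffic to/from the VPN range on the LAN interface.
# If SYNs from 10.40.x.x arrive but no replies go back out,
# the problem is the return route, not the inbound path.
snoop -d igb0 port 22 and net 10.40.0.0
```

If snoop shows inbound packets with no answering traffic, that points straight at the routing table rather than a firewall drop.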
Quote:
Very little to go on, but presuming there are no other firewalls between your endpoints, and no restrictions on the VPN configuration itself, I'd be checking the return route. Maybe the default gateway on the server is wrong? Can you SSH to another local box and then connect to it?
Yes, I can SSH to other local boxes and then SSH to it.
Quote:
I guess the route to the 10.40.0.0/16 network is not known to the Solaris machine. It also depends on the overall network architecture.
You could just check what routes are present on the other machines and compare those with the faulty one.
To make sure it's a route issue, you could run a tcpdump session or just create a log target from and to this network with iptables.
*damn, too late*
How do I check what routes are present on the other machines?
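On Solaris (and most Unix systems) the routing table is printed with netstat; a sketch of what to run on both a working Linux box and the Solaris box so the tables can be compared side by side:

```sh
# Solaris 10: print the kernel routing table, numeric addresses
# only (-n skips DNS lookups, so it is fast and unambiguous)
netstat -rn

# On the Linux boxes that *can* be reached over the VPN:
route -n
```

Compare the Destination/Gateway/Netmask columns for the 10.x ranges; any route the working boxes have that the Solaris box lacks (or a broader route the Solaris box has that they do not) is the prime suspect.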
Right, so you've got a route for 10.0.0.0/8 via 10.0.0.10 which doesn't exist on the first box.
Oh, I forgot to mention that igb1 is connected to a SAN that has an IP of 10.0.0.50. igb1 doesn't "see the outside"; it only sees a switch and the SAN. I think you might be on to something though, because the ranges of IPs that can't connect (on igb0) are all 10.40.x.x, 10.80.x.x, and 10.82.x.x. So if I were to change the subnet mask of that route to 255.255.255.0, would that (in theory) make only 10.0.0.x IPs route to igb1, and 10.40.x.x route normally?
As far as the usage of ip addr add 10.0.0.10/24 goes, it creates a route to the 10.0.0.0/24 network automatically. So I'd say that one needs a route. Or maybe I'm understanding the word "local" wrong.
Anyway: create a more specific route to the SAN. Either use a very small subnet, like a /30, and have the rest of 10.0.0.0/8 go through the default gateway (you could just delete that route), or create routes for 10.40.0.0/16, 10.80.0.0/16, and 10.82.0.0/16 through the default gw. Either way, I would give the SAN a tighter subnet.
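On Solaris 10 the second option could be done with route(1M); -p makes the routes persist across reboots. A sketch only, assuming the default gateway on the 192.168.23.x LAN is 192.168.23.1 (hypothetical, substitute your real gateway):

```sh
# Send the VPN ranges through the default gateway explicitly
route -p add -net 10.40.0.0 -netmask 255.255.0.0 192.168.23.1
route -p add -net 10.80.0.0 -netmask 255.255.0.0 192.168.23.1
route -p add -net 10.82.0.0 -netmask 255.255.0.0 192.168.23.1
```

The alternative (narrowing the 10.0.0.0/8 entry itself) makes these three routes unnecessary, since anything not matching the SAN subnet then falls through to the default route.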
Yes, it would. But is that a locally attached subnet? If so, there shouldn't be any route there at all, as it's already local.
Yes, the SAN and igb1 (as well as the igb1s in some other computers) are manually configured for 10.0.0.x, and they only see each other. igb0 is the connection to the LAN that users log in on. I suppose we could have chosen any IP range when we initially set the SAN up, but we didn't see this as being a problem.
zhjim, I am sort of following you, but I'm not real clear on the /24, /16, etc. parts. You are saying to create a route to 10.40.0.0 with a mask of 255.255.0.0 (as well as for 10.80.x.x and 10.82.x.x)? Would I need to delete anything, or just add those three? I think it makes sense to also delete the 10.0.0.0 255.0.0.0 route, but it's early and I'm no expert.
If I were to just change the mask of the existing route to 255.255.0.0, would it just route all 10.0.x.x traffic to igb1, and all other traffic (10.40.x.x) would be handled normally by igb0, the adapter the request came in on?
Quote:
zhjim, I am sort of following you, but I'm not real clear on the /24, /16, etc. parts.
The /24, /16 notation is just another way of writing subnet masks: /24 is 255.255.255.0. It's a binary thing; 255.255.255.0 has 24 bits set to one.
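The conversion is purely mechanical; as an illustration, here is a small POSIX-shell function (mine, not from the thread) that turns a prefix length into the dotted-quad mask:

```sh
# Convert a CIDR prefix length (e.g. 24) into a dotted-quad netmask.
# A /N mask is simply N one-bits followed by (32-N) zero-bits.
prefix_to_mask() {
    p=$1
    m=$(( (0xFFFFFFFF << (32 - p)) & 0xFFFFFFFF ))
    echo "$(( (m >> 24) & 255 )).$(( (m >> 16) & 255 )).$(( (m >> 8) & 255 )).$(( m & 255 ))"
}

prefix_to_mask 24   # 255.255.255.0
prefix_to_mask 16   # 255.255.0.0
prefix_to_mask 8    # 255.0.0.0
```

So the /8 route on this box (255.0.0.0) matches every 10.x.x.x address, which is exactly why the VPN's 10.40.x.x traffic is being swallowed by the SAN interface.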
Quote:
Originally Posted by TronCarter
If I were to just change the mask of the existing route to 255.255.0.0, would it just route all 10.0.x.x traffic to igb1, and all other traffic (10.40.x.x) would be handled normally by igb0, the adapter the request came in on?
Sounds good, though I would go one subnet down, to 255.255.255.0. That would still leave you with 254 possible host addresses to use for the SAN and similar things.
This route (pun intended) would also make the creation of routes for 10.40.0.0 and the like unnecessary, since the default route handles them, as you already noted.
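Since the 10.0.0.0/8 entry is the connected route created by igb1's own configuration, the clean fix on Solaris 10 is to tighten the netmask on the interface itself rather than edit the route directly. A sketch, to be checked against ifconfig(1M) and netmasks(4) before use (it assumes 10.0.0.10 is igb1's address, as the routing output above suggested):

```sh
# Narrow igb1's mask so only 10.0.0.x is treated as directly attached;
# 10.40.x.x etc. then fall through to the default route on igb0.
ifconfig igb1 netmask 255.255.255.0

# Make it persist across reboots: Solaris consults /etc/netmasks
# (format: network-number netmask) when plumbing interfaces.
echo "10.0.0.0 255.255.255.0" >> /etc/netmasks
```

After that, `netstat -rn` should show the 10.0.0.0 entry with a 255.255.255.0 mask, and SSH from the 10.40.x.x VPN range should get its replies back out through igb0.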
As a general rule of thumb, I normally keep subnets as small as possible. That way the default route does its job most of the time and you don't have to worry as much.