Hi,
I am trying to set up a VPN using SSH tunnels. I have the tunnels created correctly and I can see each end fine.
My problem is that when the link becomes even slightly saturated, pings down the tunnel go stupidly high (in the realm of 6-8 seconds)...
While the link is saturated, if I ping something that doesn't go via the SSH tunnel, pings are a more modest 80-150ms rather than the usual 25ms.
I have been trying to use tc and iptables to enforce a QoS setup, and I have had partial success:
Code:
tc qdisc add dev tun0 root handle 1: prio
tc qdisc add dev tun0 parent 1:1 handle 10: sfq
tc qdisc add dev tun0 parent 1:2 handle 20: sfq
tc qdisc add dev tun0 parent 1:3 handle 30: sfq
The above qdiscs are paired with the following iptables rules (keep in mind this setup is for testing; it's in no way production ready!):
Code:
iptables -A POSTROUTING -t mangle -o tun0 -p icmp -m icmp --icmp-type 8 -j CLASSIFY --set-class 1:1
iptables -A POSTROUTING -t mangle -o tun0 -p tcp -m tcp -j CLASSIFY --set-class 1:3
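For reference, this is how I've been checking that packets actually land in the right bands (standard read-only commands; both need root):

```shell
# Per-qdisc packet/byte counters - confirms traffic is hitting each sfq band
tc -s qdisc show dev tun0

# Hit counters on the mangle-table CLASSIFY rules
iptables -t mangle -L POSTROUTING -v -n
```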
This does indeed classify packets correctly, and they do pass through the correct qdiscs in tc; however, pings are still high compared to traffic that doesn't go via the SSH tunnel (at worst in the region of 2-3 seconds).
I realize that SSH tunnels themselves have overhead, but I wouldn't have thought it would affect ping times so much.
I also have tc and iptables set up almost identically on the other side, but with ICMP type 8 replaced by ICMP type 0 (echo replies) in the highest-priority class...
Can anyone suggest what might be wrong here and how I can go about resolving it? My goal is for traffic in 1:1 to take absolute priority over everything else, keeping latencies as low as possible. (I will be using all three classes, but as I said, this was just set up for testing purposes.)
The SSH tunnel runs over an ADSL link (rx ~440 KB/s, tx ~80 KB/s) to a server in a datacentre with a 100 Mbit connection.
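One thing I have been considering but haven't tried yet: shaping on tun0 may not help if the queue actually builds up in the ADSL modem, so capping egress on the physical WAN interface to just under the measured ~80 KB/s (~640 kbit) uplink might keep the queue on this box where tc can manage it. A rough sketch (the interface name eth0 and the ~90% rate are assumptions for my setup):

```shell
# Sketch only: cap WAN egress to ~90% of the ~640 kbit ADSL uplink
# (eth0 is an assumed interface name), so the bottleneck queue sits
# on this machine instead of inside the modem's buffer.
tc qdisc add dev eth0 root tbf rate 576kbit burst 8kb latency 50ms
```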
Many thanks