Old 12-22-2007, 04:23 AM   #1
mw1j29353
LQ Newbie
 
Registered: Dec 2007
Posts: 2

Rep: Reputation: 0
packet loss inside GRE tunnel


Good day everybody,

A few weeks ago I ran into a problem I can't solve myself. In short, my home network consists of two routers running OpenWrt, located in two different flats and connected through one of the city network providers. The routers have the following VPN addresses: 10.10.138.37 and 10.0.5.150. There is no connectivity problem between these two devices: everything sent from one is reliably received by the other. However, in order to get rid of NAT between the devices in the two flats, I decided to create a VPN tunnel between them. Inside my flats I have two subnets: 192.168.1.0/24 and 192.168.2.0/24. The simplest way to connect them is a GRE tunnel, which requires only GRE encapsulation support in the kernel and the ip utility; a rough way to check those prerequisites is sketched right below, followed by my actual configuration (router1 is rt53, router2 is rt109).
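Checking the prerequisites goes roughly like this (treat it as a sketch only, since module and package names can differ between OpenWrt releases):

lsmod | grep -i gre    # is GRE encapsulation support already loaded?
insmod -q ip_gre       # if not, try loading it, as the startup script below does
which ip               # the iproute2 "ip" applet has to be present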

rt53:
Quote:
eth0.1 Link encap:Ethernet HWaddr 00:18:F3:A9:8F:04
inet addr:10.10.138.37 Bcast:10.10.138.127 Mask:255.255.255.128
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:6301040 errors:0 dropped:0 overruns:0 frame:0
TX packets:4279664 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:5
RX bytes:4291124720 (3.9 GiB) TX bytes:1581250806 (1.4 GiB)

creating the tunnel on startup:
start() {
    TUNNEL_DEV="tun0"
    REMOTE_ENDPOINT="10.0.5.150"
    LOCAL_ENDPOINT="10.10.138.37"
    TUNNEL_IP="192.168.2.2"
    ROUTES="192.168.1.0/24"
    BIND_DEV="eth0.1"

    insmod -q ip_gre

    ip tunnel add ${TUNNEL_DEV} mode gre remote ${REMOTE_ENDPOINT} local ${LOCAL_ENDPOINT} dev ${BIND_DEV} ttl 255

    # bring the link up
    ip link set ${TUNNEL_DEV} up

    # give it an address
    ip addr add ${TUNNEL_IP} dev ${TUNNEL_DEV}

    # add any required routes
    [ -z "${ROUTES}" ] || for ROUTE in ${ROUTES}
    do
        ip route add ${ROUTE} dev ${TUNNEL_DEV}
    done
}

and the tunnel interface:

tun0 Link encap:UNSPEC HWaddr 0A-0A-8A-25-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:192.168.2.2 P-t-P:192.168.2.2 Mask:255.255.255.255
UP POINTOPOINT RUNNING NOARP MTU:1476 Metric:1
RX packets:77693 errors:0 dropped:0 overruns:0 frame:0
TX packets:99015 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:16751976 (15.9 MiB) TX bytes:72962017 (69.5 MiB)
It's much the same for rt109:
Quote:
eth0.1 Link encap:Ethernet HWaddr 00:18:39:C0:21:71
inet addr:10.0.5.150 Bcast:10.0.5.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3080125 errors:0 dropped:0 overruns:0 frame:0
TX packets:1056976 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1172279848 (1.0 GiB) TX bytes:273073291 (260.4 MiB)

tunnel script:
TUNNEL_DEV="tun0"
LOCAL_ENDPOINT="10.0.5.150"
REMOTE_ENDPOINT="10.10.138.37"
TUNNEL_IP="192.168.1.2"
ROUTES="192.168.1.0/24"
BIND_DEV="eth0.1"

insmod -q ip_gre

ip tunnel add ${TUNNEL_DEV} mode gre remote ${REMOTE_ENDPOINT} local ${LOCAL_ENDPOINT} dev ${BIND_DEV} ttl 255

# bring the link up
ip link set ${TUNNEL_DEV} up

# give it an address
ip addr add ${TUNNEL_IP} dev ${TUNNEL_DEV}

# add any required routes
[ -z "${ROUTES}" ] || for ROUTE in ${ROUTES}
do
ip route add default dev ${TUNNEL_DEV}
done

and the tunnel interface:
tun0 Link encap:UNSPEC HWaddr 0A-00-05-96-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:192.168.1.2 P-t-P:192.168.1.2 Mask:255.255.255.255
UP POINTOPOINT RUNNING NOARP MTU:1440 Metric:1
RX packets:1120795 errors:0 dropped:0 overruns:0 frame:0
TX packets:994482 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:933819889 (890.5 MiB) TX bytes:249454424 (237.8 MiB)
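For reference, the tunnels themselves look sane when I inspect them, roughly like this (assuming the ip applet on the routers supports these subcommands):

ip tunnel show tun0           # tunnel endpoints and TTL
ip -s link show tun0          # per-interface packet and error counters
ip route show | grep tun0     # which prefixes are routed into the tunnel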
Everything worked almost fine, but at some point I noticed packet loss inside the tunnel:

Quote:
root@rt53:/# ping rt109
PING rt109 (192.168.1.1): 56 data bytes
64 bytes from 192.168.1.1: icmp_seq=0 ttl=64 time=3.3 ms
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=4.3 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=2.9 ms
64 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=3.2 ms
64 bytes from 192.168.1.1: icmp_seq=4 ttl=64 time=2.1 ms
64 bytes from 192.168.1.1: icmp_seq=5 ttl=64 time=2.8 ms
64 bytes from 192.168.1.1: icmp_seq=6 ttl=64 time=3.3 ms
64 bytes from 192.168.1.1: icmp_seq=7 ttl=64 time=4.7 ms
64 bytes from 192.168.1.1: icmp_seq=8 ttl=64 time=2.1 ms
64 bytes from 192.168.1.1: icmp_seq=9 ttl=64 time=2.6 ms
64 bytes from 192.168.1.1: icmp_seq=10 ttl=64 time=2.8 ms
64 bytes from 192.168.1.1: icmp_seq=11 ttl=64 time=3.3 ms
64 bytes from 192.168.1.1: icmp_seq=12 ttl=64 time=2.3 ms
64 bytes from 192.168.1.1: icmp_seq=14 ttl=64 time=3.2 ms
64 bytes from 192.168.1.1: icmp_seq=15 ttl=64 time=2.7 ms
64 bytes from 192.168.1.1: icmp_seq=16 ttl=64 time=2.6 ms
64 bytes from 192.168.1.1: icmp_seq=17 ttl=64 time=2.7 ms
64 bytes from 192.168.1.1: icmp_seq=18 ttl=64 time=3.6 ms
64 bytes from 192.168.1.1: icmp_seq=19 ttl=64 time=4.1 ms
64 bytes from 192.168.1.1: icmp_seq=20 ttl=64 time=5.5 ms
64 bytes from 192.168.1.1: icmp_seq=21 ttl=64 time=2.4 ms
64 bytes from 192.168.1.1: icmp_seq=22 ttl=64 time=3.5 ms
64 bytes from 192.168.1.1: icmp_seq=23 ttl=64 time=4.0 ms
64 bytes from 192.168.1.1: icmp_seq=25 ttl=64 time=3.1 ms
64 bytes from 192.168.1.1: icmp_seq=26 ttl=64 time=2.3 ms
64 bytes from 192.168.1.1: icmp_seq=27 ttl=64 time=3.0 ms
64 bytes from 192.168.1.1: icmp_seq=28 ttl=64 time=3.3 ms

--- rt109 ping statistics ---
29 packets transmitted, 27 packets received, 6% packet loss
round-trip min/avg/max = 2.1/3.1/5.5 ms
As you can see, packets 13 and 24 are gone... Where? And all this despite nothing of the sort happening in the "real" network:

Quote:
root@rt53:/# ping 10.0.5.150
PING 10.0.5.150 (10.0.5.150): 56 data bytes
64 bytes from 10.0.5.150: icmp_seq=0 ttl=62 time=3.1 ms
64 bytes from 10.0.5.150: icmp_seq=1 ttl=62 time=2.0 ms
64 bytes from 10.0.5.150: icmp_seq=2 ttl=62 time=3.3 ms
64 bytes from 10.0.5.150: icmp_seq=3 ttl=62 time=2.0 ms
64 bytes from 10.0.5.150: icmp_seq=4 ttl=62 time=2.7 ms
64 bytes from 10.0.5.150: icmp_seq=5 ttl=62 time=3.2 ms
64 bytes from 10.0.5.150: icmp_seq=6 ttl=62 time=3.4 ms
64 bytes from 10.0.5.150: icmp_seq=7 ttl=62 time=3.6 ms
64 bytes from 10.0.5.150: icmp_seq=8 ttl=62 time=6.6 ms
64 bytes from 10.0.5.150: icmp_seq=9 ttl=62 time=3.0 ms
64 bytes from 10.0.5.150: icmp_seq=10 ttl=62 time=4.2 ms
64 bytes from 10.0.5.150: icmp_seq=11 ttl=62 time=3.7 ms
64 bytes from 10.0.5.150: icmp_seq=12 ttl=62 time=3.6 ms
64 bytes from 10.0.5.150: icmp_seq=13 ttl=62 time=3.5 ms
64 bytes from 10.0.5.150: icmp_seq=14 ttl=62 time=3.5 ms
64 bytes from 10.0.5.150: icmp_seq=15 ttl=62 time=8.5 ms
64 bytes from 10.0.5.150: icmp_seq=16 ttl=62 time=2.3 ms
64 bytes from 10.0.5.150: icmp_seq=17 ttl=62 time=2.6 ms
64 bytes from 10.0.5.150: icmp_seq=18 ttl=62 time=3.8 ms
64 bytes from 10.0.5.150: icmp_seq=19 ttl=62 time=2.9 ms
64 bytes from 10.0.5.150: icmp_seq=20 ttl=62 time=3.0 ms
64 bytes from 10.0.5.150: icmp_seq=21 ttl=62 time=3.1 ms
64 bytes from 10.0.5.150: icmp_seq=22 ttl=62 time=2.5 ms
64 bytes from 10.0.5.150: icmp_seq=23 ttl=62 time=3.5 ms
64 bytes from 10.0.5.150: icmp_seq=24 ttl=62 time=3.3 ms
64 bytes from 10.0.5.150: icmp_seq=25 ttl=62 time=4.3 ms
64 bytes from 10.0.5.150: icmp_seq=26 ttl=62 time=1.9 ms
64 bytes from 10.0.5.150: icmp_seq=27 ttl=62 time=2.4 ms

--- 10.0.5.150 ping statistics ---
28 packets transmitted, 28 packets received, 0% packet loss
round-trip min/avg/max = 1.9/3.4/8.5 ms
Packet loss happens only inside the tun0 interface! That's very strange... Up to some point there was no such loss; I think it may have started after I moved to a new release of OpenWrt, but either way I want to understand where the core of the problem lies.

I ran tcpdump for ICMP packets on both sides and saw that packets sent from rt53 to rt109 via tun0 are received by rt109, handled and answered, but the response never makes it back to rt53...
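Roughly, this is how I watched it on each router (assuming tcpdump is installed there; the capture filter is from memory and may need adjusting):

tcpdump -ni tun0 icmp                                  # the pings as seen inside the tunnel
tcpdump -ni eth0.1 ip proto 47 and host 10.0.5.150     # the same traffic as GRE (IP protocol 47) on the underlying interface; on rt109 use host 10.10.138.37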

At first I thought the problem was the MTU; I changed it many times, but then found out that the MTU allowed between my flats on the provider's network is 1500. Moreover, an ICMP echo packet is quite small, much less than 1500 bytes.
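This is roughly how the path MTU can be checked (the -M flag belongs to the full iputils ping; the busybox ping applet may not have it, so take this as a sketch):

ping -M do -s 1472 -c 5 10.0.5.150     # 1472 bytes payload + 28 bytes IP/ICMP header = 1500 on the provider path, with DF set
ping -M do -s 1448 -c 5 192.168.1.1    # 1448 + 28 = 1476, the default GRE tunnel MTU (1500 minus 24 bytes of GRE overhead)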

This problem makes running various services over tun0 very unattractive: SIP-based VoIP works poorly, file transfers are amazingly slow, and so on. If you have any ideas, please help me.

Thanks in advance.
 
Old 12-22-2007, 04:23 AM   #2
mw1j29353
LQ Newbie
 
Registered: Dec 2007
Posts: 2

Original Poster
Rep: Reputation: 0
I uploaded part of this snoop here.
 
  

