LinuxQuestions.org > Forums > Linux Forums > Linux - Networking
Linux - Networking: This forum is for any issue related to networks or networking. Routing, network cards, OSI, etc. Anything is fair game.
Old 10-15-2008, 06:12 AM   #1
dorian33
Member
 
Registered: Jan 2003
Location: Poland, Warsaw
Distribution: LFS, Gentoo
Posts: 587

Rep: Reputation: 32
Incoming traffic prioritize


I have a router with 4 interfaces.

The 'ip link show' command returns:

1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff

So for outgoing traffic the pfifo_fast queues are used.
This suits my needs: by setting TOS to 8 I give certain outgoing packets the highest priority, which means they are sent first.
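A quick way to check which egress qdisc is active (a read-only sketch; the device name is taken from the listing above):

```shell
# List the qdisc attached to eth0; with no tc configuration this
# typically shows the default pfifo_fast and its priomap.
tc qdisc show dev eth0

# The -s flag adds statistics (sent bytes/packets, drops, overlimits).
tc -s qdisc show dev eth0
```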

But how can I ensure that all incoming packets with, let's say, TOS=8 are processed first?

Is it possible?

Additional questions:
What kind of queue (for a given interface) is used for ingress traffic?
What is the command to list these (i.e. ingress) queues?

Any help will be appreciated.
 
Old 10-15-2008, 11:41 PM   #2
zmanea
Member
 
Registered: Sep 2003
Location: Colorado
Posts: 85

Rep: Reputation: 15
Take a look at tc.

http://linux.die.net/man/8/tc
 
Old 10-16-2008, 03:28 AM   #3
dorian33
Member
 
Registered: Jan 2003
Location: Poland, Warsaw
Distribution: LFS, Gentoo
Posts: 587

Original Poster
Rep: Reputation: 32
I've read the tc manual and related docs many times.
All of them touch only briefly on the matter of incoming traffic (RED, GRED, etc.).
But that is not what I am looking for.

My basic question was: what kind of queue is used for ingress traffic?

I mean: which queue is the default when no tc is used?

My current understanding (after a lot of digging) is that there is NO ingress queue.
But I cannot find a CLEAR statement such as "there is no ingress queue" or "all incoming packets are processed immediately".
 
Old 10-19-2008, 05:00 PM   #4
plpl303
Member
 
Registered: Oct 2008
Posts: 31

Rep: Reputation: 15
If I recall correctly, there is no queue on incoming traffic, since you can't shape an interface you don't own (i.e., the remote system can throttle down the amount of data it sends, but the local system can't make it do so).

But if your router is forwarding packets between interfaces, it could give higher priority on the egress side to those packets that you consider more important. Will this do what you want?
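As a sketch of that idea (the interface name eth1 and TOS value 0x10 "minimize delay" are assumptions; note the default pfifo_fast priomap already does something similar):

```shell
# Attach a 3-band prio qdisc as root on the router's egress interface.
tc qdisc add dev eth1 root handle 1: prio

# Classify forwarded packets with TOS "minimize delay" (0x10) into
# band 1:1, which is always dequeued before bands 1:2 and 1:3.
tc filter add dev eth1 parent 1:0 protocol ip prio 1 \
    u32 match ip tos 0x10 0xff flowid 1:1
```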

Something like this:
http://lartc.org/howto/lartc.qdisc.h....QDISC.EXPLAIN
 
Old 10-25-2008, 06:47 AM   #5
dorian33
Member
 
Registered: Jan 2003
Location: Poland, Warsaw
Distribution: LFS, Gentoo
Posts: 587

Original Poster
Rep: Reputation: 32
Thanks for the reply. It looks like you are right.

On the other hand, since I have programmed a number of specific embedded communication systems (real-time devices with no OS platform), I know that the ratio of CPU time spent receiving incoming data to the time spent interpreting that data can be around 1:10 or even 1:100.

Therefore I am very surprised that there is no default ingress queue in Linux.

It means that if I have a slow machine with a fast interface, and the incoming traffic is irregular (momentarily very high, but normally not), packets have to be dropped because the CPU is not able to process all of them.

Of course, due to the nature of TCP, packets that are not accepted will be retransmitted, but to me it is senseless to work that way. First of all, it is a waste of bandwidth.

And as for UDP packets: they will be irretrievably lost.

Therefore I posted the question.


BTW: you wrote 'there is no queue on incoming traffic since you can't shape an interface'.
I think you mean 'you can't shape an interface since there is no queue on incoming traffic'.
Right?
 
Old 10-25-2008, 11:38 AM   #6
plpl303
Member
 
Registered: Oct 2008
Posts: 31

Rep: Reputation: 15
Quote:
Originally Posted by dorian33 View Post
BTW: you wrote 'there is no queue on incoming traffic since you can't shape an interface'.
I think you mean 'you can't shape an interface since there is no queue on incoming traffic'.
Right?
Well, I could have phrased that better. The incoming packets are indeed queued, but the only way to shape them would be to tell the remote host's interface to stop sending them. Since that's not directly possible (the local kernel can't set an egress queue policy on a remote system's kernel, of course), the only alternative is to drop the packets and rely, as you mentioned, on TCP's congestion detection algorithm. Which is an issue for connectionless protocols.

Quote:
It means that if I have a slow machine but with fast interface and the incoming traffic is irregular (momentary is very high but normally is not) the packets have to be dropped since CPU is not able to process all of them.
But if you are doing traffic shaping on the incoming traffic, that will only increase the CPU's load more -- it will need to receive the packets, move them from the NIC to a buffer, and then analyze the packets to decide whether to drop or pass them per the tc policy. If the CPU cannot even keep up with dequeuing the packets from the NIC and making routing decisions, it will not have any cycles left over for priority analysis.

However, /proc/sys/net/core/netdev_max_backlog tells how many packets will be queued if a burst arrives. The default is 1000:

http://www.linuxinsight.com/proc_sys...x_backlog.html

So this should address the "bursty traffic" scenario.
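For reference, the setting can be inspected and raised like this (a sketch; the value 2500 is only an illustration, and the limit is counted in packets, not bytes):

```shell
# Current backlog limit for packets arriving faster than the kernel
# can process them (default 1000 on most systems).
cat /proc/sys/net/core/netdev_max_backlog

# Raise the limit to absorb larger bursts (requires root).
sysctl -w net.core.netdev_max_backlog=2500
```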
 
Old 10-25-2008, 02:39 PM   #7
dorian33
Member
 
Registered: Jan 2003
Location: Poland, Warsaw
Distribution: LFS, Gentoo
Posts: 587

Original Poster
Rep: Reputation: 32
Hi plpl303

Please note that for me the term 'traffic shaping' is not the same as 'traffic prioritization'.

The first concerns bandwidth control (usually achieved by a sophisticated way of dropping packets), and it additionally includes prioritization capabilities.

'Traffic prioritization' is only a change of packet order.

This is done in a "natural way" by the pfifo_fast queue for outgoing traffic.

The subject of my post was 'Incoming traffic prioritize' because, from my observations, I sometimes have the following situation on my router.
One interface receives a lot of packets belonging to some local bulk transfer (the packets are probably queued somewhere, since they are not retransmitted and the transfer is OK), while in the meantime another interface receives just one small packet which should be delivered as soon as possible (its TOS is set to minimum delay).
This way the incoming buffer is full of "less important" packets, with one "urgent" packet somewhere between them.

And there is no way to force the kernel to handle such an urgent packet first.
As a result I sometimes have a problem with VoIP (audio dropouts).

Therefore I am looking for a way to use something like a pfifo_fast queue on input.

I can imagine the following scenario for such a queue:
1. The incoming buffer is large enough (i.e. its size can be changed) to absorb all of the "momentary high traffic".
2. The incoming buffer is a pfifo_fast queue rather than a plain FIFO (which, as I understand it, is what is currently implemented).
3. When a packet arrives it is only PARTIALLY analyzed, i.e. only the TOS field is inspected, for the purpose of placing it in the proper band.

I believe this method would be fast enough (almost as fast as raw FIFO buffering) to catch all the incoming traffic and, more importantly, it would provide the feature that matters most to me: incoming traffic prioritization.

I hope you can now see the intention behind all my queries.

Regarding netdev_max_backlog: what exactly does the term 'maximum number of packets' in the definition mean? A clear definition (to me) would use bytes rather than packets, since packets are not of fixed length.
Does it mean the queue can use up to netdev_max_backlog × MTU bytes?
 
Old 10-26-2008, 12:06 AM   #8
plpl303
Member
 
Registered: Oct 2008
Posts: 31

Rep: Reputation: 15
Ah, that makes sense! I see what you are saying! (Sorry I didn't see it sooner; in retrospect, I should have understood it earlier.)

There *is* an ingress queueing discipline accessible via tc [1]. And iptables can be used to mark TOS fields [2] if they aren't already so marked (but I gather that your application does that already).

But it looks like pfifo_fast already classes "minimum delay" packets into band 0 so they always get dequeued first [3]. This would seem to do what you want. I would surmise that it works on incoming queues as well as outgoing queues.


[1] http://lartc.org/howto/lartc.adv-qdisc.ingress.html

[2] http://lartc.org/howto/lartc.cookboo...tive-prio.html

[3] http://lartc.org/howto/lartc.qdisc.c...ss.html#AEN659
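The band selection described in [3] can be illustrated with pfifo_fast's default priomap (a sketch; the priomap values and the priority 6 for "minimize delay" are taken from the tc-pfifo_fast/LARTC documentation):

```shell
#!/bin/sh
# pfifo_fast's default priomap: 16 entries indexed by the Linux packet
# priority that the kernel derives from the TOS byte. Band 0 is always
# emptied before band 1, and band 1 before band 2.
set -- 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1

# TOS 0x10 ("minimize delay") maps to Linux priority 6 ("interactive");
# look up its band by skipping the first six priomap entries.
prio=6
shift "$prio"
echo "minimum-delay packets go to band $1"
```

Running it prints `minimum-delay packets go to band 0`, matching the statement above that minimum-delay traffic always gets dequeued first.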
 
Old 10-26-2008, 06:44 AM   #9
dorian33
Member
 
Registered: Jan 2003
Location: Poland, Warsaw
Distribution: LFS, Gentoo
Posts: 587

Original Poster
Rep: Reputation: 32
But according to http://lartc.org/howto/lartc.adv-qdisc.ingress.html:
Quote:
Each interface however can also have an ingress qdisc
For me the statement "can also have" does not mean "always has".

The second confusing thing is that I have to "attach" it with the command 'tc qdisc add dev eth0 ingress' in order to see it.
'add' is not the same as 'attach', so to me this command CREATES something extra rather than merely making it visible.

The question is: if the interface "can have" an ingress qdisc which I need to "attach", does that mean that without issuing the command the interface has no ingress qdisc?

And why, without any tc usage, does the 'ip link show' command report only egress qdiscs? Does that mean the ingress qdisc doesn't exist, or is it just not shown?
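The attach-then-list behaviour can be observed directly (a sketch; requires root and the tc binary):

```shell
# Before attaching: only the egress qdisc (e.g. pfifo_fast) is listed.
tc qdisc show dev eth0

# Attach the ingress qdisc; it always gets the fixed handle ffff:.
tc qdisc add dev eth0 ingress

# After attaching: the listing now also shows "qdisc ingress ffff:".
tc qdisc show dev eth0
```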


The next step in solving my problem is the fact that my router has 4 interfaces.
Assuming that I am able to prioritize incoming packets on each interface separately, the BIG QUESTION is:
how do I prioritize packets when a bulk transfer arrives on one interface and the urgent packet arrives on another?

Is IMQ the only solution? Or can one ingress qdisc be shared by all the interfaces?
Where can I find information on how several incoming queues are handled?
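For the multi-interface case, one possibility (an untested sketch; IMQ needs an out-of-tree kernel patch, while the similar ifb device ships with recent 2.6 kernels) would be to redirect ingress traffic from all interfaces into one intermediate device and prioritize there:

```shell
# Create and bring up an intermediate functional block (ifb) device.
modprobe ifb numifbs=1
ip link set ifb0 up

# Redirect all ingress traffic from each physical interface to ifb0.
for dev in eth0 eth1 eth2 eth3; do
    tc qdisc add dev "$dev" ingress
    tc filter add dev "$dev" parent ffff: protocol ip u32 \
        match u32 0 0 action mirred egress redirect dev ifb0
done

# A single prio qdisc on ifb0 then reorders packets from all four
# interfaces, e.g. sending TOS "minimize delay" to the first band.
tc qdisc add dev ifb0 root handle 1: prio
tc filter add dev ifb0 parent 1:0 protocol ip prio 1 \
    u32 match ip tos 0x10 0xff flowid 1:1
```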
 