LinuxQuestions.org (/questions/)
-   Linux - Networking (https://www.linuxquestions.org/questions/linux-networking-3/)
-   -   Incoming traffic prioritize (https://www.linuxquestions.org/questions/linux-networking-3/incoming-traffic-prioritize-676457/)

dorian33 10-15-2008 05:12 AM

Incoming traffic prioritize
 
I have a router with 4 interfaces.

The 'ip link show' command returns:

1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff

So for outgoing traffic the pfifo_fast queues are used.
It fits my needs - by setting TOS to 8 I can give some outgoing packets the highest priority (which means they will be sent first).

But I am wondering how to ensure that all incoming packets with, let's say, TOS=8 are processed first.

Is it possible?

Additional questions:
What kind of queue (for a given interface) is used for ingress traffic?
What is the command to list these (i.e. ingress) queues?

Any help will be appreciated.

zmanea 10-15-2008 10:41 PM

Take a look at tc.

http://linux.die.net/man/8/tc
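
For example, to see which qdisc is attached to an interface and to replace the default with something configurable (just a sketch, eth0 is a placeholder):

# show the qdisc currently attached to eth0
tc qdisc show dev eth0

# replace the default pfifo_fast with an explicit 3-band prio qdisc
tc qdisc add dev eth0 root handle 1: prio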

dorian33 10-16-2008 02:28 AM

I've read the tc and related manuals & docs many times.
All of them touch only a little on the matter of incoming traffic (RED, GRED, etc.),
but that is not what I am looking for.

My basic question was: what kind of queue is used for ingress traffic.

I mean: which is the default queue when no tc is used.

My current impression (after digging a lot) is that there is NO ingress queue.
But I cannot find a CLEAR statement such as "there is no ingress queue" or "all incoming packets are processed immediately".

plpl303 10-19-2008 04:00 PM

If I recall correctly, there is no queue on incoming traffic since you can't shape an interface you don't own (i.e., the remote system can throttle down the amount of data it's sending, but the local system can't make it do so).

But if your router is forwarding packets between interfaces, it could give higher priority on the egress side to those packets that you consider more important. Will this do what you want?

Something like this:
http://lartc.org/howto/lartc.qdisc.h....QDISC.EXPLAIN
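
For instance, something along these lines on the egress side (an untested sketch; it assumes eth1 is where the important traffic leaves the router and that it carries the "minimize delay" TOS bit, 0x10):

# 3-band prio qdisc on the egress interface
tc qdisc add dev eth1 root handle 1: prio

# steer "minimize delay" (TOS 0x10) packets into the highest-priority band
tc filter add dev eth1 parent 1: protocol ip prio 1 u32 \
    match ip tos 0x10 0xff flowid 1:1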

dorian33 10-25-2008 05:47 AM

Thanks for the reply. It looks like you are right.

On the other hand, since I've programmed a number of specific embedded communications systems (for real-time devices with no OS platform), I know that the ratio of CPU time consumed for receiving incoming data to the time spent interpreting that data can be around 1:10 or even 1:100.

Therefore I am very surprised that there is no default ingress queue in Linux.

It means that if I have a slow machine with a fast interface, and the incoming traffic is irregular (momentarily very high but normally not), packets have to be dropped since the CPU is not able to process all of them.

Of course, due to the nature of TCP, the packets that were not accepted will be retransmitted, but to me it is senseless to work that way. First of all, it is a waste of bandwidth.

And as for UDP packets - they will be irretrievably lost.

Therefore I posted the question.


BTW: you wrote 'there is no queue on incoming traffic since you can't shape an interface'.
I think you mean 'you can't shape an interface since there is no queue on incoming traffic'.
Right?

plpl303 10-25-2008 10:38 AM

Quote:

Originally Posted by dorian33 (Post 3321548)
BTW: you wrote 'there is no queue on incoming traffic since you can't shape an interface'.
I think you mean 'you can't shape an interface since there is no queue on incoming traffic'.
Right?

Well, I could have phrased that better. The incoming packets are indeed queued, but the only way to shape them would be to tell the remote host's interface to stop sending them. Since that's not directly possible (the local kernel can't set an egress queue policy on a remote system's kernel, of course), the only alternative is to drop the packets and rely, as you mentioned, on TCP's congestion detection algorithm - which is an issue for connectionless protocols.
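
The closest thing to "shaping" on ingress is policing, i.e. dropping above a configured rate. A sketch only - the rate and device are made up:

# attach the ingress qdisc and police all IP traffic to 1 mbit,
# dropping everything above that rate
tc qdisc add dev eth0 ingress
tc filter add dev eth0 parent ffff: protocol ip prio 50 u32 \
    match ip src 0.0.0.0/0 police rate 1mbit burst 10k drop flowid :1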

Quote:

It means that if I have a slow machine with a fast interface, and the incoming traffic is irregular (momentarily very high but normally not), packets have to be dropped since the CPU is not able to process all of them.
But if you are doing traffic shaping on the incoming traffic, that will only increase the CPU's load further -- it will need to receive the packets, move them from the NIC to a buffer, and then analyze the packets to decide whether to drop or pass them per the tc policy. If the CPU cannot even keep up with dequeuing packets from the NIC and making routing decisions, it will not have any cycles left over for priority analysis.

However, /proc/sys/net/core/netdev_max_backlog tells how many packets will be queued if a burst arrives. The default is 1000:

http://www.linuxinsight.com/proc_sys...x_backlog.html

So this should address the "bursty traffic" scenario.
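
You can check and change it like this (the value 2000 is just an example):

# current backlog limit, in packets
cat /proc/sys/net/core/netdev_max_backlog

# raise it to better absorb bursts (example value)
sysctl -w net.core.netdev_max_backlog=2000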

dorian33 10-25-2008 01:39 PM

Hi plpl303

Please note that for me the term 'traffic shaping' is not the same as 'traffic prioritization'.

The first concerns bandwidth control (usually achieved by a sophisticated way of dropping packets), which additionally includes 'traffic prioritization' possibilities.

'Traffic prioritization' is only a reordering of packets.

This is done in a "natural way" by the pfifo_fast queue for outgoing traffic.

The subject of my post was 'Incoming traffic prioritize' since, according to my observations, I sometimes have the following situation on my router.
One of the interfaces receives a lot of packets belonging to a kind of local bulk transfer (the packets are probably queued somewhere, since they are not retransmitted and the transfer is OK) while another interface gets (in the meantime) only one small packet which should be delivered as soon as possible (its TOS is set to minimum delay).
This way the incoming buffer is full of "less important" packets with one "urgent" packet somewhere between them.

And there is no chance to force the kernel to handle such an urgent packet first.
As a result I sometimes have a problem with VoIP (audio dropouts).

Therefore I am looking for a way to use something like pfifo_fast queue on input.

I can imagine the following scenario for such a queue:
1. An incoming buffer which is large enough (i.e. I can change its size) to absorb all the "momentary high traffic".
2. The incoming buffer is a pfifo_fast queue rather than a plain FIFO (which, as I understand, is what is implemented now).
3. When a packet arrives it is PARTIALLY analyzed, i.e. only the TOS field is inspected, for the purpose of queuing it into the proper band.

I believe this method would be fast enough (almost as fast as raw FIFO buffering) to catch all the incoming traffic, and what is more, it would provide the most important feature for me: incoming traffic prioritization.

I hope you see now what was the intention of all my queries.

Regarding netdev_max_backlog: what does the term 'maximum number of packets' used in the definition mean? A clear definition (to me) should use 'bytes' rather than 'packets', since packets are not fixed-length.
Does it mean the queue uses netdev_max_backlog x MTU bytes?

plpl303 10-25-2008 11:06 PM

Ah, it makes sense! I see what you are saying! (Sorry I didn't see it sooner... in retrospect, I should have understood it before).

There *is* an ingress queueing discipline accessible via tc [1]. And iptables can be used to mark TOS fields [2] if they aren't already so marked (but I gather that your application does that already).

But it looks like pfifo_fast already classes "minimum delay" packets into band 0 so they always get dequeued first [3]. This would seem to do what you want. I would surmise that it works on incoming queues as well as outgoing queues.


[1] http://lartc.org/howto/lartc.adv-qdisc.ingress.html

[2] http://lartc.org/howto/lartc.cookboo...tive-prio.html

[3] http://lartc.org/howto/lartc.qdisc.c...ss.html#AEN659
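
Putting [1] and [2] together, roughly (untested; the SSH port is just an example of traffic worth marking):

# attach the ingress qdisc so incoming packets can be inspected [1]
tc qdisc add dev eth0 ingress

# mark the TOS field on interactive traffic (e.g. SSH) if the
# application doesn't set it already [2]
iptables -t mangle -A PREROUTING -p tcp --sport 22 -j TOS --set-tos Minimize-Delay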

dorian33 10-26-2008 05:44 AM

But according to http://lartc.org/howto/lartc.adv-qdisc.ingress.html:
Quote:

Each interface however can also have an ingress qdisc
To me the statement "can also have" doesn't mean "always has".

The second confusing thing is that I need to "attach" it with the 'tc qdisc add dev eth0 ingress' command before I can see it.
'add' is not the same as 'attach', so to me this command CREATES something extra rather than just making something visible.

The question is: if the interface "can have" an ingress qdisc which I need to "attach", does it mean that without issuing the command the interface has no ingress qdisc?

And why, without any tc usage, does the 'ip link show' command report only egress qdiscs? Does it mean the ingress qdisc doesn't exist, or is it just not shown?
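
For example, this is how I understand the check would go (my guess, untested):

# before: only the egress qdisc (pfifo_fast) is listed
tc qdisc show dev eth0

# attach the ingress qdisc ...
tc qdisc add dev eth0 ingress

# ... and only now should an extra "qdisc ingress ffff:" line appear
tc qdisc show dev eth0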


The next step of my problem is the fact that my router is equipped with 4 interfaces.
Assuming that I am able to prioritize incoming packets on each interface separately, the BIG QUESTION is:
how to prioritize packets when a bulk transfer comes in on one interface and the urgent packet arrives on another?

Is IMQ the only solution? Or can the ingress qdisc perhaps be common to all the interfaces?
Where can I find information on how several incoming queues are handled?
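
From what I've read, the IMQ setup would look something like this (untested; it requires the IMQ kernel patch and the iptables IMQ target):

# funnel incoming packets from all four interfaces into one virtual device
iptables -t mangle -A PREROUTING -i eth0 -j IMQ --todev 0
iptables -t mangle -A PREROUTING -i eth1 -j IMQ --todev 0
iptables -t mangle -A PREROUTING -i eth2 -j IMQ --todev 0
iptables -t mangle -A PREROUTING -i eth3 -j IMQ --todev 0

# then prioritize on the common imq0 device as if it were an egress interface
ip link set imq0 up
tc qdisc add dev imq0 root handle 1: prio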

