Linux - Networking
This forum is for any issue related to networks or networking. Routing, network cards, OSI, etc. Anything is fair game.
Hi. I am relatively new to Linux, but I am currently working on my thesis, which requires some measurements and experiments on Linux performance while using hundreds (even thousands) of virtual interfaces. My current task is to find out... "1. How long does it take the kernel to (1) start and (2) shut down a virtual interface? For that purpose, I would write a script that sequentially starts and shuts down, say, 1000 virtual interfaces and observe the start/shutdown durations. Among these 1000 samples, I'd calculate the min/max/avg/stddev. I'd repeat the same experiment, but instead of spawning interfaces sequentially, this time I'd spawn them using 2, 3, 4, and so on threads. This way, I'd be able to observe the kernel's performance under concurrent load."
I wrote a simple script that starts N interfaces and assigns an IP address to each:
Code:
echo "Enter number of interfaces: "
read int
a=8 # I use the 10.0.0.0/20 network, but here I start from 10.0.8.0
b=2 # The physical interface has address 10.0.8.1, so the virtual ones start from 10.0.8.2
for (( i=1; i<=int; i++ ))
do
    ifconfig eth2:$i 10.0.$a.$b/20
    if [ $b -eq 255 ] # increment the third octet of the IP address when the fourth reaches 255
    then
        ((a++))
        b=0
    else
        ((b++))
    fi
done
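For question 1 below (per-interface start durations), one possible approach is to take a nanosecond timestamp before and after each call and collect simple statistics. The sketch below uses `true` as a stand-in for the `ifconfig` line so it runs without root; substitute the real command to take actual measurements.

```shell
#!/bin/bash
# Sketch: time each "interface start" individually using nanosecond
# timestamps from date(1). `true` stands in for the real ifconfig
# call so this runs without root.
int=100
total=0
min=-1
max=0
for (( i=1; i<=int; i++ ))
do
    start=$(date +%s%N)      # nanoseconds since the epoch
    true                     # stand-in for: ifconfig eth2:$i 10.0.$a.$b/20
    end=$(date +%s%N)
    dur=$(( end - start ))   # duration of this iteration, in ns
    total=$(( total + dur ))
    (( dur > max )) && max=$dur
    (( min < 0 || dur < min )) && min=$dur
done
echo "samples=$int min=${min}ns max=${max}ns avg=$(( total / int ))ns"
```

Note that each `date` call forks a process of its own, so the measured floor is the cost of running `date`, not zero; on bash 5 and later the builtin `$EPOCHREALTIME` variable gives a microsecond timestamp without forking.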
A similar script stops all the interfaces when needed, with the following line inside the loop:
Code:
ifconfig eth2:$i down
So, my questions are:
1. How to measure this:
Quote:
sequentially starts and shutdowns, say, 1000 virtual interfaces and observe the start/shutdown durations
2. How to do this part:
Quote:
but instead of sequentially spawning interfaces, this time I'd spawn them using 2, 3, 4, and so on threads
3. What tools, techniques to use to observe/measure this:
Quote:
...to observe the kernel's performance under concurrent load.
Remember a script is interpreted, and just running the commands takes time. So to get more sensible numbers, you need to run your script again with an ifconfig command that does nothing. You could change the ifconfig command to "ifconfig >/dev/null" and compare the results.
You can't use proper threads in a script. But adding & after the ifconfig command will start it in the background. After the loop is finished, add a "wait" command so the script waits until all are done.
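A minimal sketch of the background-jobs approach, with `sleep 0.2` standing in for the ifconfig call so the effect of running the commands concurrently is easy to see:

```shell
#!/bin/bash
# Sketch: start N commands concurrently with & and wait for all of
# them. `sleep 0.2` stands in for the real ifconfig call.
int=10
start=$(date +%s%N)
for (( i=1; i<=int; i++ ))
do
    sleep 0.2 &              # background job, like: ifconfig eth2:$i ... &
done
wait                         # return only when every background job is done
end=$(date +%s%N)
elapsed=$(( (end - start) / 1000000 ))
echo "$int concurrent jobs finished in ${elapsed}ms"
```

Run sequentially, the ten sleeps would take about 2000 ms; with & and wait the loop finishes in roughly one sleep period plus fork overhead. Keep in mind that each & forks a separate process, not a thread, which is why proper threads aren't available in a shell script.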
To measure things, you can use the command "time". If you want to time many commands, you can put ( ) around a section to measure it as a whole.
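For example, `time` accepts a ( ... ) subshell, so a whole loop can be measured as one unit. A small sketch, again with `true` standing in for ifconfig; note that time's report goes to stderr, and the bash variable TIMEFORMAT controls its format:

```shell
#!/bin/bash
# Sketch: time a whole ( ... ) subshell as one unit. time's report
# goes to stderr; the { ...; } 2>&1 trick captures it into a
# variable. TIMEFORMAT is a bash builtin variable; %3R is elapsed
# wall-clock time with 3 decimal places.
TIMEFORMAT='loop of 1000 iterations: %3R s elapsed'
report=$( { time (
    for (( i=1; i<=1000; i++ ))
    do
        true                 # stand-in for: ifconfig eth2:$i ...
    done
) ; } 2>&1 )
echo "$report"
```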
Quote:
Remember a script is interpreted, and running commands take time. So to get more sensible numbers, you need to run your script again where the ifconfig command does nothing. You could change the ifconfig command to "ifconfig >/dev/null" and compare the results.
Thank you. Yes, this was going to be my next question: whether it makes sense to measure how long the script takes simply to iterate N times, and subtract that from the time needed to create/delete N virtual interfaces.
My other question is how precise the time command is. It reports time in milliseconds; if it gives a result of 0.001, is that literally 1 millisecond, or is it a rounded value?
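On precision: bash's builtin `time` rounds its report to the number of decimal places TIMEFORMAT asks for, and that precision is capped at 3, so "0.001" means "rounded to the nearest millisecond", not an exact reading. For finer wall-clock resolution you can read nanosecond timestamps yourself, e.g.:

```shell
#!/bin/bash
# Sketch: read raw nanosecond timestamps instead of relying on
# time's millisecond display. The underlying clock is much finer
# than what time prints.
start=$(date +%s%N)          # nanoseconds since the epoch
true                         # the command being measured
end=$(date +%s%N)
dur=$(( end - start ))
echo "raw duration: ${dur} ns"
echo "rounded to ms: $(( (dur + 500000) / 1000000 )) ms"
```

The nanosecond field is the clock's resolution, not its accuracy; the `date` fork itself costs on the order of a millisecond, so for sub-millisecond measurements it is better to time a large batch and divide, or (on bash 5+) use the `$EPOCHREALTIME` builtin variable.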