I am working as a TPL on a large network upgrade project.
To ensure that all these upgrades will improve the network, I want to baseline before and after, to use as hard evidence of our success (or failure).
I feel the best way to baseline would be to use Linux.
This is the chain of events I would like to script:
1) Create series of random data test files (different sizes)
2) Mount remote SMB volume
3) Copy files to mount point & copy set of files from mount point
4) Output results and time to complete (per file and total) to CSV file
5) Delete local test files
6) Create series of random data test files (different sizes)
7) Copy test files using HTTP to server
8) Copy test files using HTTP from server
9) Output results and time to complete (per file and total) to CSV file
10) Use iperf to check throughput, errors, etc.
11) Output results and time to complete (per file and total) to CSV file
I would like to create a script that is idiot proof (just run the script in the background)
I understand it is a lot to ask (especially as I have almost no scripting experience), but if anyone has a basic idea on how to create this it would be greatly appreciated.
I am pretty sure I can find the required individual commands, but tying it all together into a script is my challenge.
Any help / suggestions will be greatly appreciated.
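The numbered steps above can be sketched as a small driver script. This is only a rough sketch under assumptions: a local scratch directory stands in for the SMB mount point, and the file sizes are examples.

```shell
#!/bin/bash
# Sketch of steps 1-5: generate random files, time copies, log CSV, clean up.
# A local directory stands in for the SMB mount point here (an assumption).
WORKDIR=$(mktemp -d)
DEST=$(mktemp -d)       # stand-in for /mnt/cifs
CSV="$WORKDIR/results.csv"
echo "test,file,seconds" > "$CSV"

# 1) create a series of random data test files (different sizes, in KB)
for kb in 2 4 10; do
    dd if=/dev/urandom of="$WORKDIR/file_${kb}K.test" bs=1K count="$kb" 2>/dev/null
done

# 3+4) copy each file to the "mount point", timing per file into the CSV
for f in "$WORKDIR"/file_*.test; do
    start=$(date +%s.%N)
    cp "$f" "$DEST/"
    end=$(date +%s.%N)
    echo "copy_to_share,$(basename "$f"),$(awk -v a="$start" -v b="$end" 'BEGIN{printf "%.3f", b-a}')" >> "$CSV"
done

cat "$CSV"
# 5) delete the local test files again
rm -rf "$WORKDIR" "$DEST"
```

The HTTP and iperf steps would follow the same pattern: run the transfer, capture the elapsed time, append a CSV row.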
Quote:
Originally Posted by jwouter
I am working as a TPL on a Large network upgrade project.
Technical Project Leader? Or Third Party Liability? ;-p
Quote:
Originally Posted by jwouter
To ensure that all these upgrade will improve the network I want to baseline before and after to use as hard evidence of our success (or failure)
It would make sense to know what kind of upgrades are being made and what their projected (area of) improvement would be. Network throughput measurement for a single host is just one aspect. And unless network performance focuses solely on (or should be measured solely for) specific protocols, I'd do away with the Samba share and HTTP tests, as iPerf is quite capable itself if configured right.
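For example, iperf on its own can exercise the link in both directions. A sketch using iperf2 syntax, with a placeholder host name (both the host and the installed iperf version are assumptions):

```shell
# on the remote machine (run once): iperf -s
# from the test host: 30-second TCP test, 4 parallel streams, 5-second reports
if command -v iperf >/dev/null 2>&1; then
    # `|| true` tolerates failure, since server.example.net is a placeholder
    iperf -c server.example.net -t 30 -P 4 -i 5 || true      # client -> server
    iperf -c server.example.net -t 30 -P 4 -i 5 -r || true   # -r: also server -> client
else
    echo "iperf not installed on this host" >&2
fi
```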
Quote:
Originally Posted by jwouter
(..) tying it al together into a script is my challenge.
Then it would help if you post the individual commands so we actually have something to tie together.
Quote:
Originally Posted by jwouter
I would like to create a script that is idiot proof
Apart from some pitfalls, error handling can often be written out as a set of decision trees:
Code:
[ 00: network connectivity ] - no: display diagnostics, exit. <-------
| / |
| / |
V / |
[ 01: mount remote share ] / unknown (like remote iperf daemon unavailable)
\ /
\ / but result below expected average
yes: test throughput-- OK <
| and avg on par with or above expectations
|
V
[ 02: next test ]
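In bash, that decision tree maps naturally onto exit-status checks. A minimal sketch; the host names, share path, and the 80 Mbit/s threshold are all made-up placeholders:

```shell
#!/bin/bash
# Sketch of the decision tree above; hosts, paths, and the throughput
# threshold are hypothetical placeholders.
fail() { echo "DIAG: $*" >&2; exit 1; }

baseline() {
    # 00: network connectivity - no: display diagnostics, exit
    ping -c 1 -W 2 gateway.example.net >/dev/null 2>&1 \
        || fail "no network connectivity"
    # 01: mount remote share
    mountpoint -q /mnt/cifs \
        || mount -t cifs //server/share /mnt/cifs \
        || fail "could not mount remote share"
    # throughput test: iperf2 CSV mode (-y C); the last field is bits/sec
    avg=$(iperf -c server.example.net -t 10 -y C | awk -F, '{print $NF}')
    if [ "${avg:-0}" -ge 80000000 ]; then
        echo "02: next test"
    else
        echo "result below expected average - diagnose and retry" >&2
    fi
}
# baseline   # uncomment to run against real hosts
```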
If you're going to generate your own test files, do that once upfront and store them somewhere handy.
Don't do it during the test; that eliminates a variable. It also means you can reuse them later for other test situations.
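Generating the file set once upfront might look like this (the directory and sizes are assumptions):

```shell
# one-time generation of reusable random test files
TESTDIR=${TESTDIR:-$HOME/nettest}
mkdir -p "$TESTDIR"
for kb in 2 4 10 20 100 200 500; do
    f="$TESTDIR/file_${kb}K.test"
    # only create the file if it is missing, so reruns reuse the same data
    [ -f "$f" ] || dd if=/dev/urandom of="$f" bs=1K count="$kb" 2>/dev/null
done
ls -l "$TESTDIR"
```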
What's the problem? Are you worried about testing the HTTP server? I thought the goal was to test network transfer performance. Bonnie++ transfers lots of small files.
I have no idea what a TPL is, and even performing a web search doesn't give something too conclusive.
However, you're upgrading the network and you wish to stage tests to baseline the current network and then use it to check as upgrades occur. You want to create an idiot proof script and just run it in the background. The script is going to generate random test files and solely use HTTP to pass those file to and from the server.
Q: Are you OK with regenerating test data files each time you do this? Or would the benchmark be better measured using the same data each test iteration? Or random test data with uniform sizes?
Q: Does this network pass other data besides HTTP? Most networks do, hence why I'm asking, and thinking that in a complex network, one piece of equipment, configured wrong, may hinder the passage of one protocol and let another protocol pass fine. Something to think about.
Suggestions for learning how to script:
- Read up on BASH scripting, http://tldp.org/LDP/abs/html/
- Put debug into your scripts to assist you in diagnosing it, set -xv is a good starting step
- As in any programming, check return codes, check variables, and debug-output this information to a file.
Segment functionality - you can have functions in a shell script, and you can invoke other scripts from within your
script. Create extra variables to debug interim results until you've completed a section.
- Test your scripts on other users' logins and on other systems. Learn how to deal with variety in environments, or figure
out how to establish a common environment, if you require it on those different systems.
- Nothing really runs "in the background"; it will merely be a different process which has the same priority as your shell,
unless you play around with process priority (e.g. with nice or renice).
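A tiny illustration of a few of those tips together (tracing, return-code checks, a function, a debug log); the names are arbitrary:

```shell
#!/bin/bash
# demo of the debugging tips above: tracing, return-code checks, functions
set -x                      # or `set -xv` for even more verbose tracing
LOG=/tmp/baseline_debug.log

run_step() {                # wrap each step so failures are logged uniformly
    "$@" >>"$LOG" 2>&1
    rc=$?
    echo "step '$*' returned $rc" >>"$LOG"
    return $rc
}

run_step true  && echo "first step ok"
run_step false || echo "second step failed, see $LOG"
set +x
# to run detached, something like: nohup ./baseline.sh &  (and `nice -n 10`
# in front of it to lower its priority)
```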
I think it's a good project which will result in you learning a great deal about scripting.
When you have sections coded and are having trouble, ask further questions. Right now, no one can really give you good assistance because the problem is large in scope, the specifics of the network under test, the system names, IP address scheme, and so forth, are not known to people on this forum, so it's probably better to code some and when you run into problems, ask more specific questions.
Thanks a lot for all the tips and suggestions, it is greatly appreciated.
First of all, TPL stands for Technical Project Lead; I think the term is used in both PRINCE2 and PMP certifications.
It basically means the person responsible for the technical delivery of the project.
Where the PM will get all the glory, the TPL will be sweating over configuration, design, scripts, etc.
A little on the upgrade itself:
We are currently running a medium/large network: 5 sites, 5000 users. The uplinks are 1 Gb, user access is 100 Mb.
We are upgrading all equipment to support 10 Gb uplinks and 1 Gb user access. We are also introducing a lot of new security-related products like Cisco ISE/NAC, Cisco IPS, and Cisco ASA for VPN access.
Currently a lot of the switch configurations are inconsistent (especially in port speed/duplex settings), leading to performance issues.
This project is to take away all these issues and upgrade all hardware to support the new required network speeds.
As requested, I have come up with the following command set.
Make directory and mount windows 2008 share
mkdir /mnt/cifs
mount -t cifs //bny-s-888.nlng.net/NIP /mnt/cifs -o username=***,password=***,domain=***
Copy files to the windows 2008 share and output results to txt file
/usr/bin/time -p -o /home/report.txt rsync --stats -t /home/file*.test /mnt/cifs >> /home/report.txt
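To later turn that timing output into CSV rows, one hedged option (assuming GNU time at /usr/bin/time, as in the post; `time -p` prints `real N.NN` per POSIX, and the copy here is a local stand-in for the rsync transfer):

```shell
# make a sample payload, then time one transfer and log a CSV row
echo "payload" > /tmp/sample.txt
/usr/bin/time -p -o /tmp/t.txt cp /tmp/sample.txt /tmp/sample.copy
real=$(awk '/^real/ {print $2}' /tmp/t.txt)   # `time -p` emits: real N.NN
echo "smb_copy,sample.txt,${real:-NA}" >> /tmp/report.csv
cat /tmp/report.csv
```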
More commands will follow, but I think the principle of adding new commands to the script will be the same.
I think I will be able to find out how to run one command after the other from a script (only if the previous command is successful), but feel free to post a small example.
My main challenge is actually getting all the data (more commands to follow) into a nice readable report.
What command line tool would you recommend to automatically format and merge multiple files into one?
Thanks again for all your advice and help; I will post more info later today as I progress.
I agree with chrism01 that you won't need to delete and recreate the test files.
If you check tests in the BASH guide you'll see that $? holds the last command's result code. In other words, if you run one of those dd calls and want to know what it returned, perform an if test on $?.
The result will be that you get a new file foo.dat which is a copy of temp.data, and zero, shown as "0", will be echoed to your console. You can of course test that instead:
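A minimal sketch of both variants, assuming a dd copy of temp.data to foo.dat in a scratch directory:

```shell
cd "$(mktemp -d)"                        # work in a scratch directory
dd if=/dev/urandom of=temp.data bs=1K count=4 2>/dev/null   # sample source file
dd if=temp.data of=foo.dat 2>/dev/null   # the copy under discussion
echo $?                                  # prints 0 on success

# or test the exit status directly instead of printing it:
if dd if=temp.data of=foo.dat 2>/dev/null; then
    echo "copy succeeded"
fi
```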
The reason for recreating the test files is that we have Riverbed Steelhead WAN optimizers on our WAN links.
If I reuse the files, they will be 'cached' on the Riverbed appliance and we will not be testing true end-to-end speeds.
Although I can exclude the traffic from being optimized on the Riverbed, I felt that just recreating a new set of files is easier and less error-prone.
Hope this clarifies the need for new files for every test.
This is my first draft of the script
As expected it is not working and I am getting the following errors:
Code:
: command not found
: invalid option namehopt: nounset
: command not found
: command not found:
: command not found:
basline.sh: line 92: syntax error: unexpected end of file
I am sure I am making some newbie mistakes, so please help me understand how to execute the commands one after another...
Thanks again for the help.
Code:
#!/bin/bash
#
# baseline end to end network speeds
#
# Joli-coeur Wouter
shopt -s -o nounset
# global Declarations
declare -rx SCRIPT=${0##*/}
declare -rx time="/usr/bin/time"
declare -rx rsync="/usr/bin/rsync"
declare -rx dd="/bin/dd"
declare -rx rm="/bin/rm"
declare -rx mount="/bin/mount"
declare -rx mkdir="/bin/mkdir"
# Sanity check
if test -z "$BASH" ; then
printf "$SCRIPT: $LINENO: please run this script with the BASH shell\n" >&2
exit 192
fi
if test ! -x "$time" ; then
printf "$SCRIPT: $LINENO: the command $time is not available - aborting\n" >&2
exit 192
fi
if test ! -x "$rsync" ; then
printf "$SCRIPT: $LINENO: the command $rsync is not available - aborting\n" >&2
exit 192
fi
if test ! -x "$dd" ; then
printf "$SCRIPT: $LINENO: the command $dd is not available - aborting\n" >&2
exit 192
fi
if test ! -x "$rm" ; then
printf "$SCRIPT: $LINENO: the command $rm is not available - aborting\n" >&2
exit 192
fi
if test ! -x "$mount" ; then
printf "$SCRIPT: $LINENO: the command $mount is not available - aborting\n" >&2
exit 192
fi
if test ! -x "$mkdir" ; then
printf "$SCRIPT: $LINENO: the command $mkdir is not available - aborting\n" >&2
exit 192
fi
# Main Script
dd if=/dev/urandom of=/home/file001.test count=2K
dd if=/dev/urandom of=/home/file002.test count=4K
dd if=/dev/urandom of=/home/file003.test count=10K
dd if=/dev/urandom of=/home/file004.test count=20K
dd if=/dev/urandom of=/home/file005.test count=100K
dd if=/dev/urandom of=/home/file006.test count=200K
dd if=/dev/urandom of=/home/file007.test count=500K
"$mkdir" -p /mnt/cifs
"$mount" -t cifs //bny-s-888.nlng.net/NIP /mnt/cifs -o username=*,password=*,domain=*
"$time" -p -o /home/report.txt "$rsync" --stats -t /home/file*.test /mnt/cifs >> /home/report.txt
rm -f /home/file*.test
dd if=/dev/urandom of=/home/file001.test count=2K
dd if=/dev/urandom of=/home/file002.test count=4K
dd if=/dev/urandom of=/home/file003.test count=10K
dd if=/dev/urandom of=/home/file004.test count=20K
dd if=/dev/urandom of=/home/file005.test count=100K
dd if=/dev/urandom of=/home/file006.test count=200K
dd if=/dev/urandom of=/home/file007.test count=500K
# Clean up
exit 0
Please place your code in [code][/code] tags to improve readability.
I have only had a quick look at the code, but my first thought would be: you are hard-coding paths to commands ... are you sure those are the defaults on all the machines you will be running this script from?
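One portable alternative is to resolve each required command from $PATH at startup instead of hard-coding /bin and /usr/bin; a sketch checking a few of the commands the script needs:

```shell
#!/bin/bash
# resolve required commands from $PATH instead of hard-coding their locations;
# extend the list with rsync, mount, time, etc. as the script grows
for cmd in dd mkdir rm; do
    command -v "$cmd" >/dev/null 2>&1 \
        || { echo "$0: required command '$cmd' not found" >&2; exit 192; }
done
echo "all required commands found"
```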