Linux - Server
This forum is for the discussion of Linux software used in a server-related context.
I am currently trying to benchmark various Linux server systems for a class project. The systems I am using are Ubuntu 12.04, SUSE 11 and CentOS 7. I am by no means an expert, but so far I have built a simple dialog menu box to manage the options. However, my tutor is now asking me to take a look at using either Perl or C++. Is there any particular need to do this? Does either language help performance or the server system? I have asked my tutor, but she is not sure, and the information online is very vague about why to use either. Does anyone have any ideas about good uses for either language within a bash script, or for capturing statistics?
I do not really understand the requirement. C++ cannot be used within bash (or at least I have never seen that); you need to pick one or the other. Perl can be embedded in bash, but again, if I need Perl to solve a problem I won't run it inside bash; I will implement the whole solution in Perl.
As a benchmark, I can imagine you implementing the same thing in bash, Perl and C++ and then comparing performance...
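To make that concrete, here is a minimal sketch of timing the same job in two languages from the shell. The task (summing 1..100000) and the file name sum.sh are arbitrary choices for illustration; a C++ version would be compiled first and timed the same way.

```shell
#!/bin/bash
# Write the same computation as a bash script, then time it
# against an equivalent perl one-liner.

cat > sum.sh <<'EOF'
#!/bin/bash
total=0
for ((i = 1; i <= 100000; i++)); do
    total=$((total + i))
done
echo "$total"
EOF
chmod +x sum.sh

time ./sum.sh                                        # interpreted line by line
time perl -e '$t += $_ for 1..100000; print "$t\n"'  # compiled on the fly
```

On most machines the perl version finishes in a fraction of the bash time, which is the whole point of the comparison.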
Bash isn't really a programming language; it is a shell, or command processor. Basically all commands like cp, mv, sed, cat, cut, awk, echo, etc. are separate programs, and bash calls them from the terminal or from script files.
For this reason, you can easily get misleading execution timings, for example running an instruction like
Quote:
time cat /home/pi/index.html | grep Product
real 0m0.033s
user 0m0.000s
sys 0m0.020s
The same result, but much faster:
Quote:
time grep Product /home/pi/index.html
real 0m0.018s
user 0m0.010s
sys 0m0.000s
pi@raspberrypi /var/www $
For this reason, benchmarking results depend solely on your bash skills, and it is not good practice.
However, you can easily benchmark a server drive's read/write speed using hdparm / dd, easily measure sensor temperatures while burn-testing the CPU, and automate many tasks; this is what bash is really good for.
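For example, a rough write-speed check with dd might look like this. The /tmp path and the 64 MiB size are arbitrary; for raw read speed, `sudo hdparm -t /dev/sda` works too, but it needs root and a real device name.

```shell
#!/bin/sh
# Write 64 MiB of zeros; conv=fdatasync forces a flush so the
# throughput dd reports on stderr is not just page-cache speed.
testfile=/tmp/ddtest.bin

dd if=/dev/zero of="$testfile" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1

rm -f "$testfile"
```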
For fun I once wrote a pseudo-parallel script for word processing (about 15k shop products) by splitting the task into as many pieces as the CPU has cores and running them at the same time. It is fast and furious, but it is not programming; it is just a collection of programs put into a logical sequence.
Actually, bash, Perl and C++ are all programming languages (Turing complete).
Note that bash does have some 'internal' cmds like 'cd' and it can also call external (standalone) binaries such as awk, sed etc AND Perl progs AND C++ executables etc...
This is basically also true of Perl and C++.
Bash itself is interpreted, which makes it relatively slow.
Perl is 'compiled on the fly' (http://www.perl.com/doc/FMTEYEWTK/comp-vs-interp.html), which makes it very quick.
With C/C++ you get a pre-compiled and linked binary executable, which (all things being equal, and no external cmds being called) should be a bit quicker than Perl (under the same rules).
You can find plenty of benchmarking pages on the web, but comparing across different langs is very tricky to do correctly - for accuracy you'd probably end up with very simple programs.
Thanks for the advice above. I may avoid C++ and perl for now because the objective is to do this with as low a resource hit as possible, preferably using bash.
My script already allows for file permission changes, adding users, file searching and running of statistics. My next objectives are to stress the CPU, I/O, memory etc. in order to see which one has the smallest resource hit. Is it possible to incorporate these stress tests into bash, or to run them at the same time as iostat etc.? Are there any other things that may need to be stress-tested apart from those mentioned above? Can the results be output graphically using Gnuplot or something similar, called from within bash, rather than needing huge graphical programs to be downloaded?
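Both at once are possible from plain bash. A minimal sketch, assuming only coreutils: one busy loop per core for a few seconds while the load average is sampled once a second. The 5-second duration and the load.log name are arbitrary; swap the sampler for `iostat 1` or `vmstat 1` if sysstat/procps are installed.

```shell
#!/bin/bash
duration=5
for _ in $(seq "$(nproc)"); do
    timeout "$duration" bash -c 'while :; do :; done' &  # one CPU burner per core
done

for _ in $(seq "$duration"); do
    # timestamp + 1-minute load average, one sample per second
    echo "$(date +%s) $(cut -d' ' -f1 /proc/loadavg)" >> load.log
    sleep 1
done
wait
cat load.log
```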
I am mainly from a Java background and was hoping to manage and schedule certain services/processes, which I have named "oxygen", "temperature" etc., in the same way that threads can be managed in Java. Is it possible to manage these within a bash script? What I am hoping to do is assign a specific amount of CPU overall, plus the CPU needed for controlling the processes. When CPU is low, the processes that are not so urgent can be suspended or terminated, and this becomes another test of which server distro handles it best.
Is this all possible within Linux and would anyone have any advice on how to do so?
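It is. Bash job control plus signals gives you roughly the suspend/resume/kill semantics you are describing: SIGSTOP pauses a process, SIGCONT resumes it, SIGTERM ends it. A sketch, where the "oxygen" service is just a placeholder sleep:

```shell
#!/bin/bash
sleep 60 &
oxygen=$!                         # remember the PID, like a thread handle

kill -STOP "$oxygen"              # suspend when CPU is scarce
ps -o stat= -p "$oxygen"          # state starts with T while stopped

kill -CONT "$oxygen"              # resume when load drops
kill -TERM "$oxygen"              # or terminate it outright
wait "$oxygen" 2>/dev/null || true
```

The "when CPU is low" decision could be driven by reading /proc/loadavg in a loop and stopping the least urgent PIDs first.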
I think you need to decide what you are actually testing.
The known order of speed (from slowest to fastest):
1) bash - it is very slow, with high overhead in interpreting each line
2) perl/python - both are quite fast once the compilation phase is done (they compile to bytecode for a VM, which is then interpreted). Python actually has a bit of an advantage, since its compiled bytecode can be saved and reused as a binary image (much like the Java VM treats its .class files).
3) C++ - it compiles into a binary image which can run quite fast
4) C - it too compiles into a binary image, but it is also easier to focus on performance issues
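A rough illustration of the interpreted-vs-compiled gap, assuming a C compiler (cc) is available; the loop counts are arbitrary:

```shell
#!/bin/bash
cat > count.c <<'EOF'
int main(void) {
    volatile long i;    /* volatile stops the loop being optimised away */
    for (i = 0; i < 10000000; i++)
        ;
    return 0;
}
EOF
cc -o count count.c

time bash -c 'i=0; while [ "$i" -lt 100000 ]; do i=$((i+1)); done'
time ./count            # 100x the iterations, still typically far quicker

rm -f count count.c
```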
Checking CPU/memory/IO usage is only testing the particular application, and different applications (no matter what the language used) will test differently.
I am not benchmarking the use of bash vs C++ etc.; I merely want to schedule some hypothetical processes in bash and see which of my distros handles it best under a stress test whilst running iostat etc. It is just a simple project, nothing majorly scientific. I was just wondering if there was any need to use C++ etc. in case it has some features that might help, but from the sound of the responses and the information online, there is no need. Is it possible within bash scripts to manage these processes as I explained above, and to output stress test results in graphs that can be called from bash?
Actually, bash is generally used to manage processes which are usually written in a faster language, e.g. Perl, C etc.
If you want to generate graphs you'll need a tool (probably written in C) to do it, although you may be able to generate the data in bash and then use a (binary) tool to chart/display it.
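Gnuplot is the usual binary tool for that last step, and it can even draw ASCII charts straight in the terminal, so no GUI is needed. A sketch with fake data (swap in real iostat/vmstat samples; the cpu.dat name is arbitrary):

```shell
#!/bin/bash
# Generate the data points in the shell...
for i in $(seq 1 10); do
    echo "$i $((RANDOM % 100))"   # fake "cpu %" sample
done > cpu.dat

# ...then hand them to gnuplot, if it is installed.
if command -v gnuplot >/dev/null; then
    gnuplot -e "set terminal dumb; plot 'cpu.dat' with lines title 'cpu %'"
fi
```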