Programming
This forum is for all programming questions.
The question does not have to be directly related to Linux and any language is fair game.
And so now we're right back to the solution I first gave you. Eliminate the unimportant data first, and check what's left.
I really wish you would take a little time to learn how to use awk's addressing ability. You can eliminate all of your extraneous grep commands and your (now 2!) superfluous temporary files with a single expression. The patterns section of the grymoire link I gave you before breaks it down for you.
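To illustrate what awk's pattern addressing can do, here is a hedged sketch; the netstat-style sample lines, field positions, and port numbers are invented for the example, not taken from the original script:

```shell
# One awk invocation replaces a chain of greps and a temp file:
# the /LISTEN/ pattern selects lines, the $4 test excludes port 22,
# and sub() strips the address part so only the port number prints.
printf '%s\n' \
  'tcp 0 0 0.0.0.0:22 LISTEN' \
  'tcp 0 0 0.0.0.0:80 LISTEN' \
  'tcp 0 0 10.0.0.5:51423 ESTABLISHED' |
awk '/LISTEN/ && $4 !~ /:22$/ { sub(/.*:/, "", $4); print $4 }'
# prints: 80
```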
And since you are now interested in getting the number of individual entries, it's time for you to learn how to use bash's arrays. Store the output lines of awk in an array, rather than a file, and you have an instant count of how many there are. You also have each port available separately if you want to expand your script later with more features.
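A minimal sketch of that array idea, with made-up data standing in for the awk output:

```shell
# mapfile (bash 4+) reads each line of the command's output into one
# array element; -t strips the trailing newlines.
mapfile -t ports < <(printf '%s\n' 80 443 8080)   # stand-in for the awk pipeline
echo "entries: ${#ports[@]}"    # instant count:          entries: 3
echo "first:   ${ports[0]}"     # each port addressable:  first:   80
```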
As a minor side point, since environment variables are generally all upper-case, it's good practice to keep your own user variables in lower-case or mixed-case, to help differentiate them.
Finally, in bash or ksh, it's recommended to use ((..)) for numerical tests, and [[..]] for string/file tests and other complex expressions. Avoid using the old [..] test unless you specifically need POSIX-style portability.
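For example (the variable names here are only illustrative):

```shell
count=3
file=$(mktemp)                          # a real file to test against
if (( count > 0 )); then                # arithmetic test
    echo "found $count entries"
fi
if [[ -f $file && $file == *tmp* ]]; then   # file and string-pattern tests
    echo "temp file exists"
fi
rm -f "$file"
```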
Here's my latest suggestion for you, with a little reorganization as well to eliminate duplicate code:
As for your last comments about the gui stuff, it may be interesting, but since it's a completely separate issue, you should take it up in a new thread if necessary.
Well @David, I have already added a kill button last night. The reason is that I sometimes download new movies from thepiratebay, and I am aware of the high random ports that will be opened, so I would rather turn it off than see an annoying pop-up every minute. A member of another forum has made a suggestion against this, though: rather, monitor ports like ssh, ftp, etc., and have the UI launch if any high-risk ports such as those appear. I am going to add another button to kill all suspicious running processes.
I will have to take a crack at your post beforehand, though. Since you have incorporated arrays into your latest code, I think they would really be useful in this project, since the UI will only get bigger, as will the script. I will start tonight. That's my favorite time to code, with a four-pack of Monster, some American Dad/Adult Swim, and my laptop!
Off topic: what's your take on python versus awk versus perl for jobs such as this?
You don't need to quote my entire posts each time, you know. In fact, it's generally better to leave out everything except what you're actually responding to. And even then include it only when the context isn't perfectly clear.
I don't really have much of a "take" on which language is best. As I said, awk could do it alone, but it's really more of a text-parsing language and not designed for general scripting. I've only dipped my toes into perl and python a couple of times and have no proficiency with them yet, but as full programming languages they are both certainly capable of doing a better job than a mixed bash/awk solution. I'd probably lean towards perl myself, but in the end it comes down to what you are familiar with.
I used my original one-liner and piped the output into the lines array, removing the newlines with the -t switch. Apparently readarray is the equivalent of mapfile. I also read that mapfile is not portable.
I dereferenced the array, then echoed the contents.
What I didn't understand was the use of the asterisk with the array, and why this:
Code:
< <(
couldn't be this:
Code:
< < (  or  <<(  or even  << (
Sorry, I reverted to my old code. I feel more comfortable with it. It's not like I learned nothing from the code you provided. Also, Linux is all I use, and 90% of my time is spent coding and on the command line. I will eventually learn all of this, sooner rather than later!
Why not now? Well, because this is not for a job, homework, or a third party. It's just for fun and a hobby. Most importantly, I want to learn at a comfortable pace, a little at a time. I am also working with NASM, C and GTK.
So, I have an idea I would like to implement. I want to store each individual line into an array, then check the array for individual ports, then give the option to kill the process(es). Not individually, but as a whole. I want to pack all that into a button on the UI.
So, the inverted egrep search will leave the remaining unwanted ports. I would then match the port numbers to the actual processes somehow, and run another parser on the pgrep/pidof command output.
That's all I have so far. I still haven't figured that out completely. This is going to take some thinking!
Ultimately I will use execl to kill them, like I did with the kill button.
I am fairly familiar with Awk, but I don't like the look and feel of Perl. I just wanted to know what you thought, though. Thanks.
I'll be back if I run into trouble. Thanks once again @David! Always there to help!
mapfile and readarray are one and the same, just synonyms for the same feature. And yes, it's bash-only. To duplicate it in other shells you have to use a specifically-formatted while+read loop. It's detailed in the array link I gave you before.
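A sketch of that while+read equivalent, with made-up data (note this version still uses process substitution, which bash and ksh have but plain POSIX sh does not):

```shell
# Build the array one line at a time, without mapfile:
i=0
while IFS= read -r line; do     # IFS= keeps leading/trailing whitespace,
    lines[i]=$line              # -r keeps backslashes literal
    i=$((i + 1))
done < <(printf '%s\n' alpha beta gamma)
echo "read ${#lines[@]} lines"   # read 3 lines
```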
Also explained is the difference between "*" and "@". "*" outputs the entire contents of the array as a single text string, with the fields delimited by the first character set in your IFS variable (space by default). Note that you need to double-quote the array variable (as always) to protect the output from being word-split afterwards.
"@", on the other hand, outputs each array entry as a separate word, and when you double-quote it, each word will act as if separately quoted. If not quoted, the output again gets split, and both forms become equivalent.
You'll learn when to use them with experience. As a rule of thumb, if you're just printing out the list as a whole, use "*", if you're using it in a for loop or printf or some other way that reads each entry in turn, use "@".
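A quick demonstration of the difference:

```shell
arr=(one "two words" three)
printf '<%s>\n' "${arr[*]}"   # one joined string:     <one two words three>
printf '<%s>\n' "${arr[@]}"   # three separate words:  <one> <two words> <three>
```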
Process substitution is another built-in convenience feature. Think of it as creating a temporary file (actually a pipe/fifo) that contains the output of the commands inside it, which you can then use just like any other file. The substitution syntax that creates the fifo is "<(..)". The second "<" is just the standard file redirection operator and not part of the P.S. itself. The space between them is required: without it, the shell would see "<<" and try to parse it as the here-document operator.
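Two small illustrations of both points:

```shell
# <(cmd) expands to a filename whose contents are cmd's output,
# so any command that takes a file argument can read it:
diff <(printf 'a\nb\n') <(printf 'a\nc\n') || true   # shows b vs c

# The extra "<" is ordinary input redirection from that file:
read -r first < <(printf 'hello world\n')
echo "$first"   # hello world
```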
Quote:
So, I have an idea I would like to implement. I want to store each individual line into an array, then check the array for individual ports, then give the option to kill the process(es). Not individually, but as a whole. I want to pack all that into a button on the UI.
So, the inverted egrep search will leave the remaining unwanted ports. I would then match the port numbers to the actual processes somehow, and run another parser on the pgrep/pidof command output.
The way I would do it would be to run a loop on the array, and use a simple case statement to check each port in turn for matches. If you need to grab a subset of those, just add them to another new array, which you can then use for further processing.
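A hedged sketch of that loop; the port list and the "known" ports here are invented for illustration:

```shell
lines=(22 80 6881 51423)        # stand-in for the array of found ports
suspicious=()
for port in "${lines[@]}"; do
    case $port in
        21|22|80|443) ;;                 # known services: ignore
        *) suspicious+=("$port") ;;      # everything else: collect
    esac
done
echo "flagged: ${suspicious[*]}"   # flagged: 6881 51423
```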