Linux - General: This Linux forum is for general Linux questions and discussion.
If it is Linux related and doesn't seem to fit in any other forum, then this is the place.
I have a multi-line file that contains four fields: the first field is a label and the other three are numbers. I need to find the percentage difference between fields two and four; if it is greater than 90%, print the label, and if it is less than 90%, ignore it and move on to the next line.
That sounds like an easy job for AWK. What have you tried so far, where are you stuck? Also, please show a few lines of sample input and a few lines of expected output.
The problem I am having is keeping the name in sync with the numbers, then doing the calculation and determining whether it is above or below 90%. Here is what I have so far in the script:
#!/bin/bash
# check inodes of filesets
declare THRESHOLD=90
Key=`cat inode.txt | sed "1d" | awk '{print $1,$11,$12,$13}'`
for K in $Key
do
echo $K
done
for N in $Name
for M in $MaxInodes
for A in $AllocInodes
for U in $UsedInodes
and then thought something like
do
echo $N
echo $U / $A \* 100 | bc -l
if (( ${above_result} >= ${THRESHOLD} )) ; then
echo "fileset $N is close to allocated inodes"
else
echo "fileset $N is ok"
fi
done
The first problem, of course, is the multiple 'for' statements. I tried nesting them, but then I just get the first value of $N (i.e. root) paired with all the values of $U (i.e. 4187, 5828699 and 11488908).
Code:
Key=`cat inode.txt | sed "1d" | awk '{print $1,$11,$12,$13}'`
This retains the spaces and newlines, but
Quote:
for K in $Key
splits into words according to $IFS.
You can improve it by setting IFS to a newline only:
Code:
oIFS=$IFS
IFS="
"
and later restore IFS with
Code:
IFS=$oIFS
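Put together, that looks like the sketch below. The column positions (fields 1, 11, 12, 13) are taken from your own script; the sample inode.txt contents are hypothetical, just so the sketch runs on its own. `IFS=$'\n'` is the bash shorthand for the literal-newline assignment above.

```shell
# Hypothetical sample data in the assumed shape of inode.txt:
# a header line, then the label in field 1 and the inode counts
# in fields 11, 12 and 13.
cat > inode.txt <<'EOF'
Filesystem f2 f3 f4 f5 f6 f7 f8 f9 f10 MaxInodes AllocInodes UsedInodes
root x x x x x x x x x 12000000 11488908 5828699
scratch x x x x x x x x x 5000 5000 4700
EOF

Key=$(awk 'NR>1 {print $1,$11,$12,$13}' inode.txt)

oIFS=$IFS
IFS=$'\n'               # split on newlines only
for K in $Key
do
    echo "line: $K"     # each $K is now one whole line of four fields
done
IFS=$oIFS               # restore the original word splitting
```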
But better, use a while-read loop: read takes one line at a time (it reads up to a newline by default) and can assign the fields to separate variables.
Code:
awk 'NR>1 {print $1,$11,$12,$13}' inode.txt |
while read Name MaxInodes AllocInodes UsedInodes
do
echo "do something with $MaxInodes and $UsedInodes e.g. $((UsedInodes * 100 / AllocInodes)) and print along with $Name"
done
The pipe might force the while loop into a subshell so you cannot set variables in the main shell.
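A quick illustration of that subshell effect, with made-up data:

```shell
count=0
printf 'a\nb\nc\n' | while read line; do
    count=$((count + 1))    # increments a copy inside the pipe's subshell
done
echo "$count"               # in bash this still prints 0: the increment was lost
```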
The following while loop runs in the main shell:
Code:
while read Name MaxInodes AllocInodes UsedInodes
do
echo "do something with $MaxInodes and $UsedInodes $((UsedInodes * 100 / AllocInodes)) and print along with $Name"
done < <(awk 'NR>1 {print $1,$11,$12,$13}' inode.txt)
This construct is called a "process substitution".
First drop any unneeded fields: filter the file and create a new one containing only the fields you require. Do a little of it by hand (look at coreutils) and I am sure you will find a solution. Bash automates tasks we could do by hand when they are too tedious, too error prone, or the data are too large, but reworking a small sample by hand should definitely clarify the situation. I am not posting a ready solution; the task is really very easy to do.
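In fact, as noted above this is an easy job for awk on its own: it can filter, do the arithmetic and apply the threshold in one pass. A minimal sketch, assuming the allocated and used counts sit in fields 12 and 13 as in the earlier script (the sample inode.txt is hypothetical):

```shell
# hypothetical sample data in the assumed column layout
cat > inode.txt <<'EOF'
Filesystem f2 f3 f4 f5 f6 f7 f8 f9 f10 MaxInodes AllocInodes UsedInodes
root x x x x x x x x x 12000000 11488908 5828699
scratch x x x x x x x x x 5000 5000 4700
EOF

# print the label of every fileset at or above 90% of allocated inodes
awk 'NR>1 && $13 * 100 / $12 >= 90 {print $1}' inode.txt
```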