Linux - Newbie. This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-to's this is the place!
Welcome to LinuxQuestions.org, a friendly and active Linux Community.
I've created a script and called it "webchecker".
I've put the following into the Bash script, and the results were as follows:
Code:
if /usr/bin/true; then
    echo "OK"
else
    echo "Not OK"
fi
#
if /usr/bin/false; then
    echo "OK"
else
    echo "Not OK"
fi
#
if curl --silent --head $url/$file | grep -q -c 1 -P '^HTTP/\w\.\w\s200\sOK'; then
    echo "OK"
else
    echo "Try later"
fi
Results were:
Quote:
./webchecker: line 205: /usr/bin/true: No such file or directory
Not OK
./webchecker: line 211: /usr/bin/false: No such file or directory
Not OK
grep: ^HTTP/\w\.\w\s200\sOK: No such file or directory
(23) Failed writing body
Try later
I have tried this with known working URLs and filenames, as well as bogus filenames (same URL), and the results were the same...
Any suggestions to have this operate without needing to download the file every time?
It shouldn't make too much difference - this is running on a Pi.
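For what it's worth, the grep error in your output points at the invocation itself: `-c` takes no argument, so in `grep -q -c 1 -P '...'` the `1` becomes the pattern and the regex is treated as a filename (hence "No such file or directory"). A minimal sketch of the corrected test, using a literal header line to stand in for the curl call (your `$url/$file` values are unknown here):

```shell
# Stand-in for: curl --silent --head "$url/$file"
header='HTTP/1.1 200 OK'

# -q alone is enough to silence output; \d and \s need grep -P (PCRE)
if printf '%s\n' "$header" | grep -qP '^HTTP/\d\.\d\s200\sOK'; then
    status="OK"
else
    status="Try later"
fi
echo "$status"
```

One caveat: servers speaking HTTP/2 reply with `HTTP/2 200` (no minor version, no "OK"), so matching only the status code is more robust than matching the whole reason phrase.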
You've posted only an excerpt from the shell script. According to the error messages there are hundreds of lines. Start small. Make a small separate script and then when you have that working port it into the larger monstrosity. Note that the paths might be different on various distros...
pan64 - that appears to download the file rather than checking if it's available... Is there an alternative to downloading: checking via a curl command or result (potentially in the header)?
Why curl? There are other tools (wget was mentioned, but python/perl/whatever are OK too) which can handle this much better.
The code in the previous post was the only thing contained in the script... except for the #!/bin/bash line.
After bashing the keyboard, I have this working (only creating a small file for comparison use)
Code:
curl -sIo check "$url/$file"
compare=$(grep "404" check)
if [[ -n $compare ]]; then
    echo "NOPE = $compare"
    echo "File NOT here"
else
    echo "YEES! = $(grep "200" check)"
    echo "File IS here!"
fi
A touch rough - but seems to do the trick without downloading the entire file (just the header), which will give the option to add a command in the relevant IF section of the script if desired.
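One caveat with that approach: `grep "404"` matches anywhere in the headers, so a stray "404" in, say, a date or an ETag would trip it. Anchoring the match to the status line is safer. A sketch using a simulated header file in place of the curl output (the file name "check" follows the post above):

```shell
# Simulated header file (hypothetical content); in the real script this
# comes from: curl -sIo check "$url/$file"
printf 'HTTP/1.1 404 Not Found\r\n' > check

# Match 404 only on the status line, not anywhere in the headers
if grep -q '^HTTP/[0-9.]* 404' check; then
    verdict="File NOT here"
else
    verdict="File IS here!"
fi
echo "$verdict"
rm -f check
```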
I was simply having an attempt at some scripting, rather than be “one of those people” that simply asks someone else to do all the work for them.
Yes, I am aware that -f will see if the file is available on the local disc; maybe it was not the best to use that "-f" test, given the request was to see if the file is simply available on the website. I was interested to see where the scripting in #5 would best fit, to give it a go.
The issue that I am seeing is that the script could download an error message disguised as $file, which the system will see as a pass.
A classic example was this morning: I ran a script thinking it had downloaded the file, yet when I explored deeper, the file was named like the mp3 file but contained the message shown by cat $file.
The -f test resulted in a "Yes, the file was downloaded", when it clearly was not the audio file. I am now adding an extra conditional to the script to ensure the file is larger than, say, 1 MB. If the file is downloaded and larger than 1 MB, then "Success, the file was downloaded"; if not, delete the small file and error out - try again later.
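That size guard could be sketched like this (the 1 MiB threshold is the value mentioned above, `stat -c%s` is the GNU coreutils form, fine on Raspberry Pi OS, and the filename is whatever your script downloaded):

```shell
# Succeed only if the file exists and is larger than a minimum size.
downloaded_ok() {
    local f=$1
    local min_bytes=$((1024 * 1024))   # 1 MiB threshold; adjust to taste
    local size
    # stat -c%s prints the size in bytes (GNU coreutils syntax)
    size=$(stat -c%s "$f" 2>/dev/null || echo 0)
    [ "$size" -gt "$min_bytes" ]
}

# Example use after the curl download (hypothetical $file):
# if downloaded_ok "$file"; then
#     echo "Success, the file was downloaded"
# else
#     rm -f "$file"
#     echo "Try again later"
# fi
```

Checking the Content-Type response header is another option worth considering: an HTML error page will usually report text/html rather than audio/mpeg.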
It would be good not to have to download the file if it is not the correct file, or not even available... This has become a slightly bigger problem than a simple download script.
Happy to try as many options as possible to get a functioning script and learn in the process.
Appreciate the feedback and assistance to my steep learning curve....
I think you're missing my point. The variable is not dependent on the results of the curl/wget in any way, as coded. It merely contains the name of the file for which you are searching.
You need to capture the results of the web query into another variable and test the contents of that.
But you are correct that any query is probably going to give you a result. The only time I think that wouldn't happen is if the server is not found at all. I'm not familiar with curl/wget, but I think you'll need to search within the resulting response to see if it contains $file... or something like that. Another use for the new variable.
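One way to do that capture without even a temp file, as a sketch: ask curl for just the numeric status via `-w '%{http_code}'` and test the variable. The URL and filename are hypothetical, so the network call is shown commented out and the decision logic is factored into a small function:

```shell
# Decide based on the HTTP status code captured from curl
judge_status() {
    if [ "$1" = "200" ]; then
        echo "File IS here!"
    else
        echo "File NOT here (HTTP $1)"
    fi
}

# Real call (hypothetical $url/$file):
#   status=$(curl -s -o /dev/null -w '%{http_code}' --head "$url/$file")
#   judge_status "$status"

judge_status 200
judge_status 404
```

curl prints `000` when it cannot reach the server at all, so the else branch also covers the "server not found" case mentioned above.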