Linux - Newbie
This Linux forum is for members that are new to Linux. Just starting out and have a question? If it is not in the man pages or the how-tos, this is the place!
ondoho: Thank you for your reply, and for acknowledging my statement regarding similarities. I can understand how they might "appear" to be similar, but rest assured, they are for different tasks. I would not reinvent the wheel if there were a solution or workable script out there.
When you say "download and compare the files", can I ask what you mean by "compare"? Would this be by size/date/some tag/something else?
I was trying to limit the download (if possible) as the stability of the website "appears" to be questionable, given the bandwidth limit seen recently. I am more than open to considering this option (download and compare), but I was hoping to investigate the "header" option first, assuming that the website is reporting correctly.
Both responses were tested against the same $url; however, on one occasion I changed the name of the file in $file (to something non-existent) and BOTH results were the same?! I would have thought that one of the outputs would have echoed a 404 (rather than 301/200)?
Is there something I am missing, or am I jumping at shadows because the headers have not been reporting correctly (all along...)?
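One way to see what the server is really saying is to ask curl for the status code of the final response after any redirects. This is a sketch only: the $url/$file values are placeholders from the thread's script, and the classify_status helper is illustrative, not part of anyone's actual code.

```shell
#!/bin/bash
# Real check (sketch; needs network access):
#   status=$(curl -sS -o /dev/null -L -w '%{http_code}' "$url/$file")
# -o /dev/null discards the body, -L follows redirects, and -w '%{http_code}'
# prints the status of the FINAL response, so a 301 -> 200 chain shows 200.

# Illustrative helper that classifies whatever code comes back:
classify_status() {
    case "$1" in
        200) echo "file present" ;;
        404) echo "file missing" ;;
        *)   echo "unexpected status: $1" ;;
    esac
}

classify_status 200    # prints: file present
classify_status 404    # prints: file missing
```

Note that a 200 here can still be a directory listing or an error page rather than the file itself, which may be exactly what is happening in the thread.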
Based on your last post, it seems you may need to concentrate on fundamental debugging of your scripts.
Rather than direct suggestions about the exact problems you're having, all I can offer here are some general tips:
1. Use "set -xv" near the top of your script to enable verbose debugging; it helps.
2. Use the echo command to print out the information stored in a variable or in results.
3. Use built-in variables like $?, which gives you the error or exit status.
4. Debug one section at a time. Everything you can type at the command line, you can code into a bash script, so you can test each line at the command line before scripting it.
5. As with any program, especially one with greater complexity than a single operation on a single line, break down the functions you are performing and validate each of them in turn, so that you have confidence each operation is coded well.
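Tips 1 to 3 can be seen together in a tiny sketch (the failing ls path is just an example):

```shell
#!/bin/bash
set -xv                          # tip 1: print each line as read (-v) and as executed (-x)

greeting="hello"
echo "greeting is: $greeting"    # tip 2: echo variables to inspect their contents

ls /nonexistent 2>/dev/null      # a command that fails...
echo "exit status was: $?"       # tip 3: $? holds the last command's exit status (non-zero here)
```

Running this shows each line twice (once as written, once as expanded), so you can see exactly what bash executed and what came back.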
In my signature is a bash blog I wrote a long time ago, which covers a lot of this material and also gives references to the standard bash guides. The examples in that blog illustrate how to do things and what the results look like. Granted, I do not cover much related to loops or case statements (most of what I've done is collections of commands in a script), but it should at least give you some ideas for debugging.
Each time I've been stuck and constantly iterating on something, I've learned to take a step back and inspect what I'm really doing, what my goal is, and whether or not I'm going about it in a good way.
Best of luck. I think you'll find you may be more successful if, instead of concentrating on that tree, you consider the forest first. If that analogy is confusing, I'll put it plainly: rather than focusing on getting that one thing done, consider the overall script and how to debug it first.
I have run the code in #11. I am running this as a BASH script - will these commands work in BASH? Some look a little different and I have never seen them before...
That's probably just because I was tired and should have waited until today to respond - go with Chris's syntax in #14
Quote:
The question that I have is that it appears (if I'm reading it right?) to be checking the directory, not the actual file headers. The reason I ask is that this particular $file (at the time of running the script) is not even uploaded to the server, yet it returns 200 OK.
The directory is still there, but no file has been uploaded yet (or not by the correct name, which I have checked).
When I change the name of $file (file="nofilehere.txt") it still gives a response of "200 OK", without there being a file found at $url/nofilehere.txt.
I think I am getting a little lost in the header search, which is throwing off the result.
If I change the code in the response to
Code:
response=$(curl -LSsI -z "$file" "$url/$file")
Would that check the $file (found at $url/$file -- where the file is stored) rather than just the $url?
I had trouble understanding what you were saying in those first three lines until I reached that last bit. :/
The $url variable should contain the full location of the resource you want to download.
The $file variable should contain the local copy whose date you want to compare against. If you're downloading and comparing, you will need two variables: one for the known good copy (for the -z parameter), and one to define where the most recent download goes (for the -o parameter).
Maybe you want to define your variables like this:
(If the server does actually correctly return 304 statuses then you don't necessarily need the two files/file variables.)
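A minimal sketch of what those variables might look like, with illustrative names and URL (the curl line is commented out because it needs network access; a local stand-in simulates the download):

```shell
#!/bin/bash
# Illustrative names only; adjust to the real script.
url="https://example.com/weekly/report.txt"   # full remote location
known_good="/tmp/report.txt"                  # existing local copy (for -z)
candidate="/tmp/report.txt.new"               # fresh download target (for -o)

# Real usage (sketch; relies on the server honouring If-Modified-Since):
#   curl -sS -L -z "$known_good" -o "$candidate" "$url"

# Simulate a download, then compare the two copies byte-for-byte:
printf 'old contents\n' > "$known_good"
printf 'new contents\n' > "$candidate"

if cmp -s "$known_good" "$candidate"; then
    echo "unchanged"
else
    echo "changed"    # file differs: keep the new copy, e.g. mv "$candidate" "$known_good"
fi
```

The cmp comparison is the "download and compare" fallback: it works even when the server's headers are unreliable, at the cost of one full download.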
Quote:
Originally Posted by orangepeel190
I would not reinvent the wheel if there was a solution or workable script out there.
Most people don't do it willingly, they just don't realise they're doing it - it's still possible that's the case here...
Quote:
I was trying to limit the download (if possible) as the stability of the website "appears" to be questionable with the bandwidth limit recently seen. I am more than open to considering this option (download and compare) but I was hoping to have investigated the "header" option -- assuming that the website is reporting correctly.
How similar are the files?
If the server supports it, sending a Range header to get (e.g.) the first and/or last 100 bytes of the content may be enough to tell it's the same file or a different file, without using excessive amounts of bandwidth.
That depends on the content type of the file, which I don't see mentioned anywhere?
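For reference, curl's -r flag sends that Range header. The request lines below are a sketch (they need network access and a server that answers 206 Partial Content); the same byte-slicing is then shown on a local file:

```shell
#!/bin/bash
# Real requests (sketch; $url/$file as in the thread's script):
#   curl -sS -L -r 0-99 "$url/$file" -o first100.bin   # first 100 bytes
#   curl -sS -L -r -100 "$url/$file" -o last100.bin    # last 100 bytes

# The same idea on a local file, using head/tail byte counts:
printf 'abcdefghijklmnopqrstuvwxyz' > /tmp/sample.bin
head -c 5 /tmp/sample.bin; echo    # first 5 bytes: abcde
tail -c 5 /tmp/sample.bin; echo    # last 5 bytes: vwxyz
```

Comparing just the first and last slices against a local copy is a cheap heuristic; it can miss changes in the middle of the file, so treat a match as "probably unchanged" rather than proof.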
Also, if you're already in communication with the admin, I'd definitely ask if they can provide a low effort ready/not-ready marker for you - as suggested, it may be they've already got something.
In the meantime, whilst you're still learning and testing your script, maybe you should set up your own local server - that will help with their bandwidth, speed up your responses, and give you a controlled environment to try different responses.
Hi boughtonp - thanks for your tips! I will investigate the Range header. I am trialing this between two similar servers to determine if the primary server I am targeting is returning the correct header - otherwise I am jumping at shadows!
I will make amendments to my script, based on your suggestions, and see what it reports.
As for the type of files, they will vary from .mp3 to text and documents, depending on the week or the task being undertaken. The name of the file shouldn't change as it gets updated. Occasionally one may see a date stamp, but this is easy to script in and account for.
rtmistler - thank you for your tips, especially Tips 1 and 3. I guess the biggest problem is having a go, reading many webpages and trying to adapt them for the task. There are times when you are searching topics by the wrong name/title, leaving you stuck in a roundabout until assistance comes along, guides you down the right path and shows the correct way (code) for the task.
Placing code into the command line is something I have been doing in parts, but again I have to place the correct code in to return the "expected" result.
I appreciate your assistance and again thank you for your ongoing feedback. I'll keep chipping away.
Excellent advice by rtmistler as usual.
Definitely debug one small piece of code at a time, optionally in a completely separate prog file, then add it into the real one.
Re functions at the top: that's because bash (any shell) is an interpreter, not a compiler, so it can only run code it has already seen, i.e. it can't look forward in the code file.
C is compiled+linked and then the executable is run, so forward references are fine.
Perl is sort of in between: it's compiled on the fly and the in-memory version is run, which means that (like C) it can use forward references.
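The bash side of that can be shown in a few lines (greet is just an illustrative name):

```shell
#!/bin/bash
# greet            # calling here would fail: bash has not read the definition yet

greet() {          # define first...
    echo "hello"
}

greet              # ...then call: prints "hello"
```

This is why the convention in shell scripts is to put all function definitions at the top and the "main" logic at the bottom.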
Thank you for your response. Now that makes sense ... I was wondering why this was the case ... I will be sure to implement this in future code. I was unaware of the reason behind it, thinking something was preventing the script from running through to that section. Clearly it hadn't "seen" the code yet.
Brilliant suggestions, and I think it's great how much info and experience is out there to assist someone like myself in navigating their way through the maze of code.