Slackware: This forum is for the discussion of Slackware Linux.
I had, and continue to have, difficulty downloading files like these. What appears to happen is that if I attempt to use HTTP in a browser (either Mozilla or Konqueror) to download a file, the file ends up being about 3 or 4 times as large as it should be, so what's supposed to be, say, a 500K file ends up being 2M. This behavior unfortunately seems to be fairly consistent, and is especially disappointing because it generally makes any d/l attempts from SourceForge an exercise in futility, at least for me. Heck, for big files, if I keep an eye on Download Manager, I can see the byte count start zooming upward as soon as the expected download should stop. Not too long ago I tried downloading a file that was supposed to be something like 9M, and much to my surprise, once the progress bar had reached 100% the byte count immediately kept climbing all the way up to 28M, as if the file were somehow decompressing itself on the fly. If anyone can explain why this is happening, or whether I'm making some kind of obvious bonehead move, I'd be very interested.
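For what it's worth, that 3-4x jump is about what you'd expect if a compressed file were being expanded in flight. You can reproduce the effect locally with nothing but gzip (a sketch; the filenames here are made up for the demo):

```shell
#!/bin/sh
# Make a compressible sample file, gzip it, and compare sizes.
# The expansion ratio on typical text is roughly the 500K -> 2M
# and 9M -> 28M jumps described above.
seq 1 100000 > data.txt              # compressible sample data
gzip -c data.txt > data.txt.gz       # compressed copy, original kept
ls -l data.txt data.txt.gz           # the .gz is a fraction of the size
```

So if the on-disk byte count is several times the size the mirror advertises, the download was almost certainly decompressed somewhere along the way.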
Otherwise, as the previous poster mentioned, this behavior does not seem to affect FTP downloads. One good source for packages that seems to work pretty well for me is:
The problem is that the browsers are semi-intelligent :) ... they download the file, recognise the extension, and gunzip it for you on the fly, but don't rename it, which is why the tools then fall over trying to unzip it (again) ... or at least I used to have that problem with at least one version of Mozilla ;)
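One way to confirm this is the `file` command, which identifies a file by its magic bytes rather than its name. Here's a quick self-contained demo of the mix-up (all filenames invented for the example):

```shell
#!/bin/sh
# Build a real .tar.gz, then simulate the browser's "help": gunzip the
# contents but keep the misleading .tar.gz name.
echo "hello" > sample.txt
tar czf sample.tar.gz sample.txt          # a genuine gzipped tarball
gunzip -c sample.tar.gz > browser.tar.gz  # a plain tar still named .tar.gz

file sample.tar.gz    # should report: gzip compressed data
file browser.tar.gz   # should report: POSIX tar archive, despite the name

# Fix: trust the magic bytes, not the extension -- rename and untar directly
mv browser.tar.gz browser.tar && tar tf browser.tar
```

If `file` says a download ending in `.gz` is already a tar archive, skip the gunzip step, rename it, and extract it as-is.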
Thanks for the explanation Tink. Later tonight I'll run a few more experiments and see if I can't figure out how to get around this "help" that the browser is giving me.
On a different note, I've learned a lot by reading your posts, but I have to admit that I am absolutely baffled by your sig block. Can you give me a hint about where I might start looking to figure out the puzzle? For the life of me I don't have the faintest idea what it means, even though I would consider myself a quasi-geek. (Perhaps that's the problem, and the puzzle only becomes apparent to full-fledged geeks.) Regards, J.W.