Programming: This forum is for all programming questions. The question does not have to be directly related to Linux, and any language is fair game.
10-15-2005, 06:19 PM | #1
Member
Registered: Oct 2004
Location: Texas
Distribution: Ubuntu - Home, RHEL4 - Server
Posts: 96
Web Application to grab large files from web addresses
Anyone have any ideas for a method to grab large files from other HTTP locations in a web control panel environment? Or know of an existing way to do it? Something similar to Linux's wget, but for web applications.
Thanks,
farmerjoe
10-16-2005, 01:04 AM | #2
Senior Member
Registered: May 2004
Location: In the DC 'burbs
Distribution: Arch, Scientific Linux, Debian, Ubuntu
Posts: 4,290
Not sure exactly what you need it to do, but if you can't find any other solution and you know Perl, you can use the LWP module. It will let you craft custom HTTP requests and send them to a server. This is very flexible and powerful. Do you want to do this from a Web CP environment, or do you want to use it to make requests and get responses from a remote CP?
10-16-2005, 02:46 AM | #3
Member
Registered: Oct 2004
Location: Texas
Distribution: Ubuntu - Home, RHEL4 - Server
Posts: 96
Original Poster
Well...
I have been looking into using cURL in PHP, since that's what the CP is programmed in. However, my main problem is figuring out a way to display the progress of a large file download while it's being downloaded. I assume one would have to loop while the contents are being grabbed, constantly checking the file size or something like that. Anyone know how to create some sort of web-based file download progress meter when grabbing something with cURL in PHP?
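For reference, a minimal sketch of the basic download step with PHP's cURL extension, streaming the response straight to disk so a large file isn't buffered in memory (the URL and destination path are placeholders, not from the thread):

```php
<?php
// Minimal sketch: grab a large remote file with cURL, writing the body
// directly to a file handle instead of holding it in memory.
// $url and $dest are placeholder values.
$url  = 'http://example.com/big.iso';
$dest = '/tmp/big.iso';

set_time_limit(0);  // don't let max_execution_time kill a long download

$fp = fopen($dest, 'wb');
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_FILE, $fp);            // stream response body to the file
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow HTTP redirects

if (curl_exec($ch) === false) {
    echo 'Download failed: ' . curl_error($ch);
}

curl_close($ch);
fclose($fp);
```

Because the file grows on disk as it downloads, a second script can poll its size to estimate progress, which is the approach discussed below.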
10-16-2005, 09:49 AM | #4
Member
Registered: Mar 2005
Distribution: Gentoo
Posts: 232
If it's in PHP, then (a) you will have to make max_execution_time in php.ini big enough to allow the complete download, and (b) while downloading the other file you cannot do anything else, because the script is not threaded. Also, you cannot change what has already been output to the browser, so a progress bar would have to come before the footer of the page; you would see half a page with a = slowly moving across, which isn't too helpful.
It is, however, possible to download files using cURL, and the script execution time can be changed on the fly (I think). One way to fake a progress bar is to load a page with an animated GIF saying something like "Downloading...", where the dots slowly increase, and then immediately redirect to the PHP script that actually downloads the file. Don't echo anything to the browser; the browser should hold the last page until something is echoed, giving a pseudo progress bar.
The only way to really do a progress bar is through AJAX (Asynchronous JavaScript and XML). You would need a separate PHP/CGI script that finds the size of the saved data and works out the percentage from the total size of the file being downloaded (either passed via $_GET[], or saved in a temporary file by the original script doing the download; I still don't know how you would get hold of the total size, but it must be possible). The JavaScript then repeatedly queries that separate PHP/CGI script for the amount downloaded and the total size, and updates the webpage using CSS and the DOM.
Sorry for talking in such a complicated manner, but it's hard to explain. Buck
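A sketch of the progress-endpoint idea described above, assuming the downloading script writes into a known destination and stored the expected total byte count in a temp file beforehand (all paths and filenames here are hypothetical, not from the thread):

```php
<?php
// progress.php -- polled repeatedly by the page's JavaScript (AJAX)
// to report how much of the file has arrived so far.
// Assumes the downloader writes to $dest and saved the expected
// total size in $sizefile. Both paths are hypothetical examples.
$dest     = '/tmp/big.iso';
$sizefile = '/tmp/big.iso.total';

clearstatcache();  // filesize() results are cached within a request

$total = (int) @file_get_contents($sizefile);
$done  = file_exists($dest) ? filesize($dest) : 0;

$percent = ($total > 0) ? round($done / $total * 100) : 0;

header('Content-Type: text/plain');
echo $percent;  // the JavaScript uses this to set the progress bar's CSS width
```

The total size could be obtained by the downloading script from the Content-Length response header before the transfer starts, when the remote server supplies one.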