LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Programming (http://www.linuxquestions.org/questions/programming-9/)
-   -   Python IndexError: list index out of range (Web Scrapper) (http://www.linuxquestions.org/questions/programming-9/python-indexerror-list-index-out-of-range-web-scrapper-863560/)

zcrxsir88 02-18-2011 11:08 AM

Python IndexError: list index out of range (Web Scraper)
 
Hey all!

Having a bit of an issue with Python while trying to write a script to download every rar file on a webpage.

The script successfully downloads any link that doesn't contain spaces, but when it hits a URL like:

http://www.insidepro.com/dictionaries/Belarusian (Classical Spelling).rar

It fails... I'm sure this is something simple, but I'm so new to Python that I'm not sure what to do!

Thank you in advance.



Code:

import urllib2
import os

os.system("curl http://www.insidepro.com/eng/download.shtml|grep -i rar|cut -d '\"' -f 2 > temp.out ")

infile = open('temp.out', 'r')

for url in infile:
        print url
#url = "http://download.thinkbroadband.com/10MB.zip"

        #url = target

        file_name = url.split('/')[-1]
        u = urllib2.urlopen(url)
        f = open(file_name, 'w')
        meta = u.info()
        file_size = int(meta.getheaders("Content-Length")[0])
        print "Downloading: %s Bytes: %s" % (file_name, file_size)

        file_size_dl = 0
        block_sz = 8192
        while True:
            buffer = u.read(block_sz)
            if not buffer:
                break

            file_size_dl += block_sz
            f.write(buffer)
            status = r"%10d  [%3.2f%%]" % (file_size_dl, file_size_dl * 100. / file_size)
            status = status + chr(8)*(len(status)+1)
            print status,

        f.close()
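A likely cause of the IndexError, for anyone landing on this thread: an unencoded space makes the request malformed, the server answers with an error response that carries no Content-Length header, and `meta.getheaders("Content-Length")[0]` then indexes an empty list. A minimal sketch of percent-encoding the URL first (shown with Python 3's `urllib.parse`; Python 2 keeps the same `quote` function in `urllib`):

```python
# Sketch: percent-encode a URL before requesting it. Spaces and
# parentheses become %20, %28 and %29; safe=":/" leaves the scheme
# separator and the path slashes untouched.
from urllib.parse import quote  # Python 2: from urllib import quote

url = "http://www.insidepro.com/dictionaries/Belarusian (Classical Spelling).rar"
encoded = quote(url, safe=":/")
print(encoded)
# → http://www.insidepro.com/dictionaries/Belarusian%20%28Classical%20Spelling%29.rar
```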


pgroover 02-18-2011 11:34 AM

Not really familiar with Python, but my first thought would be that the spaces should either be escaped, or the entire URL encapsulated within quotes.

Just my .02.
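One way to sidestep shell escaping altogether, sketched here purely as an illustration (not code from the thread): pass the URL to curl as a single argv element via subprocess, so the shell never splits it on spaces.

```python
# Sketch: build the curl command as an argument list instead of one
# shell string; each list element reaches curl as exactly one argument,
# so spaces in the URL need no escaping at all.
import subprocess

url = "http://www.insidepro.com/dictionaries/Belarusian (Classical Spelling).rar"
cmd = ["curl", "-s", "-O", url]

# subprocess.call(cmd) would actually run it; here we only show that
# the URL survives intact as the final argument.
print(cmd[-1])
```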

pgroover 02-18-2011 11:35 AM

Oh yeah, I forgot to mention it, but you could also look at the file it's attempting to download when it does get one with spaces.

zcrxsir88 02-18-2011 11:41 AM

Escaping with Quotes
 
Tried escaping with quotes... that didn't work either!

pgroover 02-18-2011 11:53 AM

Have you tried looking at the filename it attempts to download when met with a URL with spaces?

Dogs 02-18-2011 10:58 PM

Look at the filename assignment. This may help you.

To make this more versatile, you can construct your os.system() line from user input:



Code:


import urllib2
import os

# look here, too. raw_input() would be your http://whatever.notcom
os_system_line = 'curl ' + raw_input() + ' | grep -i rar | cut -d \'"\' -f2 > temp.out'

os.system(os_system_line)

infile = open('temp.out', 'r')

for url in infile:
        print url
#url = "http://download.thinkbroadband.com/10MB.zip"

        #url = target

        #Remember that Linux doesn't like spaces so much, and that Python strings are immutable, so operations on strings will return strings which can be further operated on.
        file_name = url.replace('http://www.insidepro.com/Dictionaries', '').replace(' ', '_')

        u = urllib2.urlopen(url)
        f = open(file_name, 'w')
        meta = u.info()
        file_size = int(meta.getheaders("Content-Length")[0])
        print "Downloading: %s Bytes: %s" % (file_name, file_size)

        file_size_dl = 0
        block_sz = 8192
        while True:
            buffer = u.read(block_sz)
            if not buffer:
                break

            file_size_dl += block_sz
            f.write(buffer)
            status = r"%10d  [%3.2f%%]" % (file_size_dl, file_size_dl * 100. / file_size)
            status = status + chr(8)*(len(status)+1)
            print status,

        f.close()
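The point about immutable strings above can be shown in isolation. Note also that iterating over a file yields each line with a trailing newline, so a `.strip()` is needed before the name is usable (the example URL below is reused from the original post):

```python
# Sketch: chain string operations to turn one URL line from temp.out
# into a safe local filename. Each call returns a new string (strings
# are immutable), so the calls compose left to right.
url_line = "http://www.insidepro.com/dictionaries/Belarusian (Classical Spelling).rar\n"
file_name = url_line.strip().split("/")[-1].replace(" ", "_")
print(file_name)  # → Belarusian_(Classical_Spelling).rar
```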


zcrxsir88 02-18-2011 11:57 PM

Still not working... Oh well.

Nope... still craps the bed with a bunch of different errors...

I was trying to do this as a project rather than using bash scripting, but I guess trying to reinvent the wheel for fun is an exercise in futility when you don't completely understand the programming language at hand.

So back to the basics...wget it is!

Thank you all for the help! I really appreciate it!

-V

bgeddy 02-19-2011 04:44 AM

If you're parsing HTML documents (or XML) you really should look at BeautifulSoup. It makes parsing HTML for web scraping a real doddle. I've bashed together a little Python script that should do what you want, downloading all the dictionaries from the page you showed in your original code. As you can see it's very small, as BeautifulSoup does all the hard work. Anyway, here it is:
Code:

from urllib2 import urlopen, quote
from BeautifulSoup import BeautifulSoup

page = urlopen("http://www.insidepro.com/eng/download.shtml")
soup = BeautifulSoup(page)
for item in soup.findAll('a', href=True):
    this_href = item["href"]
    if  u"/dictionaries/" in this_href:
        local_file = this_href.split("/")[-1]
        remote_file = quote(this_href, safe=":/")
        print "downloading: " + remote_file +" to: " + local_file
        rfile = urlopen(remote_file)
        with open(local_file, "wb") as lfile:
            lfile.write(rfile.read())

This was put together very quickly and there is no error checking in it, so obviously it needs some work if it is to be used in production. It works, though.
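For readers without BeautifulSoup installed, the same href-harvesting idea can be sketched with the standard library's `html.parser` (the HTML snippet below is a made-up stand-in for the real download page):

```python
# Sketch: collect <a href> values that point at /dictionaries/ using
# only the standard library, mirroring the BeautifulSoup loop above.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and "/dictionaries/" in value:
                    self.hrefs.append(value)

sample = ('<a href="/dictionaries/English.rar">English</a>'
          '<a href="/eng/about.shtml">About</a>')
collector = LinkCollector()
collector.feed(sample)
print(collector.hrefs)  # → ['/dictionaries/English.rar']
```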

