download all the files of Version A3(1.0) on the web page
How do I write a script to download all the files of Version A3(1.0) from this web page?
ftp://ftp-sj.cisco.com/pub/mibs/supp...portlist.html?
Code:
lynx -dump "ftp://ftp-sj.cisco.com/pub/mibs/supportlists/ace-appliance/ace-appliance-supportlist.html?" | grep -o "*.my" > file.txt
if you have Python
Code:
#!/usr/bin/env python
Quote:
grep -o "*.my" will create an empty file.txt. I do this instead:

Code:
lynx -dump "ftp://ftp-sj.cisco.com/pub/mibs/supportlists/ace-appliance/ace-appliance-supportlist.html?" | grep ".my" > file.txt

file.txt is:

Code:
href="ftp://ftp.cisco.com/pub/mibs/v2/CISCO-AAA-SERVER-EXT-MIB.my">CISCO- class=SpellE>MIB.my</SPAN><BR></A>
<A href="ftp://ftp.cisco.com/pub/mibs/v2/CISCO-AAA-SERVER-MIB.my">CISCO-AAA- class=SpellE>MIB.my</SPAN></A><BR>
<A href="ftp://ftp.cisco.com/pub/mibs/v2/CISCO-ENHANCED-SLB-MIB.my">CISCO-EN class=SpellE>MIB.my</SPAN></A><BR>
<A href="ftp://ftp.cisco.com/pub/mibs/v2/CISCO-ENTITY-VENDORTYPE-OID-MIB.my" class=SpellE>MIB.my</SPAN></A><BR>
<A href="ftp://ftp.cisco.com/pub/mibs/v2/CISCO-IF-EXTENSION-MIB.my">CISCO-IF class=SpellE>MIB.my</SPAN></A><BR>
<A href="ftp://ftp.cisco.com/pub/mibs/v2/CISCO-IP-PROTOCOL-FILTER-MIB.my">CI class=SpellE>MIB.my</SPAN></A><BR>
<A ...
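The empty file.txt comes from grep treating its pattern as a regular expression, not a shell glob: in a regex, "*.my" does not mean "any filename ending in .my". A pattern anchored on the URL scheme, such as grep -o 'ftp://[^"]*\.my', would extract just the links. Since the thread moves on to Python, here is a minimal Python sketch of the same idea, run against a small sample of the markup quoted above (the sample string is illustrative, not the full page). Written for Python 3; the thread itself uses Python 2.4.

```python
import re

# A small sample of the messy markup quoted above. The regex pulls
# out only the ftp:// URLs ending in .my, ignoring the broken tags
# around them.
html = ('<A href="ftp://ftp.cisco.com/pub/mibs/v2/CISCO-AAA-SERVER-EXT-MIB.my">'
        '<A href="ftp://ftp.cisco.com/pub/mibs/v2/CISCO-AAA-SERVER-MIB.my">')

# [^"]* stops at the closing quote of the href attribute, so each
# match is one clean URL.
links = re.findall(r'ftp://[^"]*\.my', html)
for link in links:
    print(link)
```

The same regex works equally well piped through grep -o on the lynx -dump output, without any of the surrounding HTML leaking into file.txt.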
Quote:
$ python
Python 2.4.4 (#1, Oct 23 2006, 13:58:00)
[GCC 4.1.1 20061011 (Red Hat 4.1.1-30)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>

I put your script in a file and ran it:

$ ~/python/downloadFile.py
Traceback (most recent call last):
  File "/home/powah/python/downloadFile.py", line 7, in ?
    for item in data:
NameError: name 'data' is not defined
Quote:
Code:
.....
Quote:
Thanks!
download all files from the web page
I want to download all files from the web page
ftp://ftp-sj.cisco.com/pub/mibs/supp...pportlist.html. I modified the script:

Code:
#!/usr/bin/env python
import urllib2

url = "ftp://ftp-sj.cisco.com/pub/mibs/supportlists/vpn3000/vpn3000-supportlist.html"
page = urllib2.urlopen(url)
links = []
data = page.read().split("\n")
for item in data:
    if "href" in item:
        item = item.replace('href="', "").strip()
        ind = item.index('">')
        links.append(item[:ind])  # grab all ftp links

# download all links
for link in links:
    filename = link.split("/")[-1]
    print "downloading ... " + filename
    u = urllib2.urlopen(link)
    p = u.read()
    open(filename, "w").write(p)

Running the script gives the following error. Please help. Thanks.

Code:
$ ~/python/downloadFile2.py
downloading ... v2
downloading ... ADMIN-AUTH-STATS-MIB.my
downloading ... ALTIGA-ADDRESS-STATS-MIB.my
downloading ... ALTIGA-BMGT-STATS-MIB.my
downloading ... ALTIGA-CAP.my
Traceback (most recent call last):
  File "/home/powah/python/downloadFile2.py", line 20, in ?
    u=urllib2.urlopen(link)
  File "/usr/lib/python2.4/urllib2.py", line 130, in urlopen
    return _opener.open(url, data)
  File "/usr/lib/python2.4/urllib2.py", line 358, in open
    response = self._open(req, data)
  File "/usr/lib/python2.4/urllib2.py", line 381, in _open
    'unknown_open', req)
  File "/usr/lib/python2.4/urllib2.py", line 337, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.4/urllib2.py", line 1053, in unknown_open
    raise URLError('unknown url type: %s' % type)
urllib2.URLError: <urlopen error unknown url type: <dd><a ftp>
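The traceback shows what went wrong: "unknown url type: <dd><a ftp". The line-based parse strips 'href="' but keeps whatever preceded it on the same line, so tag residue like <dd><a ends up glued to the front of the "link", and urlopen rejects it. A more robust approach is to capture the quoted href value with a regex and keep only strings that actually start with ftp://. A minimal Python 3 sketch of that fix (the sample line is illustrative; the original ran under Python 2.4, where the equivalent module is urllib2):

```python
import re

def extract_ftp_links(html):
    # Capture only the text between the quotes of href="...", then
    # drop anything that is not a well-formed ftp:// URL. This
    # filters out the '<dd><a ftp' residue that broke the original
    # line-based parse, and also skips relative links like the 'v2'
    # directory entry that was downloaded by mistake above.
    return [link for link in re.findall(r'href="([^"]+)"', html)
            if link.startswith("ftp://")]

sample = ('<dd><a href="ftp://ftp.cisco.com/pub/mibs/v2/ALTIGA-CAP.my">'
          'ALTIGA-CAP.my</a> <a href="/pub/mibs/v2/">parent dir</a>')
print(extract_ftp_links(sample))
```

Feeding the page text through extract_ftp_links before the download loop means every item passed to urlopen is a real URL.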
Quote:
Code:
import urllib2, os, urlparse
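Only the import line of this reply survives in the archive, so the body is lost. As a hedged guess at its shape based on those imports: urlparse would be used to split each URL cleanly and os to build the local path, rather than splitting on "/" by hand. A Python 3 equivalent sketch (urllib.parse and urllib.request replace urlparse and urllib2; the function names here are illustrative, not from the lost reply):

```python
import os
import urllib.parse
import urllib.request

def local_name(link):
    # Parse the URL properly instead of link.split("/")[-1]; the
    # path component is extracted first, then its basename.
    path = urllib.parse.urlparse(link).path
    return os.path.basename(path)

def download_all(links, dest="."):
    # urlretrieve handles ftp:// as well as http:// URLs.
    for link in links:
        target = os.path.join(dest, local_name(link))
        print("downloading ...", target)
        urllib.request.urlretrieve(link, target)

print(local_name("ftp://ftp.cisco.com/pub/mibs/v2/CISCO-AAA-SERVER-MIB.my"))
# prints "CISCO-AAA-SERVER-MIB.my"
```

Combined with a link filter that accepts only ftp:// URLs, this avoids both failure modes seen earlier in the thread: tag residue passed to urlopen, and directory entries like "v2" being saved as files.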