
Is there a better, simpler way to download multiple files?

I downloaded some turnstile data from the New York City MTA website and came up with a script to download only the 2017 data, using nothing but Python.

Here is the script:

import urllib
import re

# Fetch the page that lists all of the turnstile data files (Python 2 urllib).
html = urllib.urlopen('http://web.mta.info/developers/turnstile.html').read()

# Match hrefs like data/nyct/turnstile/turnstile_170121.txt (2017 files only).
links = re.findall(r'href="(data/\S*17[01]\S*[a-z])"', html)

for link in links:
    txting = urllib.urlopen('http://web.mta.info/developers/' + link).read()
    lin = link[20:40]  # slice off the path prefix, leaving e.g. turnstile_170121.txt
    fhand = open(lin, 'w')
    fhand.write(txting)
    fhand.close()

Is there a simpler way to write this script?

As @dizzyf suggested, you can use BeautifulSoup to pull the href values out of the page.

from bs4 import BeautifulSoup  # note: the package name is lower-case bs4

soup = BeautifulSoup(html, 'html.parser')
links = [link.get('href') for link in soup.find_all('a')
         if link.get('href') and 'turnstile_17' in link.get('href')]

If you don't have to fetch the files from Python (and the wget command is available on your system), you can write the links to a file:

with open('url_list.txt', 'w') as url_file:
    for url in links:
        # the hrefs are relative, so prepend the site prefix to give wget full URLs
        url_file.write('http://web.mta.info/developers/' + url + '\n')

Then download them with wget:

$ wget -i url_list.txt

wget -i downloads all of the URLs listed in the file into the current directory, preserving the original file names.
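If you'd rather keep the downloads out of the current directory, GNU wget also accepts a target directory via -P (--directory-prefix); for example (mta_data/ is just an illustrative directory name):

$ wget -i url_list.txt -P mta_data/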

The code below should do what you need.

import requests
import bs4
import time
import random
import re

pattern = re.compile('2017')  # match only links whose text mentions 2017
url_base = 'http://web.mta.info/developers/'
url_home = url_base + 'turnstile.html'
response = requests.get(url_home)
data = dict()

soup = bs4.BeautifulSoup(response.text, 'html.parser')
links = [link.get('href') for link in soup.find_all('a', text=pattern)]
for link in links:
    url = url_base + link
    print "Pulling data from:", url
    response = requests.get(url)
    # I don't know what you want to do with the data, so here I just store it
    # in a dict, but you could store it to a file as you did in your example.
    data[link] = response.text
    not_a_robot = random.randint(2, 15)
    print "Waiting %d seconds before next query." % not_a_robot
    time.sleep(not_a_robot)  # some APIs will throttle you if you hit them too quickly
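If you would rather write each file to disk (as the original script does) instead of collecting everything in a dict, here is a minimal sketch of that variant. It reuses links and url_base from the answer above; out_dir is a hypothetical directory name, not anything from the MTA site:

import os
import requests

out_dir = 'turnstile_2017'  # hypothetical output directory
if not os.path.isdir(out_dir):
    os.makedirs(out_dir)

for link in links:  # `links` and `url_base` as defined in the answer above
    response = requests.get(url_base + link)
    filename = os.path.basename(link)  # e.g. turnstile_170121.txt
    with open(os.path.join(out_dir, filename), 'wb') as fhand:
        fhand.write(response.content)  # raw bytes, so this works on Python 2 and 3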

