How can I loop to scrape data from multiple pages of a website using Python and BeautifulSoup4?
I am trying to scrape data from the PGA.com website to get a table of all the golf courses in the United States. In my CSV table I want to include the golf course's name, address, ownership, website, and phone number. With this data I would like to geocode it, put it on a map, and keep a local copy on my computer.
I am using Python and Beautiful Soup 4 to extract my data. I have gotten as far as extracting the data and importing it into a CSV, but I am now having a problem scraping data from the multiple pages on the PGA website. I want to extract all of the golf courses, but my script is limited to one page; I want it to loop so that it captures all the golf course data from every page found on the PGA site. There are about 18000 golf courses and roughly 900 pages of data to capture.
My script is attached below. I need help creating code that will capture data from the whole PGA website, not just a single page, so that it gives me all the data for golf courses in the United States.
Here is my script:
import csv
import requests
from bs4 import BeautifulSoup

url = "http://www.pga.com/golf-courses/search?searchbox=Course+Name&searchbox_zip=ZIP&distance=50&price_range=0&course_type=both&has_events=0"

r = requests.get(url)
soup = BeautifulSoup(r.content)

g_data1 = soup.find_all("div", {"class": "views-field-nothing-1"})
g_data2 = soup.find_all("div", {"class": "views-field-nothing"})

courses_list = []

for item in g_data2:
    try:
        name = item.contents[1].find_all("div", {"class": "views-field-title"})[0].text
    except:
        name = ''
    try:
        address1 = item.contents[1].find_all("div", {"class": "views-field-address"})[0].text
    except:
        address1 = ''
    try:
        address2 = item.contents[1].find_all("div", {"class": "views-field-city-state-zip"})[0].text
    except:
        address2 = ''
    try:
        website = item.contents[1].find_all("div", {"class": "views-field-website"})[0].text
    except:
        website = ''
    try:
        Phonenumber = item.contents[1].find_all("div", {"class": "views-field-work-phone"})[0].text
    except:
        Phonenumber = ''

    course = [name, address1, address2, website, Phonenumber]
    courses_list.append(course)

with open('filename5.csv', 'wb') as file:
    writer = csv.writer(file)
    for row in courses_list:
        writer.writerow(row)
# for item in g_data1:
#     try:
#         print item.contents[1].find_all("div", {"class": "views-field-counter"})[0].text
#     except:
#         pass
#     try:
#         print item.contents[1].find_all("div", {"class": "views-field-course-type"})[0].text
#     except:
#         pass

# for item in g_data2:
#     try:
#         print item.contents[1].find_all("div", {"class": "views-field-title"})[0].text
#     except:
#         pass
#     try:
#         print item.contents[1].find_all("div", {"class": "views-field-address"})[0].text
#     except:
#         pass
#     try:
#         print item.contents[1].find_all("div", {"class": "views-field-city-state-zip"})[0].text
#     except:
#         pass
The script only captures 20 courses at a time, and I want to capture everything in a single run that covers all 18000 golf courses across the roughly 900 pages.
The PGA website's search has multiple pages, and the URL follows this pattern:
http://www.pga.com/golf-courses/search?page=1 # Additional info after page parameter here
This means you can read the content of one page, then increase the value of page by 1, read the next page... and so on.
import csv
import requests
from bs4 import BeautifulSoup

for i in range(907):  # Number of pages plus one
    url = "http://www.pga.com/golf-courses/search?page={}&searchbox=Course+Name&searchbox_zip=ZIP&distance=50&price_range=0&course_type=both&has_events=0".format(i)
    r = requests.get(url)
    soup = BeautifulSoup(r.content)
    # Your code for each individual page here
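For concreteness, here is one way that skeleton could be filled in with the extraction logic from the question. This is only a sketch: the class names and URL come from the question, while the helper function, the parser choice, and the output filename are my own additions.

import csv
import requests
from bs4 import BeautifulSoup

BASE = ("http://www.pga.com/golf-courses/search?page={}"
        "&searchbox=Course+Name&searchbox_zip=ZIP&distance=50"
        "&price_range=0&course_type=both&has_events=0")

def field_text(item, css_class):
    # Hypothetical helper: return the text of the matching div, or '' if absent.
    div = item.find("div", {"class": css_class})
    return div.get_text(strip=True) if div else ''

courses_list = []
for i in range(907):  # pages 0 through 906
    soup = BeautifulSoup(requests.get(BASE.format(i)).content, "html.parser")
    for item in soup.find_all("div", {"class": "views-field-nothing"}):
        courses_list.append([
            field_text(item, "views-field-title"),
            field_text(item, "views-field-address"),
            field_text(item, "views-field-city-state-zip"),
            field_text(item, "views-field-website"),
            field_text(item, "views-field-work-phone"),
        ])

with open("courses.csv", "w", newline="") as f:  # Python 3 text mode, not 'wb'
    csv.writer(f).writerows(courses_list)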
If you are still reading this, you can try this code too...
from urllib.request import urlopen
from bs4 import BeautifulSoup

file = "Details.csv"
f = open(file, "w")
Headers = "Name,Address,City,Phone,Website\n"
f.write(Headers)

for page in range(1, 5):
    url = "http://www.pga.com/golf-courses/search?page={}&searchbox=Course%20Name&searchbox_zip=ZIP&distance=50&price_range=0&course_type=both&has_events=0".format(page)
    html = urlopen(url)
    soup = BeautifulSoup(html, "html.parser")
    Title = soup.find_all("div", {"class": "views-field-nothing"})
    for i in Title:
        try:
            name = i.find("div", {"class": "views-field-title"}).get_text()
            address = i.find("div", {"class": "views-field-address"}).get_text()
            city = i.find("div", {"class": "views-field-city-state-zip"}).get_text()
            phone = i.find("div", {"class": "views-field-work-phone"}).get_text()
            website = i.find("div", {"class": "views-field-website"}).get_text()
            print(name, address, city, phone, website)
            f.write("{}".format(name).replace(",", "|") + ",{}".format(address) + ",{}".format(city).replace(",", " ") + ",{}".format(phone) + ",{}".format(website) + "\n")
        except AttributeError:
            pass
f.close()
Where it says range(1,5), just change that to run from 0 to the last page and you will get all the details in the CSV. I tried very hard to get your data in the proper format, but it's hard :).
You are pointing the link at a single page; it is not going to iterate through each page on its own.
Page 1:
url = "http://www.pga.com/golf-courses/search?searchbox=Course+Name&searchbox_zip=ZIP&distance=50&price_range=0&course_type=both&has_events=0"
Page 2:
http://www.pga.com/golf-courses/search?page=1&searchbox=Course%20Name&searchbox_zip=ZIP&distance=50&price_range=0&course_type=both&has_events=0
Page 907: http://www.pga.com/golf-courses/search?page=906&searchbox=Course%20Name&searchbox_zip=ZIP&distance=50&price_range=0&course_type=both&has_events=0
Since you were running for page 1, you only got 20 results. You will need to create a loop that runs through every page.
You could start by writing a function that handles a single page and then iterate that function, as sketched below.
Notice that after search? in the URL, starting with the second page, page=1 appears and keeps increasing until the 907th page, which has page=906.
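A minimal sketch of that "one function per page, then iterate" idea follows; the function name and the page range are my own choices, while the URL pattern is the one shown above.

import requests
from bs4 import BeautifulSoup

SEARCH = ("http://www.pga.com/golf-courses/search?page={}"
          "&searchbox=Course%20Name&searchbox_zip=ZIP&distance=50"
          "&price_range=0&course_type=both&has_events=0")

def scrape_page(page_number):
    # Fetch one results page and return the course containers found on it.
    r = requests.get(SEARCH.format(page_number))
    soup = BeautifulSoup(r.content, "html.parser")
    return soup.find_all("div", {"class": "views-field-nothing"})

for page_number in range(907):  # page=0 through page=906
    for item in scrape_page(page_number):
        pass  # parse each course as in the question's script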
I noticed that the first solution repeated the first set of results; that is because page 0 and page 1 are the same page. This is resolved by specifying the starting page in the range function. Example below...
for i in range(1, 907):  # Number of pages plus one
    url = "http://www.pga.com/golf-courses/search?page={}&searchbox=Course+Name&searchbox_zip=ZIP&distance=50&price_range=0&course_type=both&has_events=0".format(i)
    r = requests.get(url)
    soup = BeautifulSoup(r.content, "html5lib")  # Can use whichever parser you prefer
    # Your code for each individual page here
I had the same problem, and the solutions above did not work. I solved mine by accounting for cookies. A requests session helps. Create a session, and it will pull all the pages you need by feeding the cookie to all the numbered pages.
import csv
import requests
from bs4 import BeautifulSoup
url = "http://www.pga.com/golf-courses/search?searchbox=Course+Name&searchbox_zip=ZIP&distance=50&price_range=0&course_type=both&has_events=0"
s = requests.Session()
r = s.get(url)
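The answer stops at the first request. Presumably the same session would then be reused for the numbered pages, continuing the snippet above roughly like this; the loop and the parser choice are my assumptions:

for i in range(1, 907):
    # The session automatically re-sends the cookies set by the first request.
    r = s.get("http://www.pga.com/golf-courses/search?page={}&searchbox=Course+Name&searchbox_zip=ZIP&distance=50&price_range=0&course_type=both&has_events=0".format(i))
    soup = BeautifulSoup(r.content, "html.parser")
    # parse each numbered page as in the earlier answers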
The PGA website has changed since this question was asked.
It seems they now organize all of the courses by: State > City > Course.
Given that change and the popularity of this question, here is how I would solve it today.
Step 1 - Import everything we need:
import time
import random
from gazpacho import Soup # https://github.com/maxhumber/gazpacho
from tqdm import tqdm # to keep track of progress
Step 2 - Scrape all of the state URL endpoints:
URL = "https://www.pga.com"
def get_state_urls():
soup = Soup.get(URL + "/play")
a_tags = soup.find("ul", {"data-cy": "states"}, mode="first").find("a")
state_urls = [URL + a.attrs['href'] for a in a_tags]
return state_urls
state_urls = get_state_urls()
Step 3 - Write a function to scrape all of the city links:
def get_state_cities(state_url):
    soup = Soup.get(state_url)
    a_tags = soup.find("ul", {"data-cy": "city-list"}).find("a")
    state_cities = [URL + a.attrs['href'] for a in a_tags]
    return state_cities
state_url = state_urls[0]
city_links = get_state_cities(state_url)
Step 4 - Write a function to scrape all of the courses:
def get_courses(city_link):
    soup = Soup.get(city_link)
    courses = soup.find("div", {"class": "MuiGrid-root MuiGrid-item MuiGrid-grid-xs-12 MuiGrid-grid-md-6"}, mode="all")
    return courses
city_link = city_links[0]
courses = get_courses(city_link)
Step 5 - Write a function to parse all of the useful information about a course:
def parse_course(course):
    return {
        "name": course.find("h5", mode="first").text,
        "address": course.find("div", {'class': "jss332"}, mode="first").text,
        "url": course.find("a", mode="first").attrs["href"]
    }
course = courses[0]
parse_course(course)
Step 6 - Iterate over everything and save:
all_courses = []
for state_url in tqdm(state_urls):
    city_links = get_state_cities(state_url)
    time.sleep(random.uniform(1, 10) / 10)
    for city_link in city_links:
        courses = get_courses(city_link)
        time.sleep(random.uniform(1, 10) / 10)
        for course in courses:
            info = parse_course(course)
            all_courses.append(info)
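The loop above ends with the all_courses list still in memory. A minimal way to actually save it, assuming the dict keys produced by parse_course above and a filename of my own choosing, would be:

import csv

with open("courses.csv", "w", newline="") as f:
    # Field names match the keys returned by parse_course.
    writer = csv.DictWriter(f, fieldnames=["name", "address", "url"])
    writer.writeheader()
    writer.writerows(all_courses)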