CSV not writing properly after scraping data for a website using Python and BeautifulSoup
I am having trouble writing the CSV file after scraping data from a website. My goal is to scrape a list of the names and addresses of golf courses found in the US. I used .get_text(separator=' ') on the address to remove the <br> tags that break up the address text, but when the CSV is written I only get three entries out of 893 iterations. What can I do so that I get the right amount of scraped data, and how can I fix the script so that everything is scraped correctly?
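As a minimal illustration of what `.get_text(separator=' ')` does here (a standalone sketch assuming bs4 is installed; the HTML snippet is invented to mirror one address block):

```python
from bs4 import BeautifulSoup

# Hypothetical snippet shaped like one result's address block.
html = '<div class="location">1234 Fairway Dr<br/>Springfield, IL</div>'
soup = BeautifulSoup(html, "html.parser")

# Without a separator the strings around <br> run together
# ("1234 Fairway DrSpringfield, IL"); separator=' ' joins them with a space.
address = soup.find("div", {"class": "location"}).get_text(separator=' ')
print(address)  # 1234 Fairway Dr Springfield, IL
```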
Here is my script:
    import csv
    import requests
    from bs4 import BeautifulSoup

    courses_list = []
    for i in range(893):  # 893
        url = "http://sites.garmin.com/clsearch/courses/search?course=&location=&country=US&state=&holes=&radius=&lang=en&search_submitted=1&per_page={}".format(i*20)
        r = requests.get(url)
        soup = BeautifulSoup(r.text)
        g_data2 = soup.find_all("div", {"class": "result"})
        #print g_data
        for item in g_data2:
            try:
                name = item.find_all("div", {"class": "name"})[0].text
            except:
                name = ''
                print "No Name found!"
            try:
                address = item.find_all("div", {"class": "location"})[0].get_text(separator=' ')
                print address
            except:
                address = ''
                print "No Address found!"
        course = [name, address]
        courses_list.append(course)

    with open('Garmin_GC.csv', 'a') as file:
        writer = csv.writer(file)
        for row in courses_list:
            writer.writerow([s.encode("utf-8") for s in row])
If that is your indentation, it is wrong; you need to append the name and address inside the loop, which should add all the data:
    import csv
    import requests
    from bs4 import BeautifulSoup

    courses_list = []
    with open('Garmin_GC.csv', 'w') as file:
        for i in range(893):  # 893
            url = "http://sites.garmin.com/clsearch/courses/search?course=&location=&country=US&state=&holes=&radius=&lang=en&search_submitted=1&per_page={}".format(i * 20)
            r = requests.get(url)
            soup = BeautifulSoup(r.text)
            g_data2 = soup.find_all("div", {"class": "result"})
            for item in g_data2:
                try:
                    name = item.find_all("div", {"class": "name"})[0].text
                except IndexError:
                    name = ''
                    print "No Name found!"
                try:
                    address = item.find_all("div", {"class": "location"})[0].get_text(separator=' ')
                    print address
                except IndexError:
                    address = ''
                    print "No Address found!"
                course = [name, address]
                courses_list.append(course)
        writer = csv.writer(file)
        for row in courses_list:
            writer.writerow([s.encode("utf-8") for s in row])
You can open the file outside the loop and write once when you are done, or, if you don't want to store all the data in a list, just write on each iteration:
    with open('Garmin_GC.csv', 'w') as file:
        writer = csv.writer(file)
        for i in range(3):  # 893
            url = "http://sites.garmin.com/clsearch/courses/search?course=&location=&country=US&state=&holes=&radius=&lang=en&search_submitted=1&per_page={}".format(i * 20)
            r = requests.get(url)
            soup = BeautifulSoup(r.text)
            g_data2 = soup.find_all("div", {"class": "result"})
            for item in g_data2:
                try:
                    name = item.find_all("div", {"class": "name"})[0].text
                except IndexError:
                    name = ''
                    print "No Name found!"
                try:
                    address = item.find_all("div", {"class": "location"})[0].get_text(separator=' ')
                    print address
                except IndexError:
                    address = ''
                    print "No Address found!"
                writer.writerow([name.encode("utf-8"), address.encode("utf-8")])
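As a side note, the `.encode("utf-8")` calls are only needed on Python 2. On Python 3 a rough equivalent (a sketch, not part of the original answer) opens the file in text mode with an explicit encoding and `newline=''`, and writes the strings directly:

```python
import csv

# Example row; in the real script these would come from the scraper.
rows = [["Pebble Beach", "17 Mile Dr Pebble Beach, CA"]]

# Python 3: the file object handles encoding; newline='' prevents the
# csv module from producing blank lines on Windows.
with open('Garmin_GC.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    for row in rows:
        writer.writerow(row)  # no manual .encode() needed
```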
If you have no name or address, you may want to add a `continue` in the except block instead, if you want to ignore data that is missing either or both.
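For example, a minimal sketch of that `continue` variant in Python 3 syntax (the HTML here is invented to show one complete result and one with a missing address):

```python
from bs4 import BeautifulSoup

# Invented markup: the first result is complete, the second has no location div.
html = '''
<div class="result"><div class="name">Pine Hills</div>
  <div class="location">1 Golf Rd<br/>Austin, TX</div></div>
<div class="result"><div class="name">No Address Course</div></div>
'''

soup = BeautifulSoup(html, "html.parser")
rows = []
for item in soup.find_all("div", {"class": "result"}):
    try:
        name = item.find_all("div", {"class": "name"})[0].text
        address = item.find_all("div", {"class": "location"})[0].get_text(separator=' ')
    except IndexError:
        continue  # skip results missing a name or an address entirely
    rows.append([name, address])

print(rows)  # [['Pine Hills', '1 Golf Rd Austin, TX']]
```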