
UnicodeEncodeError: Scraping data using Python and beautifulsoup4

I am trying to scrape data from the PGA website to get a list of all the golf courses in the USA. I want to scrape the data and write it into a CSV file. My problem is that after running my script I get this error. Can anyone help fix this error, and how can I go about extracting the data?

Here is the error message:

File "/Users/AGB/Final_PGA2.py", line 44, in <module>
    writer.writerow(row)

UnicodeEncodeError: 'ascii' codec can't encode character u'\u201c' in position 35: ordinal not in range(128)
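The failure is easy to reproduce in isolation. On Python 2 the csv module writes byte strings, so unicode values get run through the default ASCII codec, which has no byte for the curly quote U+201C. A minimal sketch (the course name here is made up for illustration):

```python
# U+201C is a left curly double quote scraped from the page; the ASCII
# codec cannot represent it, so encoding raises the same UnicodeEncodeError
# shown in the traceback above.
try:
    u"Pebble Beach \u201cGolf Links\u201d".encode("ascii")
except UnicodeEncodeError as exc:
    print(exc)
```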

Script below:

import csv
import requests
from bs4 import BeautifulSoup

courses_list = []
for i in range(906):  # number of pages plus one
    url = "http://www.pga.com/golf-courses/search?page={}&searchbox=Course+Name&searchbox_zip=ZIP&distance=50&price_range=0&course_type=both&has_events=0".format(i)
    r = requests.get(url)
    soup = BeautifulSoup(r.content)

    g_data2 = soup.find_all("div", {"class": "views-field-nothing"})

    for item in g_data2:
        try:
            name = item.contents[1].find_all("div", {"class": "views-field-title"})[0].text
            print name
        except:
            name = ''
        try:
            address1 = item.contents[1].find_all("div", {"class": "views-field-address"})[0].text
        except:
            address1 = ''
        try:
            address2 = item.contents[1].find_all("div", {"class": "views-field-city-state-zip"})[0].text
        except:
            address2 = ''
        try:
            website = item.contents[1].find_all("div", {"class": "views-field-website"})[0].text
        except:
            website = ''
        try:
            Phonenumber = item.contents[1].find_all("div", {"class": "views-field-work-phone"})[0].text
        except:
            Phonenumber = ''

        course = [name, address1, address2, website, Phonenumber]
        courses_list.append(course)

with open('PGA_Final.csv', 'a') as file:
    writer = csv.writer(file)
    for row in courses_list:
        writer.writerow(row)

You should not get the error on Python 3. Here's a code example that fixes some unrelated issues in your code. It parses the specified fields on a given web page and saves them as CSV:

#!/usr/bin/env python3
import csv
from urllib.request import urlopen
import bs4 # $ pip install beautifulsoup4

page = 905
url = ("http://www.pga.com/golf-courses/search?page=" + str(page) +
       "&searchbox=Course+Name&searchbox_zip=ZIP&distance=50&price_range=0"
       "&course_type=both&has_events=0")
with urlopen(url) as response:
    field_content = bs4.SoupStrainer('div', 'views-field-nothing')
    soup = bs4.BeautifulSoup(response, parse_only=field_content)

fields = [bs4.SoupStrainer('div', 'views-field-' + suffix)
          for suffix in ['title', 'address', 'city-state-zip', 'website', 'work-phone']]

def get_text(tag, default=''):
    return tag.get_text().strip() if tag is not None else default

with open('pga.csv', 'w', newline='') as output_file:
    writer = csv.writer(output_file)
    for div in soup.find_all(field_content):
        writer.writerow([get_text(div.find(field)) for field in fields])
with open('PGA_Final.csv', 'a') as file:
    writer = csv.writer(file)
    for row in courses_list:
        writer.writerow(row)

Change that to (note that `row` is a list, so each field has to be encoded individually):

with open('PGA_Final.csv', 'a') as file:
    writer = csv.writer(file)
    for row in courses_list:
        writer.writerow([field.encode('utf-8') for field in row])

Or:

import codecs
....
with codecs.open('PGA_Final.csv', 'a', encoding='utf-8') as file:
    writer = csv.writer(file)
    for row in courses_list:
        writer.writerow(row)
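For completeness, on Python 3 neither workaround is needed: `open()` accepts an `encoding` argument and the csv module works purely with text. A minimal sketch, with made-up sample rows standing in for the scraped data:

```python
import csv

# Sample rows; the first name contains the U+201C/U+201D curly quotes
# that broke the Python 2 script.
courses_list = [
    ["Pebble Beach \u201cGolf Links\u201d", "CA"],
    ["Augusta National", "GA"],
]

# Python 3: the file object handles the encoding, so no per-field
# .encode() calls are needed before writing.
with open("PGA_Final.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerows(courses_list)
```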

Note: the posts in this thread are licensed under CC BY-SA 4.0; if you repost, please credit the original source.
