
trouble writing to csv with BeautifulSoup and Python

I have a problem with writing the scraped data to a csv file. The pages load and the first part of the script works, but writing to csv causes a problem. I tried to convert the scraped data to integers, because that worked well for me in other projects; however, in this project there seems to be a problem.

The error I get is:

    ValueError: invalid literal for int() with base 10: '\nNotes To A Friend: The Experience\n'
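
The string in the traceback is the project's title, so there is nothing numeric in it for int() to parse; a minimal illustration of keeping it as a stripped string instead:

    title = '\nNotes To A Friend: The Experience\n'  # the value from the traceback
    # int(title) raises ValueError because the text is a title, not a number
    print(title.strip())  # prints: Notes To A Friend: The Experience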

The question I have is: how can I write the data to csv in a more sophisticated way?

Code:

    import urllib.request
    from bs4 import BeautifulSoup
    from selenium import webdriver
    import pandas as pd
    import time 
    from datetime import datetime
    from collections import OrderedDict
    import re

    browser = webdriver.Firefox()
    browser.get('https://www.kickstarter.com/discover?ref=nav')
    categories = browser.find_elements_by_class_name('category-container')

    category_links = []
    for category_link in categories:
        # Each item in the list is a tuple of the category's name and its link.
        category_links.append((str(category_link.find_element_by_class_name('f3').text),
                               category_link.find_element_by_class_name('bg-white').get_attribute('href')))

    scraped_data = []
    now = datetime.now()
    counter = 1

    for category in category_links:
        browser.get(category[1])
        browser.find_element_by_class_name('sentence-open').click()
        time.sleep(2)
        browser.find_element_by_id('category_filter').click()
        time.sleep(2)

        for i in range(27):
            try:
                time.sleep(2)
                browser.find_element_by_id('category_'+str(i)).click()
                time.sleep(2)
            except:
                pass

        projects = []
        for project_link in browser.find_elements_by_class_name('clamp-3'):
            projects.append(project_link.find_element_by_tag_name('a').get_attribute('href'))

        for counter, project in enumerate(projects):
            page1 = urllib.request.urlopen(projects[counter])
            soup1 = BeautifulSoup(page1, "lxml")
            page2 = urllib.request.urlopen(projects[counter].split('?')[0]+'/community')
            soup2 = BeautifulSoup(page2, "lxml")
            time.sleep(2)
            print(str(counter)+': '+project+'\nStatus: Started.')
            project_dict = OrderedDict()
            project_dict['Category'] = category[0]
            browser.get(project)
            project_dict['Name'] = int(soup1.find(class_='type-24 type-28-sm type-38-md navy-700 medium mb3').text)

            project_dict['Home State'] = int(soup1.find(class_='nowrap navy-700 flex items-center medium type-12').text)

            try:
                project_dict['Backer State'] = int(soup2.find(class_='location-list-wrapper js-location-list-wrapper').text)
            except:
                pass

            print('Status: Done.')
            counter += 1
            scraped_data.append(project_dict)

            later = datetime.now()
            diff = later - now

            print('The scraping took '+str(round(diff.seconds/60.0,2))+' minutes, and scraped '+str(len(scraped_data))+' projects.')

            df = pd.DataFrame(scraped_data)
            df.to_csv('kickstarter-data1.csv')

A couple of changes need to be made here, apart from stopping the integer conversion of the parsed text:

  • Initialize BeautifulSoup using the html5lib parser, like this: BeautifulSoup(page1, "html5lib")
  • Read the response first. BeautifulSoup needs to be passed a str object as the first argument.


    response = urllib.request.urlopen(projects[counter])
    page1 = response.read()
    soup1 = BeautifulSoup(page1, "html5lib")
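
Building on that, below is a minimal sketch (not the original code, and assuming the class names from the question still match the Kickstarter pages) of a helper that keeps the scraped values as stripped strings instead of casting them to int, and leaves the CSV write to a single call once all projects have been collected:

    import urllib.request
    from collections import OrderedDict
    from bs4 import BeautifulSoup
    import pandas as pd

    def scrape_project(url, category_name):
        """Fetch one project page and return its fields as plain strings (sketch)."""
        response = urllib.request.urlopen(url)
        # Pass the read content to BeautifulSoup and parse it with html5lib
        soup = BeautifulSoup(response.read(), "html5lib")

        row = OrderedDict()
        row['Category'] = category_name

        # get_text(strip=True) removes the surrounding '\n' seen in the traceback;
        # the title stays a string instead of being cast with int()
        name_tag = soup.find(class_='type-24 type-28-sm type-38-md navy-700 medium mb3')
        if name_tag is not None:
            row['Name'] = name_tag.get_text(strip=True)

        state_tag = soup.find(class_='nowrap navy-700 flex items-center medium type-12')
        if state_tag is not None:
            row['Home State'] = state_tag.get_text(strip=True)

        return row

    # In the existing loops: scraped_data.append(scrape_project(project, category[0]))
    # Then write the file once, after all projects have been collected:
    # pd.DataFrame(scraped_data).to_csv('kickstarter-data1.csv', index=False)

Keeping the values as strings lets pandas write them to the CSV as-is; any column that really is numeric can still be converted afterwards with pd.to_numeric.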
