
How can I write scraped content to a CSV file?

I need some help to save the output from a basic web scraper to a CSV file.

Here is the code:

from urllib.request import urlopen
from bs4 import BeautifulSoup
import csv

html_ = urlopen("some_url")
bsObj_ = BeautifulSoup(html_, "html.parser")
nameList_ = bsObj_.findAll("div", {"class": "row proyecto_name_venta"})

for name in nameList_:
    print(name.get_text())

Specifically, I want to save the name.get_text() result in a CSV file.

If the elements in nameList_ are rows with the columns delimited by ',', try this:

import csv

# newline='' prevents blank rows on Windows when writing with the csv module
with open('out.csv', 'w', newline='') as outf:
    writer = csv.writer(outf)
    writer.writerows(name.get_text().split(',') for name in nameList_)

If name.get_text() is just a string and you want to write a single-column CSV, you might try this:

import csv

with open('out.csv', 'w', newline='') as outf:
    writer = csv.writer(outf)
    writer.writerows([name.get_text()] for name in nameList_)
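
If you also want a header row and an explicit encoding, a minimal variant could look like this (the column name 'name' and the UTF-8 encoding are just assumptions; adjust them to your data):

import csv

with open('out.csv', 'w', newline='', encoding='utf-8') as outf:
    writer = csv.writer(outf)
    # Header row; the column name is a placeholder
    writer.writerow(['name'])
    # One scraped value per row, with surrounding whitespace stripped
    writer.writerows([name.get_text().strip()] for name in nameList_)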

This is a pretty comprehensive example of what you asked for (note that it uses Python 2's urllib2):

import urllib2

listOfStocks = ["AAPL", "MSFT", "GOOG", "FB", "AMZN"]

urls = []

# Build one historical-prices CSV URL per ticker
for company in listOfStocks:
    urls.append('http://real-chart.finance.yahoo.com/table.csv?s=' + company + '&d=6&e=28&f=2015&g=m&a=11&b=12&c=1980&ignore=.csv')

Output_File = open('C:/Users/rshuell001/Historical_Prices.csv','w')

New_Format_Data = ''

for counter in range(0, len(urls)):

    # Download the CSV for this ticker
    Original_Data = urllib2.urlopen(urls[counter]).read()

    # Write the header once, prefixed with a new "Company" column
    if counter == 0:
        New_Format_Data = "Company," + urllib2.urlopen(urls[counter]).readline()

    # splitlines(1) keeps the line endings, so each row can be appended as-is
    rows = Original_Data.splitlines(1)

    # Skip each file's own header and prepend the ticker to every data row
    for row in range(1, len(rows)):
        New_Format_Data = New_Format_Data + listOfStocks[counter] + ',' + rows[row]

Output_File.write(New_Format_Data)
Output_File.close()
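
Since the question imports urllib.request (Python 3), here is a rough Python 3 sketch of the same idea using the csv module; the Yahoo URL format is carried over from the answer above and may no longer be served:

from urllib.request import urlopen
import csv

list_of_stocks = ["AAPL", "MSFT", "GOOG", "FB", "AMZN"]

with open('Historical_Prices.csv', 'w', newline='') as out_file:
    writer = csv.writer(out_file)
    for counter, company in enumerate(list_of_stocks):
        url = ('http://real-chart.finance.yahoo.com/table.csv?s=' + company +
               '&d=6&e=28&f=2015&g=m&a=11&b=12&c=1980&ignore=.csv')
        # Download the per-company CSV and decode the bytes to text
        lines = urlopen(url).read().decode('utf-8').splitlines()
        # Write the combined header once, with an extra "Company" column
        if counter == 0:
            writer.writerow(['Company'] + lines[0].split(','))
        # Skip each file's own header and prepend the ticker to every data row
        for line in lines[1:]:
            writer.writerow([company] + line.split(','))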
