
Trying to export the data from the crawl to a csv file

I found this code online and would like to use it, but I can't find a way to export the scraped data to a csv file.

# Python 3: urlopen lives in urllib.request (urllib.urlopen was Python 2)
from urllib.request import urlopen
from bs4 import BeautifulSoup


url = "http://www.straitstimes.com/tags/malaysia-crimes"

html = urlopen(url).read()
soup = BeautifulSoup(html, "html.parser")  # explicit parser avoids a bs4 warning

# kill all script and style elements
for script in soup(["script", "style"]):
    script.extract()    # rip it out

# get text
text = soup.body.get_text()

# break into lines and remove leading and trailing space on each
lines = (line.strip() for line in text.splitlines())
# break multi-headlines into a line each
chunks = (phrase.strip() for line in lines for phrase in line.split("    "))
# drop blank lines
text = '\n'.join(chunk for chunk in chunks if chunk)

print(text)

The following seems to do what I think you want:

I use the xlwt package to create, write to, and save a workbook, then loop through each line of the text and write it to the workbook. I save it as testing.csv. Note that xlwt writes Excel's binary .xls format, so despite the file name the result is not a true CSV.

# Python 3: urlopen lives in urllib.request (urllib.urlopen was Python 2)
from urllib.request import urlopen
from bs4 import BeautifulSoup
from xlwt import Workbook


url = "http://www.straitstimes.com/tags/malaysia-crimes"

html = urlopen(url).read()
soup = BeautifulSoup(html, "html.parser")  # explicit parser avoids a bs4 warning

# create excel workbook
wb = Workbook()
sheet1 = wb.add_sheet('Sheet 1')

# kill all script and style elements
for script in soup(["script", "style"]):
    script.extract()    # rip it out

# get text
text = soup.body.get_text()

# break into lines and remove leading and trailing space on each
lines = (line.strip() for line in text.splitlines())
# break multi-headlines into a line each
chunks = (phrase.strip() for line in lines for phrase in line.split("    "))
# drop blank lines
text = '\n'.join(chunk for chunk in chunks if chunk)

print(text)

# go through each line and write it to a new row in the sheet
# (xlwt rows and columns are 0-indexed)
for row, text_to_write in enumerate(text.splitlines()):
    sheet1.write(row, 0, text_to_write)

wb.save('testing.csv')  # xlwt saves binary .xls data regardless of the extension
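Since xlwt always produces Excel's binary .xls format, a genuine CSV is easier to get with the standard-library csv module. A minimal sketch, using a hypothetical `lines` list in place of the `text.splitlines()` result from the scrape above:

```python
import csv

# Hypothetical stand-in for text.splitlines() from the scraped page.
lines = ["Headline one", "Headline two", "Headline three"]

# Write each line as a single-column row in a real CSV file.
# newline="" is required so csv controls line endings itself.
with open("testing.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    for line in lines:
        writer.writerow([line])
```

This avoids the .xls/.csv mismatch entirely: the output opens as plain text and in any spreadsheet program.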

