Issues with outputting the scraped data to a csv file using python and beautiful soup
I have a csv list of URLs that I want to scrape and compile into a single csv file. I want the data from each URL to be one row in that csv file. I have about 19,000 URLs to scrape, but I'm trying to work it out with just a few first. I can scrape the pages and see the output in the terminal, but when I export to a csv file, only the last page shows up.
The URLs appear in the csv file like this:
http://www.gpo.gov/fdsys/pkg/CREC-2005-01-26/html/CREC-2005-01-26-pt1-PgH199-6.htm
http://www.gpo.gov/fdsys/pkg/CREC-2005-01-26/html/CREC-2005-01-26-pt1-PgH200-3.htm
I feel like I'm doing something wrong in my loop, but I can't figure out where. Any help would be appreciated!
Here is what I'm working with so far:
import urllib
from bs4 import BeautifulSoup
import csv
import re
import pandas as pd
import requests

with open('/Users/test/Dropbox/one_minute_json/Extracting Data/a_2005_test.csv') as f:
    reader = csv.reader(f)
    for row in reader:
        html = urllib.urlopen(row[0])
        r = requests.get(html)
        soup = BeautifulSoup(r, "lxml")
        for item in soup:
            volume = int(re.findall(r"Volume (\d{1,3})", soup.title.text)[0])
            print(volume)
            issue = int(re.findall(r"Issue (\d{1,3})", soup.title.text)[0])
            print(issue)
            date = re.findall(r"\((.*?)\)", soup.title.text)[0]
            print(date)
            page = re.findall(r"\[Page (.*?)]", soup.pre.text.split('\n')[3])[0]
            print(page)
            title = soup.pre.text.split('\n\n ')[1].strip()
            print(title)
            name = soup.pre.text.split('\n ')[2]
            print(name)
            text = soup.pre.text.split(')')[2]
            print(text)
        df = pd.DataFrame()
        df['volume'] = [volume]
        df['issue'] = [issue]
        df['date'] = [date]
        df['page'] = [page]
        df['title'] = [title]
        df['name'] = [name]
        df['text'] = [text]
        df.to_csv('test_scrape.csv', index=False)
Thanks!
Your indentation is completely off, and you are recreating the DataFrame and overwriting the csv file on every pass through the loop. Create the DataFrame once, fill in one row per URL, and write it out after the loop:
from bs4 import BeautifulSoup
import csv
import re
import pandas as pd
import requests

with open('/Users/test/Dropbox/one_minute_json/Extracting Data/a_2005_test.csv') as f:
    reader = csv.reader(f)
    index = 0
    # One DataFrame, created before the loop
    df = pd.DataFrame(columns=["volume", "issue", "date", "page", "title", "name", "text"])
    for row in reader:
        # requests.get takes the URL string directly; no urllib needed
        r = requests.get(row[0])
        soup = BeautifulSoup(r.text, "lxml")
        volume = int(re.findall(r"Volume (\d{1,3})", soup.title.text)[0])
        issue = int(re.findall(r"Issue (\d{1,3})", soup.title.text)[0])
        date = re.findall(r"\((.*?)\)", soup.title.text)[0]
        page = re.findall(r"\[Page (.*?)]", soup.pre.text.split('\n')[3])[0]
        title = soup.pre.text.split('\n\n ')[1].strip()
        name = soup.pre.text.split('\n ')[2]
        text = soup.pre.text.split(')')[2]
        # One row per URL, appended to the same DataFrame
        df.loc[index] = [volume, issue, date, page, title, name, text]
        index += 1

# Written once, after all URLs have been scraped
df.to_csv('test_scrape.csv', index=False)
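The same "collect everything, then write once" pattern works without pandas at all: accumulate one record per page in a plain list and write it with the standard csv module after the loop. A minimal sketch, with the network fetch and BeautifulSoup step stubbed out as a hypothetical parse_record helper that only extracts volume, issue, and date from a page title string:

```python
import csv
import io
import re

def parse_record(title_text):
    # Hypothetical stand-in for the scraping step: extract
    # volume, issue, and date from a page <title> string
    # using the same regexes as the code above.
    volume = int(re.findall(r"Volume (\d{1,3})", title_text)[0])
    issue = int(re.findall(r"Issue (\d{1,3})", title_text)[0])
    date = re.findall(r"\((.*?)\)", title_text)[0]
    return [volume, issue, date]

# One parsed record per scraped page, accumulated in a list
# (in the real script this loop would iterate over the URLs).
titles = [
    "Congressional Record Volume 151, Issue 8 (January 26, 2005)",
    "Congressional Record Volume 151, Issue 9 (January 27, 2005)",
]
rows = [parse_record(t) for t in titles]

# Written once, after the loop, so no row ever overwrites another.
# io.StringIO stands in for open('test_scrape.csv', 'w', newline='').
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["volume", "issue", "date"])
writer.writerows(rows)
print(buf.getvalue())
```

This avoids the row-by-row df.loc assignments, which get slow as the frame grows; with 19,000 URLs, building a list first and writing once at the end is noticeably cheaper.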