Export multiple scraped files in Python from Beautiful Soup to a CSV file

I have a csv list of URLs that I want to scrape and compile into a single csv file. I want the data from each URL to be one row in the csv file. I have about 19,000 URLs to scrape, but I'm trying to work it out with just a few first. I can scrape the files and view them in the terminal, but when I export them to a csv file, only the last file shows up.

The URLs appear in the csv file as:

http://www.gpo.gov/fdsys/pkg/CREC-2005-01-26/html/CREC-2005-01-26-pt1-PgH199-6.htm

http://www.gpo.gov/fdsys/pkg/CREC-2005-01-26/html/CREC-2005-01-26-pt1-PgH200-3.htm

I feel like I'm doing something wrong in my loop, but I can't figure out where. Any help would be greatly appreciated!

Here is what I'm working with so far:

import urllib
from bs4 import BeautifulSoup
import csv
import re
import pandas as pd
import requests

with open('/Users/test/Dropbox/one_minute_json/Extracting Data/a_2005_test.csv') as f:
reader = csv.reader(f)

for row in reader:
    html = urllib.urlopen(row[0])
    r = requests.get(html)
    soup = BeautifulSoup(r, "lxml")

for item in soup:

volume = int(re.findall(r"Volume (\d{1,3})", soup.title.text)[0])
print(volume)

issue = int(re.findall(r"Issue (\d{1,3})", soup.title.text)[0])
print(issue)



date = re.findall(r"\((.*?)\)", soup.title.text)[0]
print(date)

page = re.findall(r"\[Page (.*?)]", soup.pre.text.split('\n')[3])[0]
print(page)

title = soup.pre.text.split('\n\n  ')[1].strip()
print(title)

name = soup.pre.text.split('\n ')[2]
print(name)

text = soup.pre.text.split(')')[2]
print(text)

df = pd.DataFrame()
df['volume'] = [volume]
df['issue'] = [issue]
df['date'] = [date]
df['page'] = [page]
df['title'] = [title]
df['name'] = [name]
df['text'] = [text]

df.to_csv('test_scrape.csv', index=False)

Thank you!

Your indentation is completely off: the field extraction and the to_csv call sit outside the loop, so they run only once, on the soup from the last URL, which is why only the last file shows up. You also don't need urllib.urlopen; pass the URL straight to requests.get. The inner for item in soup: loop adds nothing and would append duplicate rows, so it is dropped here. Try the following:

from bs4 import BeautifulSoup
import csv
import re
import pandas as pd
import requests

with open('/Users/test/Dropbox/one_minute_json/Extracting Data/a_2005_test.csv') as f:
    reader = csv.reader(f)

    index = 0
    df = pd.DataFrame(columns=["volume", "issue", "date", "page", "title", "name", "text"])

    for row in reader:
        # Fetch and parse each URL inside the loop.
        r = requests.get(row[0])
        soup = BeautifulSoup(r.text, "lxml")

        # Extract each field once per page.
        volume = int(re.findall(r"Volume (\d{1,3})", soup.title.text)[0])
        issue = int(re.findall(r"Issue (\d{1,3})", soup.title.text)[0])
        date = re.findall(r"\((.*?)\)", soup.title.text)[0]
        page = re.findall(r"\[Page (.*?)]", soup.pre.text.split('\n')[3])[0]
        title = soup.pre.text.split('\n\n  ')[1].strip()
        name = soup.pre.text.split('\n ')[2]
        text = soup.pre.text.split(')')[2]

        # Append one row per URL instead of overwriting the frame.
        df.loc[index] = [volume, issue, date, page, title, name, text]
        index += 1

    # Write the CSV once, after all URLs have been processed.
    df.to_csv('test_scrape.csv', index=False)
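
One design note on scale: with roughly 19,000 URLs, a single page whose title or pre block does not match the expected layout will raise an IndexError or AttributeError and kill the whole run, and growing a DataFrame one row at a time with df.loc gets slow as the frame grows. Below is a minimal sketch of a more defensive variant; the scrape_page helper, the records list, and the set of caught exceptions are illustrative assumptions rather than part of the original answer. It collects one dict per page, skips pages that fail, and builds the DataFrame in a single call:

import csv
import re

import pandas as pd
import requests
from bs4 import BeautifulSoup

def scrape_page(url):
    # Hypothetical helper (not from the original answer): parse one page
    # into a dict of fields, raising if the page does not fit the layout.
    r = requests.get(url, timeout=30)
    r.raise_for_status()
    soup = BeautifulSoup(r.text, "lxml")
    return {
        "volume": int(re.findall(r"Volume (\d{1,3})", soup.title.text)[0]),
        "issue": int(re.findall(r"Issue (\d{1,3})", soup.title.text)[0]),
        "date": re.findall(r"\((.*?)\)", soup.title.text)[0],
        "page": re.findall(r"\[Page (.*?)]", soup.pre.text.split('\n')[3])[0],
        "title": soup.pre.text.split('\n\n  ')[1].strip(),
        "name": soup.pre.text.split('\n ')[2],
        "text": soup.pre.text.split(')')[2],
    }

records = []
with open('/Users/test/Dropbox/one_minute_json/Extracting Data/a_2005_test.csv') as f:
    for row in csv.reader(f):
        try:
            records.append(scrape_page(row[0]))
        except (requests.RequestException, AttributeError, IndexError) as e:
            # Skip pages that fail to download or do not match the layout,
            # instead of losing everything scraped so far.
            print(f"skipping {row[0]}: {e}")

# Build the DataFrame once from the collected dicts and write it out.
pd.DataFrame(records).to_csv('test_scrape.csv', index=False)

Building the frame once from a list of dicts keeps the column order explicit and avoids re-allocating the DataFrame on every inserted row.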
