
PYTHON: appending a dataframe in a loop

I am trying to retrieve stock information from 2 different URLs and write the information to a pandas dataframe. However, I keep getting an error. Can someone help me? I'm quite new to Python, so my code might look ugly :D

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By
import os
import requests
from bs4 import BeautifulSoup
import pandas as pd



headers= {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:87.0) Gecko/20100101 Firefox/87.0',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'Accept-Language': 'en-US,en;q=0.5',
    'Connection': 'keep-alive',
    'Upgrade-Insecure-Requests': '1',
    'Cache-Control': 'max-age=0'
}

PATH='C:\Program Files (x86)\chromedriver.exe'

options = Options()
options = webdriver.ChromeOptions()
options.add_argument('headless')
options.add_argument("--window-size=2550,1440")
s = Service('C:\Program Files (x86)\chromedriver.exe')
driver = webdriver.Chrome(PATH, options=options)
driver.implicitly_wait(10)

# create a dataframe
dn=[]

def accept_cookies():
    try:
        driver.find_element(By.ID, 'accept-choices').click()
    except:
        print('fu')

stocklist=["FB","KLIC"]
for x in stocklist:
    url = f"https://stockanalysis.com/stocks/{x}/financials/"
    driver.get(url)
    driver.implicitly_wait(10)
    accept_cookies()
    driver.implicitly_wait(10)
    driver.find_element(By.XPATH, "//span[text()='Quarterly']").click()
    xlwriter = pd.ExcelWriter(f'financial statements1.xlsx', engine='xlsxwriter')
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    df = pd.read_html(str(soup), attrs={'id': 'financial-table'})[0]
    new_df = pd.concat(df)
    dn.to_excel(xlwriter, sheet_name='key', index=False)
    xlwriter.save()

pd.concat needs a list of objects to concatenate, but you only gave it df.

So I think you should replace pd.concat(df) with pd.concat([df, new_df]) and have new_df = pd.DataFrame() before the for loop.
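
A minimal, self-contained sketch of that pattern (the small per-ticker frames here are dummy stand-ins for the scraped tables, just to show how new_df grows across the loop):

import pandas as pd

# Start from an empty frame and concatenate each per-ticker frame onto it.
new_df = pd.DataFrame()

for x in ["FB", "KLIC"]:
    # Dummy frame standing in for pd.read_html(...)[0] for ticker x.
    df = pd.DataFrame({"ticker": [x], "revenue": [100.0]})
    new_df = pd.concat([new_df, df])

print(new_df)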

If the .read_html() part works fine, you should push each df into a list of dataframes:

dflist =[]

for x in stocklist:
    url = f"https://stockanalysis.com/stocks/{x}/financials/"
    driver.get(url)
    driver.implicitly_wait(10)
    accept_cookies()
    driver.implicitly_wait(10)
    driver.find_element(By.XPATH, "//span[text()='Quarterly']").click()
    
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    dflist.append(pd.read_html(str(soup), attrs={'id': 'financial-table'})[0])

Once the iteration is done, you can simply concatenate the list of dataframes into one:

xlwriter = pd.ExcelWriter(f'financial statements1.xlsx', engine='xlsxwriter')
pd.concat(dflist).to_excel(xlwriter, sheet_name='key', index=False)
xlwriter.save()
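
If you also want to keep track of which ticker each block of rows came from, pd.concat accepts a keys argument; a small optional variant using the same dflist and stocklist names as above:

# Label each scraped table with its ticker; this adds a 'Ticker' level
# to the resulting index so the rows stay identifiable after concatenation.
combined = pd.concat(dflist, keys=stocklist, names=['Ticker'])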

Example

dflist =[]

for x in stocklist:
    url = f"https://stockanalysis.com/stocks/{x}/financials/"
    driver.get(url)
    driver.implicitly_wait(10)
    accept_cookies()
    driver.implicitly_wait(10)
    driver.find_element(By.XPATH, "//span[text()='Quarterly']").click()
    
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    dflist.append(pd.read_html(str(soup), attrs={'id': 'financial-table'})[0])
    
xlwriter = pd.ExcelWriter(f'financial statements1.xlsx', engine='xlsxwriter')
pd.concat(dflist).to_excel(xlwriter, sheet_name='key', index=False)
xlwriter.save()
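
One side note: ExcelWriter.save() has been deprecated in recent pandas releases (and removed in pandas 2.0), so on newer versions the final write can be done with a with block instead, for example:

# Using ExcelWriter as a context manager saves and closes the file automatically.
with pd.ExcelWriter('financial statements1.xlsx', engine='xlsxwriter') as xlwriter:
    pd.concat(dflist).to_excel(xlwriter, sheet_name='key', index=False)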
