
How to get a table from a Power BI dashboard using web scraping in R

I am working on a data-extraction task in R. The data is distributed through a Power BI dashboard, which makes retrieving it quite cumbersome. I found a promising starting point here:

Scraping a website's Power BI dashboard using R

But I am not sure how to navigate the page to reach the visual components and extract the tables. My code so far is:

library(wdman)
library(RSelenium)
library(xml2)
library(selectr)
library(tidyverse)
library(rvest)

# use wdman/RSelenium to start a Selenium server with a Firefox client
driver <- rsDriver(
  port = 4445L,
  browser = "firefox"
)
# rsDriver() returns both the server and an already-open client; keep the client
remDr <- driver$client

# navigate to the site you wish to analyze
report_url <- "https://app.powerbi.com/view?r=eyJrIjoiOGI5Yzg2MGYtZmNkNy00ZjA5LTlhYTYtZTJjNjg2NTY2YTlmIiwidCI6ImI1NDE0YTdiLTcwYTYtNGUyYi05Yzc0LTM1Yjk0MDkyMjk3MCJ9"
remDr$navigate(report_url)

# fetch the data
data_table <- read_html(remDr$getPageSource()[[1]]) %>%
  querySelector("div.pivotTable")

The Selenium session works, but I don't know how to get to my table:

[screenshot: the rendered Power BI report page]

The blue arrow marks the table I want; afterwards I need to move to the other report pages to extract the remaining tables. But I suppose that once it works for the first page, the other pages will be the same.

Thanks a lot.

These tables are a bit tricky, because new rows only appear in the page source once scrolling has brought them into view. My solution collects the rows one at a time, appending each to a dataframe, and scrolls whenever it runs out of rendered rows. RSelenium proved problematic for this task, so here is the result in regular Python Selenium, if that works for you:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
import pandas as pd

driver = webdriver.Chrome()

driver.get("https://app.powerbi.com/view?r=eyJrIjoiOGI5Yzg2MGYtZmNkNy00ZjA5LTlhYTYtZTJjNjg2NTY2YTlmIiwidCI6ImI1NDE0YTdiLTcwYTYtNGUyYi05Yzc0LTM1Yjk0MDkyMjk3MCJ9")

def scrape_powerbi_table(table_xpath):
    # the clickable scroll control at the bottom-right of the table visual
    scroll_button = driver.find_element(By.XPATH, table_xpath + "/div/div/div[2]/div[4]/div[2]")
    # column headers of the visual
    col_names = [i.text for i in driver.find_elements(By.XPATH, table_xpath + "/div/div/div[2]/div[1]/div[2]/div[2]/div/div")]
    df = pd.DataFrame(columns=col_names)
    row_count = 2  # aria-rowindex is 1-based and row 1 is the header row
    while True:
        data = driver.find_elements(By.XPATH, table_xpath + "/div/div/div[2]/div[1]/div[4]/div/div[@aria-rowindex='" + str(row_count) + "']/div")
        current_row = [i.get_attribute("innerHTML") for i in data][1:]
        if not current_row:
            # the row is not rendered yet: scroll down and look again
            try:
                for _ in range(10):
                    scroll_button.click()
                data = driver.find_elements(By.XPATH, table_xpath + "/div/div/div[2]/div[1]/div[4]/div/div[@aria-rowindex='" + str(row_count) + "']/div")
                current_row = [i.get_attribute("innerHTML") for i in data][1:]
            except Exception:
                break
        if not current_row:
            break  # still nothing after scrolling, so the table is exhausted
        df.loc[len(df)] = current_row
        row_count += 1
    return df

df1 = scrape_powerbi_table("//*[@id='pvExplorationHost']/div/div/exploration/div/explore-canvas/div/div[2]/div/div[2]/div[2]/visual-container-repeat/visual-container[8]/transform/div/div[3]/div/visual-modern")
next_button = driver.find_element(By.XPATH, "//*[@id='embedWrapperID']/div[2]/logo-bar/div/div/div/logo-bar-navigation/span/button[2]")
next_button.click()
df2 = scrape_powerbi_table("//*[@id='pvExplorationHost']/div/div/exploration/div/explore-canvas/div/div[2]/div/div[2]/div[2]/visual-container-repeat/visual-container[8]/transform/div/div[3]/div/visual-modern")
df3 = scrape_powerbi_table("//*[@id='pvExplorationHost']/div/div/exploration/div/explore-canvas/div/div[2]/div/div[2]/div[2]/visual-container-repeat/visual-container[9]/transform/div/div[3]/div/visual-modern")
next_button.click()
next_button.click()
df4 = scrape_powerbi_table("//*[@id='pvExplorationHost']/div/div/exploration/div/explore-canvas/div/div[2]/div/div[2]/div[2]/visual-container-repeat/visual-container[5]/transform/div/div[3]/div/visual-modern")
df5 = scrape_powerbi_table("//*[@id='pvExplorationHost']/div/div/exploration/div/explore-canvas/div/div[2]/div/div[2]/div[2]/visual-container-repeat/visual-container[7]/transform/div/div[3]/div/visual-modern")
next_button.click()
df6 = scrape_powerbi_table("//*[@id='pvExplorationHost']/div/div/exploration/div/explore-canvas/div/div[2]/div/div[2]/div[2]/visual-container-repeat/visual-container[9]/transform/div/div[3]/div/visual-modern")
df7 = scrape_powerbi_table("//*[@id='pvExplorationHost']/div/div/exploration/div/explore-canvas/div/div[2]/div/div[2]/div[2]/visual-container-repeat/visual-container[10]/transform/div/div[3]/div/visual-modern")
next_button.click()
next_button.click()
df8 = scrape_powerbi_table("//*[@id='pvExplorationHost']/div/div/exploration/div/explore-canvas/div/div[2]/div/div[2]/div[2]/visual-container-repeat/visual-container[2]/transform/div/div[3]/div/visual-modern")
next_button.click()
df9 = scrape_powerbi_table("//*[@id='pvExplorationHost']/div/div/exploration/div/explore-canvas/div/div[2]/div/div[2]/div[2]/visual-container-repeat/visual-container[5]/transform/div/div[3]/div/visual-modern")
df10 = scrape_powerbi_table("//*[@id='pvExplorationHost']/div/div/exploration/div/explore-canvas/div/div[2]/div/div[2]/div[2]/visual-container-repeat/visual-container[6]/transform/div/div[3]/div/visual-modern")
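If you want to write the scraped tables out yourself, each dataframe can be exported with pandas `to_csv`. A minimal, self-contained sketch — the sample frames and file names below are hypothetical stand-ins for the `df1`..`df10` produced above:

```python
import pandas as pd

# hypothetical stand-ins for the dataframes returned by scrape_powerbi_table
frames = {
    "table1": pd.DataFrame({"Region": ["North", "South"], "Cases": [10, 20]}),
    "table2": pd.DataFrame({"Region": ["East"], "Cases": [5]}),
}

# write each table to its own csv file, omitting the pandas index column
for name, df in frames.items():
    df.to_csv(f"{name}.csv", index=False)
```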

Also, for your convenience, here are the ten tables as csv files. Let me know if this works:

https://mega.nz/folder/LtVDiCyQ#5iW1mkd1VVTmcPApeqfGFA
