
Scraping OSHA website using BeautifulSoup

I'm looking for help with two main things: (1) scraping a web page and (2) turning the scraped data into a pandas dataframe (mostly so I can output it as a .csv, but just creating a pandas df is enough for now). Here is what I have done so far for both:

(1) Scraping the website:

  • I am trying to scrape this page: https://www.osha.gov/pls/imis/establishment.inspection_detail?id=1285328.015&id=1284178.015&id=1283809.015&id=1283549.015&id=1282631.015. My end goal is to create a dataframe that ideally contains only the information I am looking for (i.e., I would be able to select only the parts of the site I am interested in for my df); it is okay if I have to pull in all of the data for now.
  • As you can see from the URL, as well as from the ID hyperlinks under "Quick Link Reference" at the top of the page, there are five different records on this page. I want each of these IDs/records to be treated as a row in my pandas df (a minimal fetch-and-parse sketch follows this list).
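
Here is a minimal sketch of fetching and parsing that multi-ID page, assuming the requests and bs4 packages are available; the five IDs are the ones visible in the URL above:

import requests
from bs4 import BeautifulSoup

# The five inspection IDs from the URL / "Quick Link Reference" links.
ids = ['1285328.015', '1284178.015', '1283809.015', '1283549.015', '1282631.015']

# The detail page accepts repeated id parameters, one per inspection record.
base = 'https://www.osha.gov/pls/imis/establishment.inspection_detail'
url = base + '?' + '&'.join('id=' + i for i in ids)

response = requests.get(url)
html_soup = BeautifulSoup(response.text, 'html.parser')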

Edit: Thanks to the helpful comments, I've included an example of what I ultimately want in the table below. The first row represents the column headers/names, and the second row represents the first inspection.

inspection_id   open_date    inspection_type   close_conference   close_case   violations_serious_initial
1285328.015     12/28/2017   referral          12/28/2017         06/21/2018   2

Relying mainly on BeautifulSoup4, I tried a few different options to get at the page elements I am interested in:

# This is meant to give you the first instance of Case Status, which in the case of this page is "CLOSED".

case_status_template = html_soup.find('div', id="maincontain",
    class_="container").div.find('table', class_="table-bordered").find('strong').text

# I wasn't able to get the remaining Case Statuses with find_next_sibling or find_all, so I used a different method:

for table in html_soup.find_all('table', class_= "table-bordered"):
    print(table.text)

# This gave me the output I needed (i.e. the Case Status for all five records on the page), 
# but didn't give me the structure I wanted and didn't really allow me to connect to the other data on the page.

# I was also able to get to the same place with another page element, Inspection Details.
# This is the information reflected on the page after "Inspection: ", directly below Case Status.

insp_details_template = html_soup.find('div', id="maincontain",
    class_="container").div.find('table', class_="table-unbordered")

for table in html_soup.find_all('table', class_="table-unbordered"):
    print(table.text)

# Unfortunately, although I could get these two pieces of information to print,
# I realized I would have a hard time getting the rest of the information for each record.
# I also knew that it would be hard to connect/roll all of these up at the record level.

So, I tried a slightly different approach. By focusing on a version of this page with a single inspection record, I thought maybe I could crack it with the following code:

from requests import get
from bs4 import BeautifulSoup

url = 'https://www.osha.gov/pls/imis/establishment.inspection_detail?id=1285328.015'
response = get(url)
html_soup = BeautifulSoup(response.text, 'html.parser')
first_table = html_soup.find('table', class_="table-bordered")
first_table_rows = first_table.find_all('tr')

for tr in first_table_rows:
    td = tr.find_all('td')
    row = [i.text for i in td]
    print(row)

# Then, actually using pandas to get the data into a df and out as a .csv.

import os
import pandas as pd

dfs_osha = pd.read_html('https://www.osha.gov/pls/imis/establishment.inspection_detail?id=1285328.015', header=1)
for df in dfs_osha:
    print(df)

path = r'~\foo'
# Write each table pandas found to its own .csv so they don't overwrite each other
# (this replaces my original `for df[1,3] in dfs_osha:` loop, which didn't iterate as intended).
for i, df in enumerate(dfs_osha):
    df.to_csv(os.path.join(path, 'osha_output_table{}_012320.csv'.format(i)))

# This worked better, but didn't actually give me all of the data on the page,
# and wouldn't be replicable for the other four inspection records I'm interested in.
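
To make this replicable for the other four records, a minimal sketch (assuming each single-record URL follows the same id= pattern as above) is to loop pd.read_html over the five inspection IDs:

import pandas as pd

ids = ['1285328.015', '1284178.015', '1283809.015', '1283549.015', '1282631.015']
base = 'https://www.osha.gov/pls/imis/establishment.inspection_detail?id={}'

# Read the list of tables pandas finds on each single-record page, keyed by ID.
tables_by_id = {i: pd.read_html(base.format(i), header=1) for i in ids}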

So, finally, I found a really handy example here: https://levelup.gitconnected.com/quick-web-scraping-with-python-beautiful-soup-4dde18468f1f. I tried to work through it, and came up with this code:
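
(Note that all_content_raw_lxml below comes from working through that article; a minimal sketch of how it might have been built, where the lxml parser and the 'container' class are assumptions, is:)

from bs4 import BeautifulSoup
import requests

url = 'https://www.osha.gov/pls/imis/establishment.inspection_detail?id=1285328.015'
soup = BeautifulSoup(requests.get(url).text, 'lxml')
# Collect the top-level container divs; each wraps one chunk of page content.
all_content_raw_lxml = soup.find_all('div', class_='container')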

for elem in all_content_raw_lxml:
    wrappers = elem.find_all('div', class_="row-fluid")
    for x in wrappers:
        case_status = x.find('div', class_="text-center")
        print(case_status)
        insp_details = x.find('div', class_="table-responsive")
        if insp_details is None:  # not every row-fluid wrapper holds a table
            continue
        # Iterate over the actual <tr> tags rather than the div's raw children.
        for tr in insp_details.find_all('tr'):
            td = tr.find_all('td')
            td_row = [i.text for i in td]
            print(td_row)
        violation_items = insp_details.find_next_sibling('div', class_="table-responsive")
        if violation_items is not None:
            for tr in violation_items.find_all('tr'):
                tr_row = [i.text for i in tr.find_all('td')]
                print(tr_row)
        print('---------------')

Unfortunately, I ran into too many errors with this to be able to use it, so I had to set the project aside until I could get some further guidance. Hopefully the code I've shared so far at least shows the effort I put in, even if it doesn't do much toward getting the final output! Thanks.

For this type of page you don't really need beautifulsoup; pandas is enough.

url = 'your url above'
import pandas as pd
#use pandas to read the tables on the page; there are lots of them...
tables = pd.read_html(url)

#Select from this list of tables only those tables you need:
incident = [] #initialize a list of inspections
for i, table in enumerate(tables): #we need to find the index position of this table in the list; more below       
    if table.shape[1]==5: #all relevant tables have this shape
        case = [] #initialize a list of inspection items you are interested in       
        case.append(table.iat[1,0]) #this is the location in the table of this particular item
        case.append(table.iat[1,2].split(' ')[2]) #the string in the cell needs to be cleaned up a bit...
        case.append(table.iat[9,1])
        case.append(table.iat[12,3])
        case.append(table.iat[13,3])
        case.append(tables[i+2].iat[0,1]) #this particular item is in a table 2 positions down from the current one; this is where the index position of the current table comes in handy
        incident.append(case)        


columns = ["inspection_id", "open_date", "inspection_type", "close_conference", "close_case", "violations_serious_initial"]
df2 = pd.DataFrame(incident,columns=columns)
df2 

Output (please forgive the formatting):

    inspection_id    open_date   inspection_type  close_conference  close_case  violations_serious_initial
0   Nr: 1285328.015  12/28/2017  Referral         12/28/2017        06/21/2018  2
1   Nr: 1283809.015  12/18/2017  Complaint        12/18/2017        05/24/2018  5
2   Nr: 1284178.015  12/18/2017  Accident         05/17/2018        09/17/2018  1
3   Nr: 1283549.015  12/13/2017  Referral         12/13/2017        05/22/2018  3
4   Nr: 1282631.015  12/12/2017  Fat/Cat          12/12/2017        11/16/2018  1
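
Since your end goal is a .csv, you can then write the dataframe out directly; the filename here is just a placeholder:

df2.to_csv('osha_inspections.csv', index=False)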
