Fetching information from the different links on a web page and writing them to a .xls file using pandas and bs4 in Python
I am a beginner in Python programming. I am practising web scraping with the bs4 module in Python.

I have extracted some fields from a web page, but when I try to write them to a .xls file, the file stays empty except for the headings. Please tell me where I am going wrong and, if possible, suggest what to do.
import requests
from bs4 import BeautifulSoup as bs
import pandas as pd

res = requests.get('https://rwbj.com.au/find-an-agent.html')
soup = bs(res.content, 'lxml')

data = soup.find_all("div", {"class": "fluidgrid-cell fluidgrid-cell-2"})

records = []
for item in data:
    name = item.find('h3', class_='heading').text.strip()
    phone = item.find('a', class_='text text-link text-small').text.strip()
    email = item.find('a', class_='text text-link text-small')['href']
    title = item.find('div', class_='text text-small').text.strip()
    location = item.find('div', class_='text text-small').text.strip()
    records.append({'Names': name, 'Title': title, 'Email': email, 'Phone': phone, 'Location': location})

df = pd.DataFrame(records, columns=['Names', 'Title', 'Phone', 'Email', 'Location'])
df = df.drop_duplicates()
df.to_excel(r'C:\Users\laptop\Desktop\R&W.xls', sheet_name='MyData2', index=False, header=True)
If you don't want to use selenium, you can make the same POST request the web page makes. This will give you an XML response, which you can parse with BeautifulSoup to get the output you need.

We can use the Network tab in the browser's inspect tool to find the request being made and the form data it carries. Then we have to make the same request with python-requests and parse the output.
import requests
from bs4 import BeautifulSoup
import pandas as pd

number_of_agents_required = 20  # they only have 20 on the site
payload = {
    'act': 'act_fgxml',
    '15[offset]': 0,
    '15[perpage]': number_of_agents_required,
    'require': 0,
    'fgpid': 15,
    'ajax': 1
}

r = requests.post('https://www.rwbj.com.au/find-an-agent.html', data=payload)
soup = BeautifulSoup(r.text, 'lxml')

records = []
for row in soup.find_all('row'):
    name = row.find('name').text
    title = row.position.text.replace('&amp;', '&')
    email = row.email.text
    phone = row.phone.text
    location = row.office.text
    records.append([name, title, phone, email, location])

df = pd.DataFrame(records, columns=['Names', 'Title', 'Phone', 'Email', 'Location'])
df.to_excel('R&W.xls', sheet_name='MyData2', index=False, header=True)
Output:
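The parsing step above can be illustrated offline. The snippet below feeds BeautifulSoup a hand-written sample in the assumed shape of the XML response (the tag names `row`, `name`, `position`, `email`, `phone`, and `office` mirror the answer's code, but the sample data is invented):

```python
from bs4 import BeautifulSoup

# Invented sample mimicking the assumed shape of the XML response.
sample = """
<rows>
  <row><name>Jane Doe</name><position>Sales &amp; Leasing</position>
       <email>jane@example.com</email><phone>0000 000 000</phone>
       <office>Brisbane</office></row>
</rows>
"""

soup = BeautifulSoup(sample, 'html.parser')
records = []
for row in soup.find_all('row'):
    records.append([
        row.find('name').text,
        row.position.text,  # html.parser already decodes entities such as &amp;
        row.email.text,
        row.phone.text,
        row.office.text,
    ])

print(records)
```

Attribute access such as `row.position` is BeautifulSoup shorthand for `row.find('position')`, so each field comes out of the first matching child tag.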
You could use a method like selenium to allow the JavaScript-rendered content to load. Then you can grab the page_source to continue with your script. I have purposely kept your script as-is and only added new lines to wait for the content.

You could run selenium headless, or switch to using HTMLSession instead.
from bs4 import BeautifulSoup as bs
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd

d = webdriver.Chrome()
d.get('https://rwbj.com.au/find-an-agent.html')
WebDriverWait(d, 10).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "h3")))
soup = bs(d.page_source, 'lxml')
d.quit()

data = soup.find_all("div", {"class": "fluidgrid-cell fluidgrid-cell-2"})

records = []
for item in data:
    name = item.find('h3', class_='heading').text.strip()
    phone = item.find('a', class_='text text-link text-small').text.strip()
    email = item.find('a', class_='text text-link text-small')['href']
    title = item.find('div', class_='text text-small').text.strip()
    location = item.find('div', class_='text text-small').text.strip()
    records.append({'Names': name, 'Title': title, 'Email': email, 'Phone': phone, 'Location': location})

df = pd.DataFrame(records, columns=['Names', 'Title', 'Phone', 'Email', 'Location'])
print(df)
Depending on whether every item is present for each person, I might consider something like:
from bs4 import BeautifulSoup as bs
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.chrome.options import Options
import pandas as pd
options = Options()
options.headless = True
d = webdriver.Chrome(options = options)
d.get('https://rwbj.com.au/find-an-agent.html')
WebDriverWait(d,10).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "h3")))
soup = bs(d.page_source, 'lxml')
d.quit()
names = [item.text for item in soup.select('h3')]
titles = [item.text for item in soup.select('h3 ~ div:nth-of-type(1)')]
tels = [item.text for item in soup.select('h3 + a')]
emails = [item['href'] for item in soup.select('h3 ~ a:nth-of-type(2)')]
locations = [item.text for item in soup.select('h3 ~ div:nth-of-type(2)')]
records = list(zip(names, titles, tels, emails, locations))
df = pd.DataFrame(records,columns=['Names','Title','Phone','Email','Location'])
print(df)
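If some agents are missing a field, positional selectors like the ones above can get misaligned, because the zipped lists end up with different lengths. A minimal sketch of a more defensive approach, run here against an invented HTML fragment rather than the live site, extracts per card and falls back to None when a tag is absent:

```python
from bs4 import BeautifulSoup

def text_or_none(parent, selector):
    """Return the stripped text of the first match inside parent, or None."""
    tag = parent.select_one(selector)
    return tag.text.strip() if tag else None

# Invented fragment: the first agent card has no phone link.
html = """
<div class="card"><h3>Jane Doe</h3>
  <div>Sales Agent</div>
  <a href="mailto:jane@example.com">jane@example.com</a>
</div>
<div class="card"><h3>John Roe</h3>
  <div>Property Manager</div>
  <a href="tel:0000">0000 000 000</a>
  <a href="mailto:john@example.com">john@example.com</a>
</div>
"""

soup = BeautifulSoup(html, 'html.parser')
records = []
for card in soup.select('div.card'):
    records.append({
        'Names': text_or_none(card, 'h3'),
        'Title': text_or_none(card, 'h3 + div'),
        'Phone': text_or_none(card, 'a[href^="tel:"]'),
        'Email': text_or_none(card, 'a[href^="mailto:"]'),
    })

print(records)
```

Because each record is built from a single card, a missing phone simply yields None in that row instead of shifting every later value into the wrong column.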