
Fetching information from a web page and writing it into an .xls file using pandas and bs4

I am a beginner to Python programming. I am practicing web scraping using the bs4 module in Python.

I have extracted some fields from a web page, but when I try to write them into an .xls file, the file remains empty except for the headings. Kindly tell me where I am going wrong and, if possible, suggest what should be done.

import requests
from bs4 import BeautifulSoup as bs
import pandas as pd

res = requests.get('https://rwbj.com.au/find-an-agent.html')
soup = bs(res.content, 'lxml')

data = soup.find_all("div",{"class":"fluidgrid-cell fluidgrid-cell-2"})

records = []
name =[]
phone =[]
email=[]
title=[]
location=[]
for item in data:
    name = item.find('h3',class_='heading').text.strip()
    phone = item.find('a',class_='text text-link text-small').text.strip()
    email = item.find('a',class_='text text-link text-small')['href']
    title = item.find('div',class_='text text-small').text.strip()
    location = item.find('div',class_='text text-small').text.strip()

    records.append({'Names': name, 'Title': title, 'Email': email, 'Phone': phone, 'Location': location})

df = pd.DataFrame(records,columns=['Names','Title','Phone','Email','Location'])
df=df.drop_duplicates()
df.to_excel(r'C:\Users\laptop\Desktop\R&W.xls', sheet_name='MyData2', index = False, header=True)

If you don't want to use Selenium, you can make the same POST request that the web page makes. This gives you an XML response, which you can parse with BeautifulSoup to get the output you need.

We can use the network tab of the browser's inspect tool to find the request being made, as well as the form data sent with it.

[screenshot: the network tab showing the POST request and its form data]

Next we make the same request using python-requests and then parse the output.

import requests
from bs4 import BeautifulSoup
import pandas as pd

number_of_agents_required = 20  # they only have 20 on the site
payload = {
    'act': 'act_fgxml',
    '15[offset]': 0,
    '15[perpage]': number_of_agents_required,
    'require': 0,
    'fgpid': 15,
    'ajax': 1
}
records = []
r = requests.post('https://www.rwbj.com.au/find-an-agent.html', data=payload)
soup = BeautifulSoup(r.text, 'lxml')
for row in soup.find_all('row'):
    name = row.find('name').text
    title = row.position.text.replace('&amp;', '&')  # un-escape the encoded ampersand
    email = row.email.text
    phone = row.phone.text
    location = row.office.text
    records.append([name, title, email, phone, location])
df = pd.DataFrame(records, columns=['Names', 'Title', 'Email', 'Phone', 'Location'])
df.to_excel('R&W.xls', sheet_name='MyData2', index=False, header=True)

Output:

[screenshot: the resulting spreadsheet output]
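As a side note on the parsing above: BeautifulSoup lets you reach each field either with find() or by tag-name attribute access. A minimal sketch using a hypothetical fragment shaped like the endpoint's response (one <row> per agent):

from bs4 import BeautifulSoup

# Hypothetical sample in the same shape as the real XML response.
sample = '''
<rows>
  <row>
    <name>Jane Doe</name>
    <position>Sales Agent</position>
    <email>jane.doe@example.com</email>
    <phone>0400 000 000</phone>
    <office>Brisbane</office>
  </row>
</rows>
'''

soup = BeautifulSoup(sample, 'lxml')
row = soup.find('row')
print(row.find('name').text)  # Jane Doe
print(row.position.text)      # Sales Agent -- attribute access returns the first matching child tag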

You could use a method like Selenium to allow JavaScript to render the content. You can then grab the page_source and continue with your script. I have deliberately kept to your script and only added the new lines that wait for the content.

You could run Selenium headless, or switch to using HTMLSession instead (a sketch of that follows the script below).

from bs4 import BeautifulSoup as bs
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd

d = webdriver.Chrome()
d.get('https://rwbj.com.au/find-an-agent.html')

WebDriverWait(d,10).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "h3")))

soup = bs(d.page_source, 'lxml')
d.quit()
data = soup.find_all("div",{"class":"fluidgrid-cell fluidgrid-cell-2"})

records = []
name =[]
phone =[]
email=[]
title=[]
location=[]
for item in data:
    name = item.find('h3',class_='heading').text.strip()
    phone = item.find('a',class_='text text-link text-small').text.strip()
    email = item.find('a',class_='text text-link text-small')['href']
    title = item.find('div',class_='text text-small').text.strip()
    location = item.find('div',class_='text text-small').text.strip()
    records.append({'Names': name, 'Title': title, 'Email': email, 'Phone': phone, 'Location': location})

df = pd.DataFrame(records,columns=['Names','Title','Phone','Email','Location'])
print(df)
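Here is a minimal sketch of the HTMLSession alternative mentioned above, assuming the requests_html package is installed and that its renderer executes this page's JavaScript (render() downloads Chromium on first use):

from bs4 import BeautifulSoup as bs
from requests_html import HTMLSession

session = HTMLSession()
r = session.get('https://rwbj.com.au/find-an-agent.html')
r.html.render()  # run the page's JavaScript before parsing

soup = bs(r.html.html, 'lxml')
data = soup.find_all('div', {'class': 'fluidgrid-cell fluidgrid-cell-2'})
# from here, parse the cards exactly as in the script above
print(len(data))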

Depending on whether all items are present for each person, I might consider something like:

from bs4 import BeautifulSoup as bs
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.chrome.options import Options
import pandas as pd

options = Options()
options.headless = True

d = webdriver.Chrome(options=options)
d.get('https://rwbj.com.au/find-an-agent.html')

WebDriverWait(d,10).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "h3")))

soup = bs(d.page_source, 'lxml')
d.quit()
names = [item.text for item in soup.select('h3')]
titles = [item.text for item in soup.select('h3 ~ div:nth-of-type(1)')]
tels = [item.text for item in soup.select('h3 + a')]
emails = [item['href'] for item in soup.select('h3 ~ a:nth-of-type(2)')]
locations = [item.text for item in soup.select('h3 ~ div:nth-of-type(2)')]
records = list(zip(names, titles, tels, emails, locations))
df = pd.DataFrame(records,columns=['Names','Title','Phone','Email','Location'])
print(df)
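Note that zip() truncates to the shortest list, so if any agent is missing a phone or email, data would silently drop and shift between rows. A variant using itertools.zip_longest does not fix the shift, but it preserves the full row count and surfaces the length mismatch as NaN so you can spot it:

from itertools import zip_longest

# zip_longest pads missing values with None rather than dropping rows,
# so mismatched list lengths show up as NaN in the DataFrame.
records = list(zip_longest(names, titles, tels, emails, locations, fillvalue=None))
df = pd.DataFrame(records, columns=['Names', 'Title', 'Phone', 'Email', 'Location'])
print(df)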
