Scraping Multiple Web Pages using Python
I want to scrape multiple websites that share the same URL pattern, such as https://woollahra.ljhooker.com.au/our-team, https://chinatown.ljhooker.com.au/our-team, and https://bondibeach.ljhooker.com.au/our-team.
I have already written a script that works for the first site, but I am not sure how to tell it to scrape the other two as well.
My code:
from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup
my_url = "https://woollahra.ljhooker.com.au/our-team"
page_html = uReq(my_url).read()  # fetch the page (this line was missing)
page_soup = soup(page_html, "html.parser")
containers = page_soup.findAll("div", {"class": "team-details"})
for container in containers:
    agent_name = container.findAll("div", {"class": "team-name"})
    name = agent_name[0].text
    phone = container.findAll("span", {"class": "phone"})
    mobile = phone[0].text
    print("name: " + name)
    print("mobile: " + mobile)
Is there a way to simply list the varying parts of the URL (woollahra, chinatown, bondibeach) so that the script loops over each web page using the code I have already written?
locations = ['woollahra', 'chinatown', 'bondibeach']
for location in locations:
    my_url = 'https://' + location + '.ljhooker.com.au/our-team'
followed by the rest of your code, which will then run once for each element of the list; you can add more locations later.
You just need a loop:
for team in ["woollahra", "chinatown", "bondibeach"]:
    my_url = "https://{}.ljhooker.com.au/our-team".format(team)
    page_html = uReq(my_url).read()  # fetch each page before parsing
    page_soup = soup(page_html, "html.parser")
    # make sure you indent the rest of the code
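Putting the loop and the original scraping code together, a complete sketch might look like the following. The CSS class names (`team-details`, `team-name`, `phone`) are taken from the question and assumed to be correct for each office's page; the `scrape_team` helper is a hypothetical name, introduced only to keep the loop readable:

```python
from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup


def scrape_team(location):
    """Scrape (name, mobile) pairs for one office subdomain."""
    my_url = "https://{}.ljhooker.com.au/our-team".format(location)
    page_html = uReq(my_url).read()          # fetch the page
    page_soup = soup(page_html, "html.parser")
    results = []
    for container in page_soup.findAll("div", {"class": "team-details"}):
        name = container.findAll("div", {"class": "team-name"})[0].text
        mobile = container.findAll("span", {"class": "phone"})[0].text
        results.append((name, mobile))
    return results


if __name__ == "__main__":
    # loop over the varying part of the URL; add more locations as needed
    for location in ["woollahra", "chinatown", "bondibeach"]:
        for name, mobile in scrape_team(location):
            print("name: " + name)
            print("mobile: " + mobile)
```

Wrapping the scraping code in a function means adding a fourth office later is a one-word change to the list.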