
How do I web-scrape a JSP with Python, Selenium and BeautifulSoup?

I am an absolute beginner attempting web scraping with Python. I am trying to extract the locations of ATMs from this URL:

https://www.visa.com/atmlocator/mobile/index.jsp#(page:results,params:(query:'Tokyo,%20Japan'))

using the following code:

#Script to scrape locations and addresses from VISA's ATM locator


# import the necessary libraries (to be installed if not available):

from selenium import webdriver
from bs4 import BeautifulSoup
import pandas as pd


#ChromeDriver
#(see https://chromedriver.chromium.org/getting-started as reference)

driver = webdriver.Chrome("C:/Users/DefaultUser/Local Settings/Application Data/Google/Chrome/Application/chromedriver.exe")

offices = []   # list of branch/ATM names
addresses = [] # list of branch/ATM addresses
driver.get("https://www.visa.com/atmlocator/mobile/index.jsp#(page:results,params:(query:'Tokyo,%20Japan'))") 


content = driver.page_source
soup = BeautifulSoup(content, features = "lxml")


#the following code extracts all the content inside the tags displaying the information requested

for a in soup.findAll('li',attrs={'class':'visaATMResultListItem'}): 
    name=a.find('li', attrs={'class':'data-label'}) 
    address=a.find('li', attrs={'class':'data-label'}) 
    offices.append(name.text)
    addresses.append(address.text)


#the next line builds a dataframe from the extracted results

df = pd.DataFrame({'Office':offices,'Address':addresses})


#the next line displays the dataframe content

print(df)


#export data to .CSV file named 'branches.csv'
with open('branches.csv', 'a') as f:
    df.to_csv(f, header=True)

The script seems to work at first, since Chromedriver launches and shows the requested results in the browser, but no results are returned:

Empty DataFrame
Columns: [Office, Address]
Index: []
Process finished with exit code 0

Maybe I made a mistake in choosing the selectors?

Thank you very much for your help.

The problem is with the locators. Use:

for a in soup.findAll('li',attrs={'class':'visaATMResultListItem'}): 
    name = a.find('p', attrs={'class':'visaATMPlaceName '}) 
    address = a.find('p', attrs={'class':'visaATMAddress'}) 
    offices.append(name.text)
    addresses.append(address.text)
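As a quick check, it can help to confirm which tags and classes are actually present in the rendered page before picking selectors. A minimal sketch, assuming `soup` already holds the parsed page source from the script above:

# Hypothetical check: list the tag names and classes inside the first
# result item, to verify which selectors really exist on the page.
first_item = soup.find('li', attrs={'class': 'visaATMResultListItem'})
if first_item is not None:
    for tag in first_item.find_all(True):    # True matches every tag
        print(tag.name, tag.get('class'))    # e.g. p ['visaATMAddress']
else:
    print('No result items found - the page may not have finished rendering.')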
A complete working example, using headless Firefox:

from selenium import webdriver
from selenium.webdriver.firefox.options import Options
import time
from bs4 import BeautifulSoup
import csv

options = Options()
options.add_argument('--headless')

driver = webdriver.Firefox(options=options)
driver.get("https://www.visa.com/atmlocator/mobile/index.jsp#(page:results,params:(query:'Tokyo,%20JAPAN'))")
time.sleep(2)

soup = BeautifulSoup(driver.page_source, 'html.parser')

na = []
addr = []
for name in soup.findAll("a", {'class': 'visaATMPlaceLink'}):
    na.append(name.text)
for add in soup.findAll("p", {'class': 'visaATMAddress'}):
    addr.append(add.get_text(strip=True, separator=" "))

with open('out.csv', 'w', newline="") as f:
    writer = csv.writer(f)
    writer.writerow(['Name', 'Address'])
    for _na, _addr in zip(na, addr):
        writer.writerow([_na, _addr])

driver.quit()
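One possible refinement, not part of the original answer: the fixed `time.sleep(2)` can be replaced with an explicit wait, so the script only continues once the result list has actually rendered. A minimal sketch, assuming the result items keep the `visaATMResultListItem` class:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

# Wait up to 10 seconds for at least one result item to appear,
# then hand the rendered page source to BeautifulSoup as before.
WebDriverWait(driver, 10).until(
    EC.presence_of_all_elements_located((By.CLASS_NAME, 'visaATMResultListItem'))
)
soup = BeautifulSoup(driver.page_source, 'html.parser')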
