
Cannot Scroll Webpage with Selenium WebDriver or even JavaScript Executor

from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains
import time

driver = webdriver.Chrome('chromedriver.exe')

driver.get('https://iremedy.com/search?query=Vital%20Signs%20Monitors')

time.sleep(5)

element = driver.find_element(By.CLASS_NAME, 'body').send_keys(Keys.END)

I have tried various methods but none are working. Kindly help me.

This is one way of scrolling that page and loading the items:

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
import pandas as pd
import time as t

chrome_options = Options()
chrome_options.add_argument("--no-sandbox")
chrome_options.add_argument("--disable-notifications")
chrome_options.add_argument("--window-size=1280,720")

webdriver_service = Service("chromedriver/chromedriver")  # path to where you saved the chromedriver binary
browser = webdriver.Chrome(service=webdriver_service, options=chrome_options)
wait = WebDriverWait(browser, 20)

url = 'https://iremedy.com/search?query=Vital%20Signs%20Monitors'
browser.get(url)

items_list = []
while True:
    # count the product cards currently rendered on the page
    elements_on_page = wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, '[class^="card"]')))
    print(len(elements_on_page), 'total items found')
    if len(elements_on_page) > 100:
        print('more than 100 found, stopping')
        break
    # scrolling the footer into view triggers the page's lazy loading of the next batch of items
    footer = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, 'footer[id="footer"]')))
    footer.location_once_scrolled_into_view
    t.sleep(2)

for el in elements_on_page:
    title = el.find_element(By.CSS_SELECTOR, 'h3[class="title"]')
    price = el.find_element(By.CSS_SELECTOR, 'div[class="price"]')
    items_list.append((title.text.strip(), price.text.strip()))

df = pd.DataFrame(items_list, columns=['Item', 'Price'])
print(df)

The result printed in the terminal will be:

10 total items found
20 total items found
20 total items found
30 total items found
30 total items found
40 total items found
50 total items found
60 total items found
70 total items found
80 total items found
90 total items found
100 total items found
110 total items found
more than 100 found, stopping
Item    Price
0   Edan M3A Vital Signs Monitors   $2,714.95
1   M3 Vital Signs Monitors by Edan Instruments $2,476.95
2   Vital Signs Patient Monitors - Touch Screen $2,015.95
3   RVS-100 Advanced Vital Signs Monitors by Riester    $362.95
4   Edan iM80 Vital Signs Patient Monitors  $5,291.95
... ... ...
105 Patient Monitor Connex® Vital Signs Monitoring...   $10,571.95
106 Patient Monitor Connex® Spot Check and Vital S...   $5,089.95
107 Patient Monitor Connex® Spot Check and Vital ...    $5,964.95
108 Patient Monitor X Series® Vital Signs Monitori...   $48,978.95
109 Patient Monitor Connex® Spot Check and Vital S...   $4,391.95
110 rows × 2 columns

I'm breaking the loop once I reach 100 items; you can go higher. It is also worth noting that there is another way to obtain that data: scraping the GraphQL endpoint the page pulls it from (see the sketch further down). Nonetheless, this is how you do it with Selenium. For documentation, please see https://www.selenium.dev/documentation/
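Since the question title also mentions the JavaScript executor: scrolling can be driven that way too. Below is a minimal, untested sketch of that pattern, reusing the browser driver from the script above. Keep in mind that on pages where the scrollable area is an inner container rather than the body, a plain window.scrollTo on the document will appear to do nothing, which may be what the question ran into.

# Minimal sketch (assumes the `browser` object from the script above):
# scroll to the bottom via the JavaScript executor until the page height stops growing.
import time

last_height = browser.execute_script("return document.body.scrollHeight")
while True:
    browser.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(2)  # give the lazy loader time to fetch the next batch of items
    new_height = browser.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break  # height unchanged, assume everything is loaded
    last_height = new_height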
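As for the GraphQL route mentioned above: the items the page renders are fetched from a GraphQL endpoint, and you could request that data directly. The sketch below is only illustrative; the endpoint URL, query shape and field names are placeholders, and the real request should be copied from the browser's network tab (look for a POST to a graphql URL while the items load).

import requests

# Placeholder values -- copy the real endpoint and query from DevTools > Network.
graphql_url = "https://iremedy.com/graphql"   # hypothetical endpoint
query = """
query Search($query: String!, $page: Int!) {
  search(query: $query, page: $page) {
    items { title price }
  }
}
"""  # hypothetical query shape

response = requests.post(
    graphql_url,
    json={"query": query, "variables": {"query": "Vital Signs Monitors", "page": 1}},
    headers={"Content-Type": "application/json"},
    timeout=30,
)
print(response.status_code)
print(response.json())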
