
Unable to get all links from multiple pages with an unchanging URL

I want to get all the links from 10 pages of results, but I am unable to click the link to the second page. The URL stays the same: https://10times.com/search?cx=partner-pub-8525015516580200%3Avtujn0s4zis&cof=FORid%3A10&ie=ISO-8859-1&q=%22Private+Equity%22&searchtype=All

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import bs4

from selenium import webdriver
import time

url = "https://10times.com/search?cx=partner-pub-8525015516580200%3Avtujn0s4zis&cof=FORid%3A10&ie=ISO-8859-1&q=%22Private+Equity%22&searchtype=All"
driver = webdriver.Chrome("C:\\Users\Ritesh\PycharmProjects\BS\drivers\chromedriver.exe")
driver.get(url)

def getnames(driver):
    soup = bs4.BeautifulSoup(driver.page_source, 'lxml')
    sink = soup.find("div", {"class": "gsc-results gsc-webResult"})
    links = sink.find_all('a')
    for link in links:
        try:
            print(link['href'])
        except:
            print("")

while True:
    getnames(driver)
    time.sleep(5)
    nextpage = driver.find_element_by_link_text("2")
    nextpage.click()
    time.sleep(2)

Please help me solve this problem.
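
A likely reason the click fails (this is an assumption, based on the page rendering its results and pagination dynamically through a Google Custom Search widget) is that find_element_by_link_text("2") runs before the pagination link exists or before it is clickable. A minimal sketch of waiting for the link before clicking it, using the same Selenium 3 style API and the driver object from the code above:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait up to 20 seconds for the "2" pagination link to become clickable
# (assumes `driver` is the instance created in the snippet above)
wait = WebDriverWait(driver, 20)
nextpage = wait.until(EC.element_to_be_clickable((By.LINK_TEXT, "2")))
# Scroll the link into view before clicking, in case it is below the fold
driver.execute_script("arguments[0].scrollIntoView(true);", nextpage)
nextpage.click()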

Since the page contains dynamic elements, you will need to use Selenium. The code below will get all the links from each page:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait 
from selenium.webdriver.support import expected_conditions as EC
import time

url = "https://10times.com/search?cx=partner-pub-8525015516580200%3Avtujn0s4zis&cof=FORid%3A10&ie=ISO-8859-1&q=%22Private+Equity%22&searchtype=All"
driver = webdriver.Chrome("C:\\Users\Ritesh\PycharmProjects\BS\drivers\chromedriver.exe")
driver.get(url)

WebDriverWait(driver, 20).until(EC.presence_of_element_located((By.XPATH, """//*[@id="___gcse_0"]/div/div/div/div[5]/div[2]/div/div/div[2]/div[11]/div""")))


pages_links = driver.find_elements_by_xpath("""//*[@id="___gcse_0"]/div/div/div/div[5]/div[2]/div/div/div[2]/div[11]/div/div""")

all_urls = []

for page_index in range(len(pages_links)):

    WebDriverWait(driver, 20).until(
        EC.presence_of_element_located((By.XPATH, """//*[@id="___gcse_0"]/div/div/div/div[5]/div[2]/div/div/div[2]/div[11]/div""")))

    pages_links = driver.find_elements_by_xpath("""//*[@id="___gcse_0"]/div/div/div/div[5]/div[2]/div/div/div[2]/div[11]/div/div""")

    page_link = pages_links[page_index]
    print("getting links for page:", page_link.text)

    page_link.click()

    time.sleep(1)


    # wait until all links are loaded
    WebDriverWait(driver, 20).until(
        EC.presence_of_element_located((By.XPATH, """//*[@id="___gcse_0"]/div/div/div/div[5]/div[2]""")))

    first_link = driver.find_element_by_xpath("""//*[@id="___gcse_0"]/div/div/div/div[5]/div[2]/div/div/div[1]/div[1]/div[1]/div/a""")

    results_links = driver.find_elements_by_xpath("""//*[@id="___gcse_0"]/div/div/div/div[5]/div[2]/div/div/div[2]/div/div[1]/div[1]/div/a""")

    urls = [first_link.get_attribute("data-cturl")] + [l.get_attribute("data-cturl") for l in results_links]

    all_urls = all_urls + urls


driver.quit()

You can use this code as it is, or try to combine it with the code you already have, as sketched below.
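
For example, one rough way to combine the two (a sketch, assuming the imports, driver, url and getnames() function from the question are already in place, and that the pagination XPath used above still matches the page):

# Sketch: drive the pagination with the XPath from this answer and parse each
# page with the getnames() function from the question.
pagination_xpath = """//*[@id="___gcse_0"]/div/div/div/div[5]/div[2]/div/div/div[2]/div[11]/div/div"""

driver.get(url)
WebDriverWait(driver, 20).until(
    EC.presence_of_element_located((By.XPATH, pagination_xpath)))

for page_index in range(len(driver.find_elements_by_xpath(pagination_xpath))):
    # re-locate the pagination links on every iteration to avoid stale references
    pages_links = driver.find_elements_by_xpath(pagination_xpath)
    pages_links[page_index].click()
    time.sleep(2)        # crude wait for the new page of results to render
    getnames(driver)     # extract the hrefs with BeautifulSoup

driver.quit()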

Note that it does not take the ad links into account, since I assume you don't need them, right?

Let me know if this helps.
