Not Scraping all of the requested data except for one CSS list on one page

I am trying to scrape a webpage, but despite the CSS selector being correct in Chrome's inspector, Selenium does not scrape all of the data: it only scrapes the odds from the first page, as seen below, and then raises an error.

I have re-tested the CSS selector and changed it multiple times, but Selenium (Python) still does not scrape the data correctly.

I also tend to get the following traceback:

Traceback (most recent call last):
  File "C:/Users/Bain3/PycharmProjects/untitled4/Vpalmerbet1.py", line 1365, in <module>
    EC.element_to_be_clickable((By.CSS_SELECTOR, ('.match-pop-market a[href*="/sports/soccer/"]'))))
  File "C:\Users\Bain3\Anaconda3\lib\site-packages\selenium\webdriver\support\wait.py", line 80, in until
    raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message: 
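A TimeoutException from WebDriverWait means that element_to_be_clickable never found a match within the 15-second window: either the locator matches nothing, or the element is in the DOM but never becomes visible and enabled. A minimal sketch to tell the two cases apart (assuming the driver is already on the soccer page):

from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

try:
    # presence_of_element_located succeeds as soon as the node exists in the
    # DOM, even if it is hidden, so passing here while the clickable wait
    # times out points to a visibility/overlay problem, not a bad locator
    WebDriverWait(driver, 15).until(EC.presence_of_element_located(
        (By.CSS_SELECTOR, '.match-pop-market a[href*="/sports/soccer/"]')))
    print("element is in the DOM but may not be clickable yet")
except TimeoutException:
    print("locator matched nothing within 15 seconds")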

I have tried changing the CSS selector as well as using an XPath equivalent:

#clickMe = wait(driver, 15).until(EC.element_to_be_clickable((By.CSS_SELECTOR, ('.match-pop-market a[href*="/sports/soccer/"]'))))

clickMe = wait(driver, 15).until(EC.element_to_be_clickable((By.XPATH, ("//*[@class='match-pop-market']//a[href*='/sports/soccer/']"))))

You can see that Chrome's inspector detects this CSS selector:

(screenshot: Chrome DevTools Elements panel showing the selector matching the link)
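Note that DevTools matching a selector does not guarantee Selenium will find it at the moment the script runs, since the odds are loaded dynamically. A quick sanity check is to compare what the live DOM matches against what Selenium currently sees (a sketch, assuming the same driver session and selector as the code below):

count = driver.execute_script(
    """return document.querySelectorAll('.match-pop-market a[href*="/sports/soccer/"]').length""")
print("browser matches:", count)
print("selenium matches:", len(driver.find_elements_by_css_selector(
    '.match-pop-market a[href*="/sports/soccer/"]')))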

My full code is:

import os
import time
import csv
from random import shuffle

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait as wait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.set_window_size(1024, 600)
driver.maximize_window()

try:
    os.remove('vtg121.csv')
except OSError:
    pass

driver.get('https://www.palmerbet.com/sports/soccer')

#SCROLL_PAUSE_TIME = 0.5


#clickMe = wait(driver, 3).until(EC.element_to_be_clickable((By.XPATH, ('//*[@id="TopPromotionBetNow"]'))))
#if driver.find_element_by_css_selector('#TopPromotionBetNow'):
    #driver.find_element_by_css_selector('#TopPromotionBetNow').click()

#last_height = driver.execute_script("return document.body.scrollHeight")

#while True:

    #driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")


    #time.sleep(SCROLL_PAUSE_TIME)


    #new_height = driver.execute_script("return document.body.scrollHeight")
    #if new_height == last_height:
        #break
    #last_height = new_height

time.sleep(1)

# note: "filter_labe" is a partial class match; contains() also matches "filter_label"
clickMe = wait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '//*[contains(@class,"filter_labe")]')))
clickMe.click()
time.sleep(0)
clickMe = wait(driver, 10).until(EC.element_to_be_clickable((By.XPATH,'(//*[contains(@class,"filter_labe")])')))
options = driver.find_elements_by_xpath('//*[contains(@class,"filter_labe")]')

indexes = [index for index in range(len(options))]
shuffle(indexes)
for index in indexes:
    time.sleep(0)
    #driver.get('https://www.bet365.com.au/#/AS/B1/')
    clickMe1 = wait(driver, 10).until(EC.element_to_be_clickable((By.XPATH,'(//ul[@id="tournaments"]//li//input)[%s]' % str(index + 1))))
    clickMe1.click()
    time.sleep(0)
    ##tournaments > li > input
    #//*[@id='tournaments']//li//input

    # Team

#clickMe = wait(driver, 10).until(EC.element_to_be_clickable((By.CSS_SELECTOR,("#mta_row td:nth-child(1)"))))
langs3 = driver.find_elements_by_css_selector("#mta_row   td:nth-child(1)")
langs3_text = []

for lang in langs3:
    print(lang.text)

    langs3_text.append(lang.text)
time.sleep(0)

# Team ODDS
langs = driver.find_elements_by_css_selector("#mta_row   .mpm_teams_cell_click:nth-child(2)   .mpm_teams_bet_val")
langs_text = []

for lang in langs:
    print(lang.text)
    langs_text.append(lang.text)
time.sleep(0)


# HREF
#langs2 = driver.find_elements_by_xpath("//ul[@class='runners']//li[1]")
#a[href*="/sports/soccer/"]
#url1 = driver.current_url

#clickMe = wait(driver, 15).until(EC.element_to_be_clickable((By.CSS_SELECTOR, ('.match-pop-market a[href*="/sports/soccer/"]'))))
clickMe = wait(driver, 15).until(EC.element_to_be_clickable((By.XPATH, ("//*[@class='match-pop-market']//a[href*='/sports/soccer/']"))))
elems = driver.find_elements_by_css_selector('.match-pop-market a[href*="/sports/soccer/"]')
elem_href = []
for elem in elems:
    print(elem.get_attribute("href"))
    elem_href.append(elem.get_attribute("href"))


print(("NEW LINE BREAK"))
import sys
import io


with open('vtg121.csv', 'a', newline='', encoding="utf-8") as outfile:
    writer = csv.writer(outfile)
    for row in zip(langs_text, langs3_text, elem_href):
        writer.writerow(row)
        print(row)
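One caveat in the CSV section above: zip() stops at the shortest of the three lists, so if one selector returned fewer items than the others, the extra rows are silently dropped. If padding is preferable to dropping, itertools.zip_longest is a drop-in alternative (a sketch, keeping the same file and lists):

from itertools import zip_longest

with open('vtg121.csv', 'a', newline='', encoding="utf-8") as outfile:
    writer = csv.writer(outfile)
    # pads the shorter lists with '' instead of truncating to the shortest
    for row in zip_longest(langs_text, langs3_text, elem_href, fillvalue=''):
        writer.writerow(row)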

Your XPath is incorrect. Note that a predicate like [href*="/sports/soccer/"] can be used in a CSS selector, while in XPath you should use [contains(@href, "/sports/soccer/")]. So the complete line should be:

from selenium.common.exceptions import TimeoutException

try:
    clickMe = wait(driver, 15).until(EC.element_to_be_clickable((By.XPATH, "//*[@class='match-pop-market']//a[contains(@href, '/sports/soccer/')]")))
    clickMe.click()
except TimeoutException:
    print("No link was found")
