
Element is not in response Python Requests

I would like to scrape the last odds from the archive on this page https://www.betexplorer.com/soccer/estonia/esiliiga/elva-flora-tallinn/Q9KlbwaJ/ but I can't get it with requests. How can I get it without using Selenium? To trigger the archive odds request in the Developer Tools I need to hover over the odds.

[screenshots from the browser Developer Tools]

Code

 url = "https://www.betexplorer.com/archive-odds/4l4ubxv464x0xc78lr/14/"
 headers = {
            "Referer": "https://www.betexplorer.com",
                    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.107 Safari/537.36'
               }
Json = requests.get(url, headers=headers).json()

As the page is loaded by JavaScript, requests alone doesn't work. I have used selenium to load the page and extract the complete source code after everything has loaded.

Then I used beautifulsoup to create a soup object and extract the required data.

From the source code you can see that the data-bid attributes of the <tr> elements are what get passed to fetch the odds data.

I extracted all the data-bid values and passed them one by one to the URL you provided at the very end of your question.

This code will get all the odds data in JSON format:

import time
from bs4 import BeautifulSoup
import requests
from selenium import webdriver

base_url = 'https://www.betexplorer.com/soccer/estonia/esiliiga/elva-flora-tallinn/Q9KlbwaJ/'
driver = webdriver.Chrome()
driver.get(base_url)

time.sleep(5)  # crude fixed wait for the JavaScript-rendered content to load

# Parse the fully rendered page and locate the odds table
soup = BeautifulSoup(driver.page_source, 'html.parser')
driver.quit()

t = soup.find('table', attrs={'id': 'sortable-1'})
trs = t.find('tbody').find_all('tr')

headers = {
    "Referer": "https://www.betexplorer.com",
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.107 Safari/537.36'
}

# Request the archive odds for each bookmaker row using its data-bid
for i in trs:
    data_bid = i['data-bid']
    url = f"https://www.betexplorer.com/archive-odds/4l4ubxv464x0xc78lr/{data_bid}/"
    Json = requests.get(url, headers=headers).json()

    # Do what you wish to do with the JSON data here....
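A possible refinement, not part of the original answer: instead of a fixed time.sleep, wait explicitly until the odds table (assumed here to keep the same sortable-1 id) has been rendered, and reuse one requests.Session for the repeated archive-odds calls. A minimal sketch under those assumptions:

import requests
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

base_url = 'https://www.betexplorer.com/soccer/estonia/esiliiga/elva-flora-tallinn/Q9KlbwaJ/'

driver = webdriver.Chrome()
driver.get(base_url)

# Wait up to 15 seconds for the odds table to appear instead of sleeping blindly
WebDriverWait(driver, 15).until(
    EC.presence_of_element_located((By.ID, 'sortable-1'))
)
page_source = driver.page_source
driver.quit()

soup = BeautifulSoup(page_source, 'html.parser')
rows = soup.find('table', attrs={'id': 'sortable-1'}).find('tbody').find_all('tr')

# Reuse one HTTP session (and its headers) for all archive-odds requests
session = requests.Session()
session.headers.update({
    "Referer": "https://www.betexplorer.com",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/92.0.4515.107 Safari/537.36",
})

for row in rows:
    data_bid = row['data-bid']
    url = f"https://www.betexplorer.com/archive-odds/4l4ubxv464x0xc78lr/{data_bid}/"
    odds_json = session.get(url).json()
    # Process odds_json as needed...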
