Python Selenium with BeautifulSoup for multiple links

I want to extract links from multiple web pages. Extraction itself works fine, but when I loop over multiple URLs the first URL is scraped twice and the last one is not scraped at all. What is the reason for this?

import re
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
import csv
from bs4 import BeautifulSoup

URLs = ["https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/1",
        "https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/2",
        "https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/3",
        "https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/4",
        "https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/5",
        "https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/6",
        "https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/7"]

driver = webdriver.Chrome(ChromeDriverManager().install())

file = open('linkler.csv', 'w+', newline='')
writer = csv.writer(file)
writer.writerow(['linkler'])


for link in URLs:
  driver.get(link)

  html_source = driver.page_source

  soup = BeautifulSoup(html_source, "html.parser")

  for links in soup.findAll('a', attrs={'href': re.compile("^/soccer/turkey/super-lig-2019-2020/")}):
    writer.writerow([links.get('href')])


driver.quit()

After a lot of scanning I found the problem: the site blocks your requests if there is no rest time between them. I fixed it by adding a sleep between page loads. Your code now works fine; I tested it!

import re
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
import csv
from bs4 import BeautifulSoup
import time

URLs = ["https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/1",
        "https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/2",
        "https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/3",
        "https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/4",
        "https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/5",
        "https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/6",
        "https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/7"]

driver = webdriver.Chrome(ChromeDriverManager().install())

file = open('linkler.csv', 'w+', newline='')
writer = csv.writer(file)
writer.writerow(['linkler'])

for link in URLs:
    driver.get(link)
    time.sleep(5)
    html_source = driver.page_source

    soup = BeautifulSoup(html_source, "html.parser")

    for links in soup.findAll('a', attrs={'href': re.compile("^/soccer/turkey/super-lig-2019-2020/")}):
        writer.writerow([links.get('href')])

driver.quit()
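A fixed 5-second sleep works, but it pauses even when the page loaded quickly and gives up if one request is still blocked. As a hedged alternative sketch (not part of the original answer), the page load can be wrapped in a retry loop with increasing back-off; `fetch_with_backoff` is a hypothetical helper, and the `fetch` callable stands in for `driver.get` plus the parsing step:

```python
import time

def fetch_with_backoff(fetch, url, retries=3, base_delay=2.0):
    """Call fetch(url); on failure wait base_delay, 2*base_delay, ... then retry.

    Re-raises the last exception once all retries are exhausted.
    """
    for attempt in range(retries):
        try:
            return fetch(url)
        except Exception:
            if attempt == retries - 1:
                raise
            # Linearly increasing pause before the next attempt.
            time.sleep(base_delay * (attempt + 1))
```

In the loop above you would call `fetch_with_backoff(scrape_one_page, link)` instead of `driver.get(link)` followed by a fixed `time.sleep(5)`, where `scrape_one_page` is whatever function loads and parses a single URL.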

What happens?

The duplicates are caused by duplicate links on the pages combined with your matching regex, so the script works as designed. The good news: you can fix that. ;)

How to avoid writing duplicates?

Create a list that holds only unique hrefs and check whether each newly scraped href is already in it. If it is not, write it to the CSV and add it to the list. (Alternatively, you could collect all unique hrefs first and write the list to the CSV at the end.)

Example

...
file = open('linkler.csv', 'w+', newline='')
writer = csv.writer(file)
writer.writerow(['linkler'])

hrefList = []

for link in URLs:
    driver.get(link)

    html_source = driver.page_source

    soup = BeautifulSoup(html_source, "html.parser")
    
    for links in soup.findAll('a', attrs={'href': re.compile("^/soccer/turkey/super-lig-2019-2020/")}):
        if links.get('href') not in hrefList:
            hrefList.append(links.get('href'))
            writer.writerow([links.get('href')])

file.close()
...
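For a handful of pages the list check above is fine, but `href not in hrefList` scans the whole list on every hit. As a small optional sketch (the helper name `dedupe_keep_order` is mine, not from the answer), a `set` gives average O(1) membership tests while a parallel list preserves first-seen order for the CSV:

```python
def dedupe_keep_order(hrefs):
    """Return hrefs with duplicates removed, preserving first-seen order."""
    seen = set()     # fast membership checks
    unique = []      # keeps the original order for writing out
    for href in hrefs:
        if href not in seen:
            seen.add(href)
            unique.append(href)
    return unique
```

In the scraping loop you would collect every matched `links.get('href')` into one list, then pass it through `dedupe_keep_order` before writing the rows.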
