
Python Web Scraping with Selenium | Parallel execution (Multi-threading)

I have a use case for which I'm unable to work out the logic, so I'm floating it here for recommendations from experts.

Quick context:
I have a list of 2,500 URLs. I am able to scrape them sequentially using Python and Selenium.
The run time for 1,000 URLs is approximately 1.5 hours.

What I am trying to achieve:
I am trying to optimize the run time through parallel execution. I have reviewed various posts on Stack Overflow, but somehow I am unable to find the missing pieces of the puzzle.

Details

  1. I need to reuse the drivers instead of closing and reopening them for every URL. I came across the post Python selenium multiprocessing, which leverages threading.local(). However, if I rerun the same code, the number of drivers that are opened exceeds the number of threads specified.

  2. Please note that the website requires the user to log in with a user name and password. My objective is to launch the drivers (say 5 drivers) once, log in, and then keep reusing the same drivers for all future URLs without having to close them and log in again.

  3. Also, I am new to Selenium web scraping and just getting familiar with the basics; multi-threading is uncharted territory for me. I would really appreciate your help here.

Sharing my code snippet below:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
import pandas as pd
import threading
from multiprocessing.dummy import Pool as ThreadPool


threadLocal = threading.local()


# Function to open web driver
def get_driver():
    options = Options()
    driver = webdriver.Chrome(<Location to chrome driver>, options = options)    
    return driver


# Function to login to website & scrape from website
def parse_url(url):
    driver = get_driver()
    login_url = "https://..."
    driver.get(login_url)

    # Enter user ID
    # Enter password
    # Click on Login button

    # Open web page of interest & scrape
    driver.get(url)
    htmltext = driver.page_source
    htmltext1 = htmltext[0:100]
    return [url, htmltext1]
    

# Function for multi-threading
def main():
    urls = ["url1",
            "url2",
            "url3",
            "url4"]

    pool = ThreadPool(2)
    records = pool.map(parse_url, urls)
    pool.close()
    pool.join()
    
    return records


if __name__ =="__main__":
    result = pd.DataFrame(columns = ["url", "html_text"], data = main())

How can I modify the above code such that:

  1. I end up reusing my drivers
  2. I log in to the website only once & scrape multiple URLs in parallel
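One way to satisfy both requirements is to create each driver lazily per worker thread with threading.local(), performing the login immediately after the driver is created; every later task on that thread then reuses the already-authenticated driver. The sketch below is an assumption-laden illustration: FakeDriver, login, and the created list are stand-ins invented here so the pattern runs without a browser; in real code FakeDriver() would be webdriver.Chrome(...) and login() would fill in the credentials.

```python
import threading
from multiprocessing.dummy import Pool as ThreadPool

thread_local = threading.local()
created = []  # tracks how many drivers were built (demonstration only)

class FakeDriver:
    """Stand-in for webdriver.Chrome; swap in the real driver."""
    def __init__(self):
        self.logged_in = False
        created.append(self)
    def get(self, url):
        return f"<html>{url}</html>"

def login(driver):
    # Real code would enter the user name / password and click Login here.
    driver.logged_in = True

def get_driver():
    # One driver per thread: create and log in only on first use.
    driver = getattr(thread_local, "driver", None)
    if driver is None:
        driver = FakeDriver()
        login(driver)
        thread_local.driver = driver
    return driver

def parse_url(url):
    driver = get_driver()  # reused and already logged in on this thread
    html = driver.get(url)
    return [url, html[:100]]

urls = [f"url{i}" for i in range(20)]
pool = ThreadPool(2)
records = pool.map(parse_url, urls)
pool.close()
pool.join()
```

With 2 pool threads, at most 2 drivers are ever created and each logs in exactly once, no matter how many URLs are mapped.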

I believe that starting browsers in separate processes and communicating with them via queues is a good approach (and a more scalable one). A process can easily be killed and respawned if something goes wrong. The pseudo-code might look like this:

#  worker.py 
def entrypoint(in_queue, out_queue):  # run in process
    crawler = Crawler()
    browser = Browser() # init, login and etc.
    while not stop:
        command = in_queue.get()
        result = crawler.handle(command, browser)
        out_queue.put(result)            

# main.py
import worker

in_queue, out_queue = create_queues()
create_process(worker.entrypoint, args=(in_queue, out_queue))
while not stop:
    in_queue.put(new_task)
    result = out_queue.get()
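A minimal runnable version of this queue protocol is sketched below. For brevity it runs the worker in a thread with queue.Queue; a real deployment would swap in multiprocessing.Process and multiprocessing.Queue with the same entrypoint signature. FakeBrowser and its fetch method are placeholders invented here for the logged-in Selenium driver and the scraping logic.

```python
import queue
import threading

STOP = object()  # sentinel telling the worker to shut down

class FakeBrowser:
    """Stand-in for a Selenium driver that has already logged in."""
    def fetch(self, url):
        return f"<html>{url}</html>"

def entrypoint(in_queue, out_queue):
    browser = FakeBrowser()  # init + login would happen once, here
    while True:
        command = in_queue.get()
        if command is STOP:
            break
        out_queue.put((command, browser.fetch(command)))

in_q, out_q = queue.Queue(), queue.Queue()
worker = threading.Thread(target=entrypoint, args=(in_q, out_q))
worker.start()

for url in ["url1", "url2", "url3"]:
    in_q.put(url)
results = [out_q.get() for _ in range(3)]
in_q.put(STOP)
worker.join()
```

Because the browser lives for the whole life of the worker loop, it is initialised (and logged in) exactly once, and the main process only ever exchanges plain messages with it.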

I know it's too late to answer this question, but I am going to drop a code snippet that does the job for anyone who needs it.

import re
import threading
from selenium import webdriver

# PATH (to chromedriver) and chrome_options are assumed to be defined elsewhere.
drivers_dict = {}

# We make one driver instance per thread, so that we can reuse it.
def scraping_function(link):
    try:
        thread_name = threading.current_thread().name
        # Thread names can differ between reruns ("ThreadPoolExecutor-<pool>_<thread>"),
        # so a little regex normalises the pool number to 0:
        thread_name = re.sub(r"ThreadPoolExecutor-(\d*)_(\d*)", r"ThreadPoolExecutor-0_\2", thread_name)
        print(f"re.sub -> {thread_name}")
        driver = drivers_dict[thread_name]
    except KeyError:
        # Key on the normalised name, otherwise a rerun opens extra drivers.
        drivers_dict[thread_name] = webdriver.Chrome(PATH, options=chrome_options)
        driver = drivers_dict[thread_name]
    driver.get(link)
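To see why the re.sub normalisation matters, note that each new ThreadPoolExecutor gets a fresh pool number in its thread names, so without normalising, a rerun would miss the cached drivers and open new ones. The self-contained sketch below demonstrates this with a FakeDriver stand-in (invented here) in place of webdriver.Chrome: two separate pools are run, yet the driver count stays bounded by max_workers.

```python
import re
import threading
from concurrent.futures import ThreadPoolExecutor

drivers_dict = {}

class FakeDriver:
    """Stand-in for webdriver.Chrome(PATH, options=chrome_options)."""
    def get(self, link):
        return f"<html>{link}</html>"

def scraping_function(link):
    # Normalise "ThreadPoolExecutor-<pool>_<thread>" so a rerun with a new
    # pool maps onto the drivers created by the first pool.
    name = re.sub(r"ThreadPoolExecutor-(\d*)_(\d*)",
                  r"ThreadPoolExecutor-0_\2",
                  threading.current_thread().name)
    driver = drivers_dict.setdefault(name, FakeDriver())
    return driver.get(link)

links = [f"url{i}" for i in range(10)]
for _ in range(2):  # two separate pools; drivers are still reused
    with ThreadPoolExecutor(max_workers=3) as pool:
        results = list(pool.map(scraping_function, links))
```

Note that dict.setdefault is atomic in CPython, which keeps the per-thread lookup-or-create step safe here; with a real Selenium driver you would also want to call driver.quit() on every value in drivers_dict when you are done.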
