
How to crawl website slower in Python with Jupyter Notebook?

My current Python script scrapes the website and gets through 2 pages within a second. I want to slow it down so that it spends, for example, about 25 seconds on each page. How do I do that?

I tried the following Python script.

# Dependencies
from bs4 import BeautifulSoup
import requests
import pandas as pd

# Testing
linked = 'https://www.zillow.com/homes/for_sale/San-Francisco-CA/fsba,fsbo,fore,new_lt/house_type/20330_rid/globalrelevanceex_sort/37.859675,-122.285557,37.690612,-122.580815_rect/11_zm/{}_p/0_mmm/'
for link in [linked.format(page) for page in range(1,2)]:
    user_agent = 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36'
    headers = {'User-Agent': user_agent}
    response = requests.get(link, headers=headers)
    soup = BeautifulSoup(response.text, 'html.parser')  # use the built-in HTML parser
print(soup)

What should I add to the script to make the web scraping slower?

Just use time.sleep:

import requests
import pandas as pd

from time import sleep
from bs4 import BeautifulSoup

linked = 'https://www.zillow.com/homes/for_sale/San-Francisco-CA/fsba,fsbo,fore,new_lt/house_type/20330_rid/globalrelevanceex_sort/37.859675,-122.285557,37.690612,-122.580815_rect/11_zm/{}_p/0_mmm/'

for link in [linked.format(page) for page in range(1,2)]:
    sleep(25.0)  # pause 25 seconds before requesting each page
    user_agent = 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36'
    headers = {'User-Agent': user_agent}
    response = requests.get(link, headers=headers)
    soup = BeautifulSoup(response.text, 'html.parser')  # use the built-in HTML parser
print(soup)
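
If you also want to avoid hitting the site at a perfectly regular 25-second rhythm, one possible variation (a sketch, not part of the original answer) is to sleep for a random amount of time around that value using random.uniform:

import random
from time import sleep

import requests
from bs4 import BeautifulSoup

linked = 'https://www.zillow.com/homes/for_sale/San-Francisco-CA/fsba,fsbo,fore,new_lt/house_type/20330_rid/globalrelevanceex_sort/37.859675,-122.285557,37.690612,-122.580815_rect/11_zm/{}_p/0_mmm/'
user_agent = 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36'
headers = {'User-Agent': user_agent}

for link in [linked.format(page) for page in range(1, 2)]:
    # wait a random 20-30 seconds so the requests are not evenly spaced
    sleep(random.uniform(20, 30))
    response = requests.get(link, headers=headers)
    soup = BeautifulSoup(response.text, 'html.parser')
    print(soup)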

