
How to crawl a website more slowly in Python with Jupyter Notebook?

My current Python script scrapes two pages of the website in about one second. I want to slow it down to something like 25 seconds per page. How do I do that?

I tried the following Python script.

# Dependencies
from bs4 import BeautifulSoup
import requests
import pandas as pd

# Testing
linked = 'https://www.zillow.com/homes/for_sale/San-Francisco-CA/fsba,fsbo,fore,new_lt/house_type/20330_rid/globalrelevanceex_sort/37.859675,-122.285557,37.690612,-122.580815_rect/11_zm/{}_p/0_mmm/'
for link in [linked.format(page) for page in range(1, 3)]:  # pages 1 and 2 (the original range(1, 2) only covered page 1)
    user_agent = 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36'
    headers = {'User-Agent': user_agent}
    response = requests.get(link, headers=headers)
    soup = BeautifulSoup(response.text, 'html.parser')  # 'html.parser' is a valid parser name; 'html.pafinite-item' is not
print(soup)  # note: only the soup of the last page survives the loop

What should I add to my script to make the web scraping go slower?

Just use time.sleep:

import requests
import pandas as pd

from time import sleep
from bs4 import BeautifulSoup

linked = 'https://www.zillow.com/homes/for_sale/San-Francisco-CA/fsba,fsbo,fore,new_lt/house_type/20330_rid/globalrelevanceex_sort/37.859675,-122.285557,37.690612,-122.580815_rect/11_zm/{}_p/0_mmm/'

for link in [linked.format(page) for page in range(1, 3)]:  # pages 1 and 2 (the original range(1, 2) only covered page 1)
    sleep(25.0)  # pause 25 seconds before each request
    user_agent = 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36'
    headers = {'User-Agent': user_agent}
    response = requests.get(link, headers=headers)
    soup = BeautifulSoup(response.text, 'html.parser')  # 'html.parser' instead of the invalid 'html.pafinite-item'
print(soup)  # note: only the soup of the last page survives the loop
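
Beyond a fixed 25-second pause, a common refinement is to randomize the delay so requests don't arrive at perfectly regular intervals, and to skip the pause before the very first request. The sketch below is a variation on the answer above, not part of the original answer; the 20-30 second range is an arbitrary choice:

import random
from time import sleep

import requests
from bs4 import BeautifulSoup

linked = 'https://www.zillow.com/homes/for_sale/San-Francisco-CA/fsba,fsbo,fore,new_lt/house_type/20330_rid/globalrelevanceex_sort/37.859675,-122.285557,37.690612,-122.580815_rect/11_zm/{}_p/0_mmm/'
user_agent = 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36'
headers = {'User-Agent': user_agent}

for i, page in enumerate(range(1, 3)):  # pages 1 and 2
    if i > 0:
        sleep(random.uniform(20.0, 30.0))  # random 20-30 s gap between requests, none before the first
    response = requests.get(linked.format(page), headers=headers)
    soup = BeautifulSoup(response.text, 'html.parser')
    print(soup.title)  # print just the page title as a quick sanity check

Here sleep(random.uniform(20.0, 30.0)) blocks for a random duration between 20 and 30 seconds, which averages out to roughly the 25 seconds per page you asked for.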
