Send request to website for crawl every second
I want to scrape a website once per second for 4 hours. How can I do this? My code is below.
import requests
from bs4 import BeautifulSoup

site = requests.get("http://example.com")
soup = BeautifulSoup(site.text, 'html.parser')
r = str(soup).split(",")
update_time = r[0]
price1 = r[2]
price2 = r[3]
print(update_time, price1, price2)
You can use the time and threading modules:
import requests
from threading import Thread
from time import sleep
from bs4 import BeautifulSoup

def scrape():
    site = requests.get("http://example.com")
    soup = BeautifulSoup(site.text, 'html.parser')
    r = str(soup).split(",")
    update_time = r[0]
    price1 = r[2]
    price2 = r[3]
    print(update_time, price1, price2)

# 4 hours = 14400 seconds, so start one scrape per second, 14400 times
for i in range(14400):
    t = Thread(target=scrape)
    t.start()
    sleep(1)
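One caveat with a fixed iteration count: each loop pass takes slightly longer than one second (sleep plus thread start-up), so the total run drifts past four hours. A minimal sketch of driving the loop by wall-clock time instead (the helper name run_every_second is my own, not from the answer above):

```python
import time

def run_every_second(task, duration_seconds):
    # Call task() roughly once per second until duration_seconds of
    # wall-clock time have elapsed. time.monotonic() is used so that
    # system clock adjustments cannot distort the window.
    end = time.monotonic() + duration_seconds
    while time.monotonic() < end:
        task()
        time.sleep(1)

# Demo with a tiny window and a counting task standing in for scrape():
calls = []
run_every_second(lambda: calls.append(1), 2)
print(len(calls))  # about one call per second of the window
```

For the question's case, `run_every_second(scrape, 4 * 60 * 60)` would cap the run at four hours regardless of how long each request takes.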
You can use the schedule module for this:
import schedule
import time
import requests
from bs4 import BeautifulSoup

def crawl():
    site = requests.get("http://example.com")
    soup = BeautifulSoup(site.text, 'html.parser')
    r = str(soup).split(",")
    update_time = r[0]
    price1 = r[2]
    price2 = r[3]
    print(update_time, price1, price2)

# run crawl() once per second
schedule.every(1).seconds.do(crawl)

while True:
    schedule.run_pending()
    time.sleep(1)
The four-hour window can be implemented with crontab or a for loop.
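As a sketch of the crontab route (the script path and start time below are assumptions): cron's finest granularity is one minute, so cron can only launch the script at the start of the window, and the per-second loop must live inside the script itself.

```
# hypothetical crontab entry: launch the crawler at 09:00 every day;
# the script itself then loops once per second for the 4-hour window
0 9 * * * /usr/bin/python3 /home/user/crawl.py
```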
You must install the schedule module to run the script above:

sudo pip install schedule