Why do requests stop after a certain time in Python?
I have this code whose job is to send GET requests, reading the website links from a text file. The problem is that after sending 300 or 500 requests, the script stops without showing any error — it just stops working. Why?
import requests

sites = open(r'site.txt', 'r', encoding="utf8").readlines()
l_site = []
for i in sites:
    l_site.append(i)

for x in len(l_site):
    result = requests.get(f'{site}', allow_redirects=True).text
    open('result.txt', 'a').write(f'{result}\n')
If I understand correctly, this is what you want: read the URLs from site.txt and write the responses to result.txt. Here is working code. Note that you can change the except clause if you want to catch more types of errors.
import requests

URLS_FILE = 'site.txt'
RESULT_FILE = 'result.txt'
ERRORS_FILE = 'result-error.txt'

def handle_url(url: str, result_file, error_file):
    try:
        # 10-second timeout: not the download time, but the time to get an HTTP response
        content = requests.get(url, allow_redirects=True, timeout=10)
        result_file.write(f'{content.text}\n')
    except requests.exceptions.ConnectTimeout as e:
        error_file.write(f'{url}: {e}\n')

with open(URLS_FILE, 'r', encoding="utf8") as f:
    with open(RESULT_FILE, 'a') as rf:
        with open(ERRORS_FILE, 'a') as ef:
            for url in f.readlines():
                handle_url(url, rf, ef)
I think your function does more than what you posted here, because I can't see where the site variable is created.
You could do something along these lines to get a better idea of where it stops.
import requests

sites = open(r'site.txt', 'r', encoding="utf8").readlines()
l_site = [s for s in sites]

with open('result.txt', 'a') as fb:
    for site in l_site:
        try:
            print(f"Processing {site}")
            result = requests.get(f'{site}', allow_redirects=True).text
            fb.write(f'{result}\n')
        except Exception as e:
            raise e
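A likely cause of the silent stall is a request that never returns: requests has no default timeout, so one dead host can hang the loop forever with no error printed. A sketch combining the two ideas above, with an assumed helper name `fetch_all`: pass a per-request timeout so a dead host raises instead of hanging, and record failures and continue instead of re-raising, so one bad site does not stop the whole run.

```python
import requests

def fetch_all(urls_file='site.txt', results_file='result.txt'):
    """Fetch every URL in urls_file; return a list of (url, error) failures."""
    failed = []
    with open(urls_file, 'r', encoding='utf8') as f, open(results_file, 'a') as fb:
        for site in f:
            site = site.strip()  # drop the trailing newline from the file
            if not site:
                continue
            try:
                # timeout=10 raises instead of hanging forever on a dead host
                result = requests.get(site, allow_redirects=True, timeout=10).text
                fb.write(f'{result}\n')
            except requests.RequestException as e:
                # record and continue so one bad site does not stop the run
                failed.append((site, str(e)))
    return failed
```

Printing or saving the returned failure list afterwards shows exactly which URLs were skipped and why.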