ConnectionResetError: [Errno 104] Connection reset by peer from specific website | Python3
I am trying to check whether a link is broken or not. For that I send each element (a link) from a list of dictionaries through a while loop, using urllib.request. The goal is to remove only the broken links from the list. The list contains links to different articles from https://jamanetwork.com/, and I want to be able to download the articles that exist.
However, I am getting a ConnectionResetError: [Errno 104] Connection reset by peer. I only get that error when testing links from https://jamanetwork.com/ (every page on that site); the code seems to work fine for other websites.
My question is: am I missing something here, or is it a server-side issue?
Here is my code (Python 3):
import urllib.request

i = 0
while i < len(dicts):
    url = dicts[i]['link']
    try:
        with urllib.request.urlopen(url) as f:
            status = f.getcode()
        i += 1
    except:
        del dicts[i]
Here is the traceback:
https://jamanetwork.com/
---------------------------------------------------------------------------
ConnectionResetError Traceback (most recent call last)
<ipython-input-59-8d93b45dbd14> in <module>()
22 print(url)
23
---> 24 with urllib.request.urlopen("https://jamanetwork.com/") as f:
25 status = f.getcode()
26 print(status)
12 frames
/usr/lib/python3.6/ssl.py in read(self, len, buffer)
629 """
630 if buffer is not None:
--> 631 v = self._sslobj.read(len, buffer)
632 else:
633 v = self._sslobj.read(len)
ConnectionResetError: [Errno 104] Connection reset by peer
Any suggestions are appreciated, thanks!
Based on this answer: you can't resolve a server error, but you can handle it.

So you can't do anything about the error itself; it is an issue on the server side. But you can use a try..except block to handle that exception:
Try this code:
import urllib.request

i = 0
while i < len(dicts):
    url = dicts[i]['link']
    try:
        f = urllib.request.urlopen(url)
    except OSError:  # URLError and ConnectionResetError are both subclasses of OSError
        del dicts[i]
    else:
        with f:
            status = f.getcode()
        i += 1
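As a variation, the same filtering can be written without manual index bookkeeping, by building a new list instead of deleting entries from the one being looped over. This is only a sketch; the `dicts` value below is a hypothetical example entry standing in for the real list of article links:

```python
import urllib.request

def link_ok(url, timeout=10):
    """Return True if the URL can be opened, False on any network error."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as f:
            return f.getcode() == 200
    except OSError:  # URLError and ConnectionResetError are both OSError subclasses
        return False

# Hypothetical example entry; in the question, dicts already holds article links.
dicts = [{'link': 'http://127.0.0.1:1/article'}]

# Keep only the dictionaries whose link is reachable.
dicts = [d for d in dicts if link_ok(d['link'])]
```

Building a new list sidesteps the subtle interaction between `del dicts[i]` and the loop index in the original version.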
声明:本站的技术帖子网页,遵循CC BY-SA 4.0协议,如果您需要转载,请注明本站网址或者原文地址。任何问题请咨询:yoyou2525@163.com.
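As a side note, some servers reset connections from clients that do not send a browser-like User-Agent header (urllib's default identifies itself as Python-urllib). Whether that is what this particular site does is an assumption, but it is cheap to test by attaching a header to the request:

```python
import urllib.request

# Build a request that sends a browser-like User-Agent instead of urllib's default.
# The exact header value here is only an example string.
req = urllib.request.Request(
    'https://jamanetwork.com/',
    headers={'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64)'},
)

# urlopen(req) would then fetch the page with that header, e.g.:
#   with urllib.request.urlopen(req, timeout=10) as f:
#       status = f.getcode()
```

If the request succeeds with the header but is reset without it, the problem is the server filtering clients, not the link being broken.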