
Fixing Traceback (most recent call last) error?

I wrote a program to extract all the links to PDF files on a web page. The program runs perfectly, with no errors, on some websites, for example:

Hussam# python extractPDF.py http://www.cs.odu.edu/~mln/teaching/cs532-s17/test/pdfs.html

Output:

Entered URL:
http://www.cs.odu.edu/~mln/teaching/cs532-s17/test/pdfs.html
Final URL:
http://www.cs.odu.edu/~mln/teaching/cs532-s17/test/pdfs.html
http://www.cs.odu.edu/~mln/pubs/ht-2015/hypertext-2015-temporal-violations.pdf
Size: 2184076
http://www.cs.odu.edu/~mln/pubs/tpdl-2015/tpdl-2015-annotations.pdf
Size: 622981
http://arxiv.org/pdf/1512.06195
Size: 1748961
http://www.cs.odu.edu/~mln/pubs/tpdl-2015/tpdl-2015-off-topic.pdf
Size: 4308768
http://www.cs.odu.edu/~mln/pubs/tpdl-2015/tpdl-2015-stories.pdf
Size: 1274604
http://www.cs.odu.edu/~mln/pubs/tpdl-2015/tpdl-2015-profiling.pdf
Size: 639001
http://www.cs.odu.edu/~mln/pubs/jcdl-2014/jcdl-2014-brunelle-damage.pdf
Size: 2205546
http://www.cs.odu.edu/~mln/pubs/jcdl-2015/jcdl-2015-mink.pdf
Size: 1254605
http://www.cs.odu.edu/~mln/pubs/jcdl-2015/jcdl-2015-arabic-sites.pdf
Size: 709420
http://www.cs.odu.edu/~mln/pubs/jcdl-2015/jcdl-2015-dictionary.pdf
Size: 2350603

On the other hand, if I try the following link:

Hussam# python extractPDF.py http://www.cs.odu.edu/~mln/pubs/all.html

I get the correct output, but with an error at the end:

Entered URL:
http://www.cs.odu.edu/~mln/pubs/all.html
Final URL:
http://www.cs.odu.edu/~mln/pubs/all.html
http://www.cs.odu.edu/~mln/pubs/tpdl-2016/tpdl-2016-kelly.pdf
Size: 953454
http://www.cs.odu.edu/~mln/pubs/tpdl-2016/tpdl-2016-alam.pdf
Size: 928749
http://www.cs.odu.edu/~mln/pubs/jcdl-2016/jcdl-2016-alam-ipfs.pdf
Size: 516538
http://www.cs.odu.edu/~mln/pubs/jcdl-2016/jcdl-2016-alam-memgator.pdf
Size: 345028
http://www.cs.odu.edu/~mln/pubs/jcdl-2016/jcdl-2016-nwala.pdf
Size: 640173
http://www.cs.odu.edu/~mln/pubs/ht-2015/hypertext-2015-temporal-violations.pdf
Size: 2184076
http://www.cs.odu.edu/~mln/pubs/tpdl-2015/tpdl-2015-annotations.pdf
Size: 622981
http://www.cs.odu.edu/~mln/pubs/tpdl-2015/tpdl-2015-off-topic.pdf
Size: 4308768
http://www.cs.odu.edu/~mln/pubs/tpdl-2015/tpdl-2015-stories.pdf
Size: 1274604
http://www.cs.odu.edu/~mln/pubs/tpdl-2015/tpdl-2015-profiling.pdf
Size: 639001
http://www.cs.odu.edu/~mln/pubs/jcdl-2015/jcdl-2015-temporal-intention.pdf
Size: 720476
http://www.cs.odu.edu/~mln/pubs/jcdl-2015/jcdl-2015-mink.pdf
Size: 1254605
http://www.cs.odu.edu/~mln/pubs/jcdl-2015/jcdl-2015-arabic-sites.pdf
Size: 709420
http://www.cs.odu.edu/~mln/pubs/jcdl-2015/jcdl-2015-dictionary.pdf
Size: 2350603
http://www.cs.odu.edu/~mln/pubs/jcdl-2014/jcdl-2014-kelly-acid.pdf
Size: 541843
http://www.cs.odu.edu/~mln/pubs/jcdl-2014/jcdl-2014-kelly-mink.pdf
Size: 556863
http://www.cs.odu.edu/~mln/pubs/jcdl-2014/jcdl-2014-brunelle-damage.pdf
Size: 2205546
http://www.cs.odu.edu/~mln/pubs/jcdl-2014/jcdl-2014-cartledge-copies.pdf
Size: 1199511
http://www.cs.odu.edu/~mln/pubs/sigcse-2014/web-science-sigcse-2014.pdf
Size: 158242
http://www.cs.odu.edu/~mln/pubs/ecir-2014/ecir-2014.pdf
Size: 902825
http://www.cs.odu.edu/~mln/pubs/ieee-vis-2013/2013-ieee-vis-boxoffice.pdf
Size: 122738
Traceback (most recent call last):
  File "extractPDF.py", line 21, in <module>
    r = urllib2.urlopen(link)
  File "/usr/lib/python2.7/urllib2.py", line 126, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib/python2.7/urllib2.py", line 397, in open
    response = meth(req, response)
  File "/usr/lib/python2.7/urllib2.py", line 510, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python2.7/urllib2.py", line 429, in error
    result = self._call_chain(*args)
  File "/usr/lib/python2.7/urllib2.py", line 369, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 605, in http_error_302
    return self.parent.open(new, timeout=req.timeout)
  File "/usr/lib/python2.7/urllib2.py", line 397, in open
    response = meth(req, response)
  File "/usr/lib/python2.7/urllib2.py", line 510, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python2.7/urllib2.py", line 435, in error
    return self._call_chain(*args)
  File "/usr/lib/python2.7/urllib2.py", line 369, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 518, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden

Here is the code of the program:

import sys
from bs4 import BeautifulSoup
import urllib2
import re

if len(sys.argv) != 2:
        print "USAGE:"
        print "python extractPDF.py http://example.com/page.html"
else:
        url = sys.argv[1]
        print "Entered URL:"
        print url
        html_page = urllib2.urlopen(url)
        print "Final URL:"
        print html_page.geturl()
        soup = BeautifulSoup(html_page, "html.parser")
        links = []
        # Collect the href of every absolute http:// anchor on the page
        for link in soup.findAll('a', attrs={'href': re.compile("^http://")}):
                links.append(link.get('href'))
        # Fetch each link and report the size of those that are PDFs
        for link in links:
                r = urllib2.urlopen(link)
                if r.headers['content-type'] == "application/pdf":
                        print link
                        print "Size: " + r.headers['Content-Length']

Your code extracts all the links on the page. At least one of those links (not necessarily a PDF link) is not available to you. 403 Forbidden means: "The server understood the request, but is refusing to authorize it." The URL probably requires credentials that grant access.

urllib2 raises exceptions for error conditions. Your code will need to handle some of them.
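The two exception families you will meet are HTTPError (HTTP status errors such as 403 or 404, carrying a .code) and URLError (lower-level failures such as DNS errors, carrying only a .reason). HTTPError is a subclass of URLError, so it must be caught first. A minimal sketch of a defensive fetch helper (the name fetch and the py2/py3 import fallback are my own, not from the original code):

```python
try:  # Python 2, as in the question
    from urllib2 import urlopen, HTTPError, URLError
except ImportError:  # Python 3 equivalent
    from urllib.request import urlopen
    from urllib.error import HTTPError, URLError

def fetch(url):
    """Return the response, or None if the URL cannot be fetched.

    HTTPError must be caught before URLError, since it is a
    subclass of URLError.
    """
    try:
        return urlopen(url)
    except HTTPError as e:
        print("HTTP error %d for %s" % (e.code, url))
    except URLError as e:
        print("Failed to reach %s: %s" % (url, e.reason))
    return None
```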

If you just want the code to keep going without dying, replace the relevant part with:

    for link in links:
        r = None
        try:
            r = urllib2.urlopen(link)
        except urllib2.HTTPError as e:
            print link
            # e.code is an int, so it must be formatted, not concatenated
            print "Error: %d %s" % (e.code, e.reason)
            continue

        if r.headers['content-type'] == "application/pdf":
            print link
            print "Size: " + r.headers['Content-Length']
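Separately, some servers answer 403 Forbidden to clients that do not send a browser-like User-Agent header (urllib2 identifies itself as Python-urllib by default). Whether this helps depends on the server; a minimal sketch (the helper name pdf_request and the py2/py3 import fallback are my own additions):

```python
try:  # Python 2, as in the question
    from urllib2 import Request, urlopen
except ImportError:  # Python 3 equivalent
    from urllib.request import Request, urlopen

def pdf_request(url):
    # Build a request carrying a browser-like User-Agent instead of
    # the default "Python-urllib/x.y" identifier, which some servers
    # reject with 403.
    return Request(url, headers={"User-Agent": "Mozilla/5.0"})

# usage: r = urlopen(pdf_request(link))
```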
