
Disable SSL certificate verification in Scrapy

I am currently struggling with an issue in Scrapy. Whenever I use Scrapy to scrape an HTTPS site where the certificate's CN value matches the server's domain name, Scrapy works great. However, whenever I try scraping a site where the certificate's CN value does NOT match the server's domain name, I get the following:

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/twisted/protocols/tls.py", line 415, in dataReceived
    self._write(bytes)
  File "/usr/local/lib/python2.7/dist-packages/twisted/protocols/tls.py", line 554, in _write
    sent = self._tlsConnection.send(toSend)
  File "/usr/local/lib/python2.7/dist-packages/OpenSSL/SSL.py", line 1270, in send
    result = _lib.SSL_write(self._ssl, buf, len(buf))
  File "/usr/local/lib/python2.7/dist-packages/OpenSSL/SSL.py", line 926, in wrapper
    callback(Connection._reverse_mapping[ssl], where, return_code)
--- <exception caught here> ---
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/_sslverify.py", line 1055, in infoCallback
    return wrapped(connection, where, ret)
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/_sslverify.py", line 1154, in _identityVerifyingInfoCallback
    verifyHostname(connection, self._hostnameASCII)
  File "/usr/local/lib/python2.7/dist-packages/service_identity/pyopenssl.py", line 30, in verify_hostname
    obligatory_ids=[DNS_ID(hostname)],
  File "/usr/local/lib/python2.7/dist-packages/service_identity/_common.py", line 235, in __init__
    raise ValueError("Invalid DNS-ID.")
exceptions.ValueError: Invalid DNS-ID.

I have looked through as much documentation as I can, and as far as I can tell Scrapy does not have a way to disable SSL certificate verification. Even the documentation for the Scrapy Request object (which I would assume is where this functionality would live) makes no mention of it:

http://doc.scrapy.org/en/1.0/topics/request-response.html#scrapy.http.Request
https://github.com/scrapy/scrapy/blob/master/scrapy/http/request/__init__.py

There are also no Scrapy settings which address the issue:

http://doc.scrapy.org/en/1.0/topics/settings.html

Short of running Scrapy from source and modifying it as needed, does anyone have any ideas for how I can disable SSL certificate verification?

Thank you!

From the documentation you linked for the settings, it looks like you would be able to modify the DOWNLOAD_HANDLERS setting.

From the docs:

"""
    A dict containing the request download handlers enabled by default in
    Scrapy. You should never modify this setting in your project, modify
    DOWNLOAD_HANDLERS instead.
"""

DOWNLOAD_HANDLERS_BASE = {
    'file': 'scrapy.core.downloader.handlers.file.FileDownloadHandler',
    'http': 'scrapy.core.downloader.handlers.http.HTTPDownloadHandler',
    'https': 'scrapy.core.downloader.handlers.http.HTTPDownloadHandler',
    's3': 'scrapy.core.downloader.handlers.s3.S3DownloadHandler',
}

Then in your settings, something like this:

""" 
    Configure your download handlers with something custom to override
    the default https handler
"""
DOWNLOAD_HANDLERS = {
    'https': 'my.custom.downloader.handler.https.HttpsDownloaderIgnoreCNError',
}

So by defining a custom handler for the https protocol, you should be able to handle the error you're getting and allow Scrapy to continue with its business.
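For reference, here is a minimal sketch of what such a handler might look like, assuming Scrapy 1.0's HTTP11DownloadHandler. The module path my/custom/downloader/handler/https.py and the class names are just placeholders matching the settings example above, not anything Scrapy provides:

# my/custom/downloader/handler/https.py -- hypothetical module matching
# the DOWNLOAD_HANDLERS entry above; adjust the path to your project.
from OpenSSL import SSL
from twisted.internet.ssl import ClientContextFactory
from scrapy.core.downloader.handlers.http11 import HTTP11DownloadHandler


class PermissiveContextFactory(ClientContextFactory):
    """SSL context factory that performs no certificate or hostname checks."""

    def __init__(self):
        # Let OpenSSL negotiate whichever protocol version the server supports
        self.method = SSL.SSLv23_METHOD

    def getContext(self, hostname=None, port=None):
        # hostname/port are ignored, so no identity (CN) check is performed
        ctx = ClientContextFactory.getContext(self)
        ctx.set_options(SSL.OP_ALL)  # enable workarounds for known SSL bugs
        return ctx


class HttpsDownloaderIgnoreCNError(HTTP11DownloadHandler):
    """HTTPS download handler that swaps in the permissive context factory."""

    def __init__(self, settings):
        super(HttpsDownloaderIgnoreCNError, self).__init__(settings)
        self._contextFactory = PermissiveContextFactory()

As a side note, since HTTP11DownloadHandler builds its context factory from the DOWNLOADER_CLIENTCONTEXTFACTORY setting, you may also be able to skip the custom handler entirely and just point that setting at a permissive factory like the one sketched above.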
