
Why does scrapy miss some links?

I am scraping the website "www.accell-group.com" using the "scrapy" library for Python. The site is crawled completely; in total, 131 pages (text/html) and 2 documents (application/pdf) are identified. Scrapy did not throw any warnings or errors. My algorithm is supposed to scrape every single link. I use CrawlSpider.

However, when I look into the page " http://www.accell-group.com/nl/investor-relations/jaarverslagen/jaarverslagen-van-accell-group.htm ", which "scrapy" reports as scraped/processed, I see that it contains more PDF documents, for example " http://www.accell-group.com/files/4/5/0/1/Jaarverslag2014.pdf ". I cannot find any reason for them not to be scraped. There is no dynamic/JavaScript content on this page. It is not forbidden in " http://www.airproducts.com/robots.txt ".
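
For completeness, a quick check with Python's standard urllib.robotparser (a sketch; it assumes the crawled site's own robots.txt is the relevant one) shows that the PDF path is not disallowed:

from urllib.robotparser import RobotFileParser

# Sketch: verify that the missing PDF is not blocked by robots.txt
rp = RobotFileParser("http://www.accell-group.com/robots.txt")
rp.read()
print(rp.can_fetch("*", "http://www.accell-group.com/files/4/5/0/1/Jaarverslag2014.pdf"))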

Do you maybe have any idea why this can happen? Is it maybe because the "files" folder is not in " http://www.accell-group.com/sitemap.xml "?

Thanks in advance!

My code:

import re
import logging
from urllib.parse import urlparse

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.selector import Selector

# PyscrappItem, the WARC handler and ensure_not_null are defined elsewhere in the project
logger = logging.getLogger(__name__)


class PyscrappSpider(CrawlSpider):
    """This is the Pyscrapp spider"""
    name = "PyscrappSpider"

    def __init__(self, *a, **kw):

        # Get the passed URL
        originalURL = kw.get('originalURL')
        logger.debug('Original url = {}'.format(originalURL))

        # Add a protocol, if needed
        startURL = 'http://{}/'.format(originalURL)
        self.start_urls = [startURL]

        self.in_redirect = {}
        self.allowed_domains = [urlparse(i).hostname.strip() for i in self.start_urls]
        self.pattern = r""
        self.rules = (Rule(LinkExtractor(deny=[r"accessdenied"]), callback="parse_data", follow=True), )

        # Get WARC writer
        self.warcHandler = kw.get('warcHandler')

        # Initialise the base constructor
        super(PyscrappSpider, self).__init__(*a, **kw)

    def parse_start_url(self, response):
        # If the start URL was redirected, remember the original URL and
        # add it to allowed_domains so follow-up requests are not filtered out
        if "redirect_urls" in response.request.meta:
            original_url = response.request.meta["redirect_urls"][0]
            if not self.in_redirect.get(original_url):
                self.in_redirect[original_url] = True
                self.allowed_domains.append(original_url)
        return self.parse_data(response)

    def parse_data(self, response):

        """This function extracts data from the page."""

        # Archive the raw response to the WARC file
        self.warcHandler.write_response(response)

        pattern = self.pattern

        # Check if we are interested in the current page
        if (not response.request.headers.get('Referer')
                or re.search(pattern, self.ensure_not_null(response.meta.get('link_text')), re.IGNORECASE)
                or re.search(r"/(" + pattern + r")", self.ensure_not_null(response.url), re.IGNORECASE)):

            logging.debug("This page gets processed = %(url)s", {'url': response.url})

            sel = Selector(response)

            item = PyscrappItem()
            item['url'] = response.url

            return item
        else:
            logging.warning("This page does NOT get processed = %(url)s", {'url': response.url})
            return response.request

Remove or expand appropriately your "allowed_domains" variable and you should be fine. All the URLs the spider follows are, by default, restricted by allowed_domains.
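
As a minimal sketch (spider name and layout are illustrative, not the asker's code): if allowed_domains lists only the exact start hostname, the offsite middleware silently drops requests to any other host, so either omit it or list the bare domain so every subdomain stays in scope.

from scrapy.spiders import CrawlSpider

class ExampleSpider(CrawlSpider):
    name = "example"
    start_urls = ["http://www.accell-group.com/"]
    # "accell-group.com" also matches www.accell-group.com and other subdomains;
    # leaving allowed_domains unset disables the offsite filtering entirely.
    allowed_domains = ["accell-group.com"]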

EDIT: This case particularly concerns PDFs. PDF links are excluded by default: the LinkExtractor's deny_extensions parameter defaults to IGNORED_EXTENSIONS, and that list includes pdf.
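
You can confirm this from a Python shell; IGNORED_EXTENSIONS is a plain list shipped with Scrapy:

from scrapy.linkextractors import IGNORED_EXTENSIONS

# 'pdf' is in the default ignore list, so LinkExtractor never yields .pdf links
print('pdf' in IGNORED_EXTENSIONS)  # True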

To allow your application to crawl PDFs, all you have to do is exclude them from IGNORED_EXTENSIONS by setting the value of deny_extensions explicitly:

from scrapy.linkextractors import IGNORED_EXTENSIONS, LinkExtractor
from scrapy.spiders import Rule

self.rules = (
    Rule(
        LinkExtractor(
            deny=[r"accessdenied"],
            # Keep every default exclusion except .pdf
            deny_extensions=set(IGNORED_EXTENSIONS) - set(['pdf']),
        ),
        callback="parse_data",
        follow=True,
    ),
)

So, I'm afraid, this is the answer to the question "Why does Scrapy miss some links?". As you will likely see, it just opens the door to further questions, like "how do I handle those PDFs?", but I guess that is the subject of another question.
