Scrapy: how to store url_id along with the crawled data

from scrapy import Spider, Request
from selenium import webdriver

class MySpider(Spider):
    name = "my_spider"

    def __init__(self):
        # Selenium Chrome driver with a generous page-load timeout
        self.browser = webdriver.Chrome(executable_path='E:/chromedriver')
        self.browser.set_page_load_timeout(100)

    def closed(self, spider):
        print("spider closed")
        self.browser.close()

    def start_requests(self):
        # each line of target_urls.txt holds a url_id and a url,
        # separated by a double tab
        start_urls = []
        with open("target_urls.txt", 'r', encoding='utf-8') as f:
            for line in f:
                url_id, url = line.split('\t\t')
                start_urls.append(url)

        for url in start_urls:
            yield Request(url=url, callback=self.parse)

    def parse(self, response):
        yield {
            'target_url': response.url,
            'comments': response.xpath('//div[@class="comments"]//em//text()').extract()
        }

The above is my Scrapy code. I run the crawler with scrapy crawl my_spider -o comments.json.

As you may notice, each of my urls has a unique url_id associated with it. How can I match each crawled result with its url_id? Ideally, I would like to store the url_id in the output yielded to comments.json.
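For reference, since the code splits each line on a double tab, target_urls.txt is presumably shaped like this (the ids and URLs below are made-up placeholders, not from the question):

url_id_1		http://www.example.com/page1
url_id_2		http://www.example.com/page2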

Thanks a lot in advance!

Try passing the meta argument, for example. I have updated your code a bit:

def start_requests(self):
    with open("target_urls.txt", 'r', encoding='utf-8') as f:
        for line in f:
            url_id, url = line.split('\t\t')
            # carry the id and the original url along with the request
            yield Request(url, self.parse, meta={'url_id': url_id, 'original_url': url})

def parse(self, response):
    # read the values back from response.meta
    yield {
        'target_url': response.meta['original_url'],
        'url_id': response.meta['url_id'],
        'comments': response.xpath('//div[@class="comments"]//em//text()').extract()
    }
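With the meta values carried through, each record in comments.json should then include the id; an illustrative entry (values are made up):

{"target_url": "http://www.example.com/page1", "url_id": "url_id_1", "comments": ["first comment", "second comment"]}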

To answer the question and the comment, try something like this:

from scrapy import Spider, Request
from selenium import webdriver

class MySpider(Spider):
    name = "my_spider"

    def __init__(self):
        self.browser = webdriver.Chrome(executable_path='E:/chromedriver')
        self.browser.set_page_load_timeout(100)

    def closed(self, spider):
        print("spider closed")
        self.browser.close()

    def start_requests(self):
        with open("target_urls.txt", 'r', encoding='utf-8') as f:
            for line in f:
                url_id, url = line.split('\t\t')
                # pass both values to the callback through meta
                yield Request(url=url, callback=self.parse, meta={'url_id': url_id, 'url': url})

    def parse(self, response):
        yield {
            'target_url': response.meta['url'],
            'comments': response.xpath('//div[@class="comments"]//em//text()').extract(),
            'url_id': response.meta['url_id']
        }

As mentioned in the previous answer, you can use META (http://scrapingauthority.com/scrapy-meta) to pass arguments between the various methods.
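As a side note, not from the original answers: in newer Scrapy releases (1.7+), cb_kwargs is the recommended way to pass values to a callback, since it keeps meta free for Scrapy's own components. A minimal sketch of the same idea:

def start_requests(self):
    with open("target_urls.txt", 'r', encoding='utf-8') as f:
        for line in f:
            url_id, url = line.split('\t\t')
            # cb_kwargs delivers these as keyword arguments to parse()
            yield Request(url, callback=self.parse,
                          cb_kwargs={'url_id': url_id, 'original_url': url})

def parse(self, response, url_id, original_url):
    yield {
        'target_url': original_url,
        'url_id': url_id,
        'comments': response.xpath('//div[@class="comments"]//em//text()').extract()
    }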
