I have made a Scrapy crawler that goes to this site: https://www.cartoon3rbi.net/cats.html
It then, by the second rule, opens the link to every show and gets its title in the `title_parse` method, and, by the third rule, opens every episode's link and gets its name. It works fine; I just need to know how I can make a separate CSV file for each show's episode names, with the title extracted in `title_parse` used as the name of the CSV file. Any suggestions?
# -*- coding: utf-8 -*-
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class FfySpider(CrawlSpider):
    custom_settings = {
        'CONCURRENT_REQUESTS': 1
    }
    name = 'FFy'
    allowed_domains = ['cartoon3rbi.net']
    start_urls = ['https://www.cartoon3rbi.net/cats.html']

    rules = (
        Rule(LinkExtractor(restrict_xpaths='//div[@class="pagination"]/a[last()]'), follow=True),
        Rule(LinkExtractor(restrict_xpaths='//div[@class="cartoon_cat"]'), callback='title_parse', follow=True),
        Rule(LinkExtractor(restrict_xpaths='//div[@class="cartoon_eps_name"]'), callback='parse_item', follow=True),
    )

    def title_parse(self, response):
        title = response.xpath('//div[@class="sidebar_title"][1]/text()').extract()

    def parse_item(self, response):
        for el in response.xpath('//div[@id="topme"]'):
            yield {
                'name': el.xpath('//div[@class="block_title"]/text()').extract_first()
            }
Assuming you have the titles stored in a list `titles` and the respective contents stored in a list `contents`, you could call the following custom function `write_to_csv(title, content)` each time to write the content to a file and save it under the name `<title>.csv`.
def write_to_csv(title, content=''):
    # if no content is provided,
    # it creates an empty csv file
    with open(title + '.csv', 'w') as f:
        f.write(content)

for content, title in zip(contents, titles):
    write_to_csv(title, content)
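Since each show has many episodes, and since a scraped title may contain characters that are illegal in filenames, it is safer to write one row per episode with the `csv` module and sanitize the filename first. Here is a minimal sketch of that idea; the `shows` dict, `sanitize`, and `write_show_csv` are illustrative names of my own, standing in for whatever structure you collect the titles and episode names into:

```python
import csv
import os
import re


def sanitize(title):
    # Replace characters that are illegal in filenames with underscores.
    return re.sub(r'[\\/:*?"<>|]', '_', title).strip()


def write_show_csv(title, episodes, out_dir='.'):
    # One CSV per show, one episode name per row.
    path = os.path.join(out_dir, sanitize(title) + '.csv')
    with open(path, 'w', newline='', encoding='utf-8') as f:
        writer = csv.writer(f)
        writer.writerow(['name'])  # header row, matching the item key
        for episode in episodes:
            writer.writerow([episode])
    return path


# Hypothetical scraped data: show title -> list of episode names.
shows = {
    'Show A': ['Episode 1', 'Episode 2'],
    'Show B': ['Episode 1'],
}
for title, episodes in shows.items():
    write_show_csv(title, episodes)
```

Using `csv.writer` instead of writing a raw string also takes care of quoting episode names that themselves contain commas or quotes.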