
Scrapy spider not crawling the required pages

Here is the website link I am trying to crawl: http://search.epfoservices.in/est_search_display_result.php?pageNum_search=1&totalRows_search=72045&old_rg_id=AP&office_name=&pincode=&estb_code=&estb_name=&paging=paging Below is my scraper. As this is one of my first attempts at scraping, please pardon any silly mistakes. Kindly have a look and suggest any changes that would get my code running.

Items.py

import scrapy


class EpfoCrawl2Item(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    from scrapy.item import Item, Field
    S_No = Field()
    Old_region_code = Field()
    Region_code = Field()
    Name = Field()
    Address = Field()
    Pin = Field()
    Epfo_office = Field()
    Under_Ro = Field()
    Under_Acc = Field()
    Payment = Field()
    pass

epfocrawl1_spider.py

import scrapy
from scrapy.selector import HtmlXPathSelector


class EpfoCrawlSpider(scrapy.Spider):
"""Spider for regularly updated search.epfoservices.in"""
name = "PfData"
allowed_domains = ["search.epfoservices.in"]
starturls = ["http://search.epfoservices.in/est_search_display_result.php?pageNum_search=1&totalRows_search=72045&old_rg_id=AP&office_name=&pincode=&estb_code=&estb_name=&paging=paging"]

def parse(self,response):
    hxs = HtmlXPathSelector(response)
    rows = hxs.select('//tr"]')
    items = []
    for val in rows:
        item = Val()
        item['S_no'] = val.select('/td[0]/text()').extract()
        item['Old_region_code'] = val.select('/td[1]/text').extract()
        item['Region_code'] = val.select('/td[2]/text()').extract()
        item['Name'] = val.select('/td[3]/text()').extract()
        item['Address'] = val.select('/td[4]/text()').extract()
        item['Pin'] = val.select('/td[5]/text()').extract()
        item['Epfo_office'] = val.select('/td[6]/text()').extract()
        item['Under_ro'] = val.select('/td[7]/text()').extract()
        item['Under_Acc'] = val.select('/td[8]/text()').extract()
        item['Payment'] = val.select('a/@href').extract()
        items.append(item)
        yield items

And below is the log after running "scrapy crawl PfData"

2016-05-25 13:45:11+0530 [scrapy] INFO: Enabled item pipelines: 
2016-05-25 13:45:11+0530 [PfData] INFO: Spider opened
2016-05-25 13:45:11+0530 [PfData] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-05-25 13:45:11+0530 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-05-25 13:45:11+0530 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2016-05-25 13:45:11+0530 [PfData] INFO: Closing spider (finished)
2016-05-25 13:45:11+0530 [PfData] INFO: Dumping Scrapy stats:
    {'finish_reason': 'finished',
     'finish_time': datetime.datetime(2016, 5, 25, 8, 15, 11, 343313),
     'log_count/DEBUG': 2,
     'log_count/INFO': 7,
     'start_time': datetime.datetime(2016, 5, 25, 8, 15, 11, 341872)}
2016-05-25 13:45:11+0530 [PfData] INFO: Spider closed (finished)

Suggestions are requested.

The list of starting URLs must be named start_urls, not starturls.
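
Once that attribute is renamed, Scrapy will actually request the page and call parse. For reference, here is a minimal corrected sketch of the spider. The items import path (epfo_crawl2.items) and the assumption that each establishment is one <tr> with columns in the listed order are guesses; check them against your project layout and the actual page. Besides the start_urls rename, it instantiates EpfoCrawl2Item instead of the undefined Val, uses the field names declared in Items.py, uses 1-based relative XPath positions (td[1], td[2], ...) instead of absolute /td[0] paths, and yields each item instead of the growing items list.

import scrapy

from epfo_crawl2.items import EpfoCrawl2Item  # assumed module path; adjust to your project


class EpfoCrawlSpider(scrapy.Spider):
    """Spider for regularly updated search.epfoservices.in"""
    name = "PfData"
    allowed_domains = ["search.epfoservices.in"]
    # Must be start_urls (with underscores), otherwise Scrapy schedules no requests
    start_urls = [
        "http://search.epfoservices.in/est_search_display_result.php?pageNum_search=1&totalRows_search=72045&old_rg_id=AP&office_name=&pincode=&estb_code=&estb_name=&paging=paging"
    ]

    def parse(self, response):
        # Iterate over the result table rows; narrow this XPath if the page has header rows
        for row in response.xpath('//tr'):
            item = EpfoCrawl2Item()
            # XPath positions are 1-based, and paths are relative to the current row
            item['S_No'] = row.xpath('td[1]/text()').extract()
            item['Old_region_code'] = row.xpath('td[2]/text()').extract()
            item['Region_code'] = row.xpath('td[3]/text()').extract()
            item['Name'] = row.xpath('td[4]/text()').extract()
            item['Address'] = row.xpath('td[5]/text()').extract()
            item['Pin'] = row.xpath('td[6]/text()').extract()
            item['Epfo_office'] = row.xpath('td[7]/text()').extract()
            item['Under_Ro'] = row.xpath('td[8]/text()').extract()
            item['Under_Acc'] = row.xpath('td[9]/text()').extract()
            item['Payment'] = row.xpath('td/a/@href').extract()
            yield item  # yield one item per row instead of the accumulated list

With this in place, running "scrapy crawl PfData -o output.csv" should start emitting one item per table row.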
