
Can't get the crawler to follow the generated next-page links recursively

The crawler I have created fetches names and URLs from a webpage. Now, I can't figure out how to make my crawler use the links collected in next_page to fetch data from the following pages. I'm very new to building a crawler with a class, which is why I can't think my way past this point. I've already tried a slight twist in my code, but it neither brings any result nor throws any error. I hope somebody will take a look at it.

import requests
from lxml import html

class wiseowl:
    def __init__(self,start_url):
        self.start_url=start_url
        self.storage=[]

    def crawl(self):
        self.get_link(self.start_url)

    def get_link(self,link):
        url="http://www.wiseowl.co.uk"
        response=requests.get(link)
        tree=html.fromstring(response.text)
        name=tree.xpath("//p[@class='woVideoListDefaultSeriesTitle']/a/text()")
        urls=tree.xpath("//p[@class='woVideoListDefaultSeriesTitle']/a/@href")
        docs=(name,urls)
        self.storage.append(docs)

        next_page=tree.xpath("//div[contains(concat(' ', @class, ' '), ' woPaging ')]//a[@class='woPagingItem']/@href")
        for npage in next_page:
            if npage is not None:
                self.get_link(url+npage)


    def __str__(self):
        return "{}".format(self.storage)


crawler=wiseowl("http://www.wiseowl.co.uk/videos/")
crawler.crawl()
for item in crawler.storage:
    print(item)

I modified some parts of your class; give it a try:

import requests
from lxml import html

class wiseowl:
    def __init__(self, start_url):
        self.start_url = start_url
        self.links = [self.start_url]    # a list of links to crawl
        self.storage = []

    def crawl(self):
        for link in self.links:    # call get_link for every link in self.links
            self.get_link(link)

    def get_link(self, link):
        print('Crawling: ' + link)
        url = "http://www.wiseowl.co.uk"
        response = requests.get(link)
        tree = html.fromstring(response.text)
        name = tree.xpath("//p[@class='woVideoListDefaultSeriesTitle']/a/text()")
        urls = tree.xpath("//p[@class='woVideoListDefaultSeriesTitle']/a/@href")
        docs = (name, urls)
        #docs = (name, [url + u for u in urls])    # use this line if you want to join the urls
        self.storage.append(docs)
        next_page = tree.xpath("//div[contains(concat(' ', @class, ' '), ' woPaging ')]//*[@class='woPagingItem' or @class='woPagingNext']/@href")    # get links from 'woPagingItem' or 'woPagingNext'
        for npage in next_page:
            if npage and url + npage not in self.links:    # don't queue the same link twice
                self.links.append(url + npage)

    def __str__(self):
        return "{}".format(self.storage)

crawler = wiseowl("http://www.wiseowl.co.uk/videos/")
crawler.crawl()
for item in crawler.storage:
    for name, url in zip(item[0], item[1]):
        print('{:60} {}'.format(name, url))    # you can change 60 to the width you want
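A note on the design: crawl iterates over self.links while get_link appends new pages to it. Since Python's for loop walks a list by index, links appended during the loop are still visited, so the list doubles as a simple work queue; the url + npage not in self.links check is what keeps the same page from being queued twice and the crawl from looping forever.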

You should think about using some kind of data structure to hold both the links you have already visited (to avoid infinite loops) and the links you have yet to visit. Crawling is essentially a breadth-first search of the internet, so it's worth reading up on breadth-first search to understand the underlying algorithm.

  1. Implement a queue for the links you need to visit. Every time you visit a link, scrape the page for all links and enqueue each one.
  2. Implement a set (or a dictionary) in Python to track which links have already been visited; if a link has been visited, do not enqueue it again.
  3. Your crawler method should be something like the snippet below:

     def crawler(self):
         while len(self.queue):
             curr_link = self.queue.pop(0)
             # process curr_link here -> scrape and add more links to the queue
             # mark curr_link as visited
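Putting those three points together, here is a minimal sketch of such a breadth-first crawler against the same site, assuming collections.deque for the queue and a plain set for the visited links; the bfs_crawl name and the max_pages cap are my own additions for the example, not part of the original code:

import requests
from lxml import html
from collections import deque

def bfs_crawl(start_url, max_pages=50):
    base = "http://www.wiseowl.co.uk"
    queue = deque([start_url])    # links yet to visit
    visited = set()               # links already processed
    storage = []
    while queue and len(visited) < max_pages:
        link = queue.popleft()    # FIFO pop gives breadth-first order
        if link in visited:
            continue              # skip links that were enqueued twice
        visited.add(link)
        tree = html.fromstring(requests.get(link).text)
        names = tree.xpath("//p[@class='woVideoListDefaultSeriesTitle']/a/text()")
        urls = tree.xpath("//p[@class='woVideoListDefaultSeriesTitle']/a/@href")
        storage.append((names, urls))
        # scrape the paging links and enqueue any we have not seen yet
        for href in tree.xpath("//div[contains(concat(' ', @class, ' '), ' woPaging ')]//a/@href"):
            if base + href not in visited:
                queue.append(base + href)
    return storage

for names, urls in bfs_crawl("http://www.wiseowl.co.uk/videos/"):
    print(names, urls)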
