Can't get the generated next-page links to be crawled recursively
The crawler I created is fetching names and URLs from a web page. Now I have no idea how to make my crawler use the links generated by next_page to fetch data from the next page. I'm new to building crawlers with classes, which is why I can't think any further. I've made a small change to the code, but it neither produces any results nor throws any errors. Hoping someone will take a look at it.
import requests
from lxml import html

class wiseowl:
    def __init__(self,start_url):
        self.start_url=start_url
        self.storage=[]

    def crawl(self):
        self.get_link(self.start_url)

    def get_link(self,link):
        url="http://www.wiseowl.co.uk"
        response=requests.get(link)
        tree=html.fromstring(response.text)
        name=tree.xpath("//p[@class='woVideoListDefaultSeriesTitle']/a/text()")
        urls=tree.xpath("//p[@class='woVideoListDefaultSeriesTitle']/a/@href")
        docs=(name,urls)
        self.storage.append(docs)
        next_page=tree.xpath("//div[contains(concat(' ', @class, ' '), ' woPaging ')]//a[@class='woPagingItem']/@href")
        for npage in next_page:
            if npage is not None:
                self.get_link(url+npage)

    def __str__(self):
        return "{}".format(self.storage)

crawler=wiseowl("http://www.wiseowl.co.uk/videos/")
crawler.crawl()
for item in crawler.storage:
    print(item)
I modified some parts of your class; give this a try:
class wiseowl:
    def __init__(self,start_url):
        self.start_url=start_url
        self.links = [ self.start_url ]    # a list of links to crawl #
        self.storage=[]

    def crawl(self):
        for link in self.links :    # call get_link for every link in self.links #
            self.get_link(link)

    def get_link(self,link):
        print('Crawling: ' + link)
        url="http://www.wiseowl.co.uk"
        response=requests.get(link)
        tree=html.fromstring(response.text)
        name=tree.xpath("//p[@class='woVideoListDefaultSeriesTitle']/a/text()")
        urls=tree.xpath("//p[@class='woVideoListDefaultSeriesTitle']/a/@href")
        docs=(name,urls)
        #docs=(name, [url+u for u in urls]) # use this line if you want to join the urls #
        self.storage.append(docs)
        next_page=tree.xpath("//div[contains(concat(' ', @class, ' '), ' woPaging ')]//*[@class='woPagingItem' or @class='woPagingNext']/@href") # get links from 'woPagingItem' or 'woPagingNext' #
        for npage in next_page:
            if npage and url+npage not in self.links :    # don't get the same link twice #
                self.links += [ url+npage ]

    def __str__(self):
        return "{}".format(self.storage)

crawler=wiseowl("http://www.wiseowl.co.uk/videos/")
crawler.crawl()
for item in crawler.storage:
    item = zip(item[0], item[1])
    for i in item :
        print('{:60} {}'.format(i[0], i[1]))    # you can change 60 to the value you want #
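One thing worth noting about why this version works: a Python for loop over a list re-checks the list's length on every iteration, so links appended to self.links inside get_link are still picked up by the loop in crawl. A minimal sketch of that behaviour (the 'page1'/'page2'/'page3' items are made up for illustration):

worklist = ['page1']              # seed item, analogous to self.links
for item in worklist:
    print('processing', item)
    if item == 'page1':           # pretend this page links to two more pages
        worklist.append('page2')
        worklist.append('page3')
# prints page1, page2, page3 -- items appended mid-loop are still reached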
You should consider utilizing some kind of data structure to hold the links you've already visited (to avoid infinite loops), as well as a container for the links yet to be visited. In essence, crawling is a breadth-first search of the internet, so you should google breadth-first search to get a better grasp of the underlying algorithm.
Your crawler method should look something like this:
def crawler(self):
    while len(self.queue):
        curr_link = self.queue.pop(0)
        # process curr_link here -> scrape and add more links to queue
        # mark curr_link as visited
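As a concrete (untested) sketch of that breadth-first idea applied to the same site: the function name bfs_crawl, the max_pages safety cap, and the deque/set choices below are my own, not part of the answer above.

import requests
from lxml import html
from collections import deque

def bfs_crawl(start_url, base="http://www.wiseowl.co.uk", max_pages=50):
    queue = deque([start_url])   # links yet to be visited
    visited = set()              # links already processed; prevents infinite loops
    storage = []
    while queue and len(visited) < max_pages:
        link = queue.popleft()
        if link in visited:
            continue
        visited.add(link)
        tree = html.fromstring(requests.get(link).text)
        names = tree.xpath("//p[@class='woVideoListDefaultSeriesTitle']/a/text()")
        urls = tree.xpath("//p[@class='woVideoListDefaultSeriesTitle']/a/@href")
        storage.append((names, urls))
        # queue up any paging links we haven't seen yet
        for npage in tree.xpath("//div[contains(concat(' ', @class, ' '), ' woPaging ')]"
                                "//*[@class='woPagingItem' or @class='woPagingNext']/@href"):
            if base + npage not in visited:
                queue.append(base + npage)
    return storage

Using a set for visited makes the membership test O(1), and popleft on a deque keeps the traversal breadth-first (a list's pop(0) would also work, as in the pseudocode above, but costs O(n) per pop).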