scrapy: Remove some elements from an xpath selector
I am using Scrapy to scrape a website with some odd formatting conventions. The basic idea is that I want all the text and sub-elements of a certain div, excluding a few divs in the middle. Here is the markup:
<div align="center" class="article"><!--wanted-->
<img src="http://i.imgur.com/12345.jpg" width="500" alt="abcde" title="abcde"><br><br>
<div style="text-align:justify"><!--wanted-->
Sample Text<br><br>Demo: <a href="http://www.example.com/?http://example.com/item/asash/asdas-asfasf-afaf.html" target="_blank">http://example.com/dfa/asfa/aasfa</a><br><br>
<div class="quote"><!--wanted-->
http://www.coolfiles.ro/download/kleo13.rar/1098750<br>http://www.ainecreator.com/files/0MKOGM6D/kleo13.rar_links<br>
</div>
<br>
<div align="left"><!--not wanted-->
<div id="ratig-layer-2249"><!--not wanted-->
<div class="rating"><!--not wanted-->
<ul class="unit-rating">
<li class="current-rating" style="width:80%;">80</li>
<li><a href="#" title="Bad" class="r1-unit" onclick="doRate('1', '2249'); return false;">1</a></li>
<li><a href="#" title="Poor" class="r2-unit" onclick="doRate('2', '2249'); return false;">2</a></li>
<li><a href="#" title="Fair" class="r3-unit" onclick="doRate('3', '2249'); return false;">3</a></li>
<li><a href="#" title="Good" class="r4-unit" onclick="doRate('4', '2249'); return false;">4</a></li>
<li><a href="#" title="Excellent" class="r5-unit" onclick="doRate('5', '2249'); return false;">5</a></li>
</ul>
</div>
(votes: <span id="vote-num-id-2249">3</span>)
</div>
</div>
<div class="reln"><!--not wanted-->
<strong>
<h4>Related News:</h4>
</strong>
<li><a href="http://www.example.com/themes/tf/a-b-c-d.html">1</a></li>
<li><a href="http://www.example.com/plugins/codecanyon/a-b-c-d">2</a></li>
<li><a href="http://www.example.com/themes/tf/a-b-c-d.html">3</a></li>
<li><a href="http://www.example.com/plugins/codecanyon/a-b-c-d.html">4</a></li>
<li><a href="http://www.example.com/plugins/codecanyon/a-b-c-d.html">5</a></li>
</div>
</div>
</div>
The final output should look like this:
<div align="center" class="article"><!--wanted-->
<img src="http://i.imgur.com/12345.jpg" width="500" alt="abcde" title="abcde"><br><br>
<div style="text-align:justify"><!--wanted-->
Sample Text<br><br>Demo: <a href="http://www.example.com/?http://example.com/item/asash/asdas-asfasf-afaf.html" target="_blank">http://example.com/dfa/asfa/aasfa</a><br><br>
<div class="quote"><!--wanted-->
http://www.coolfiles.ro/download/kleo13.rar/1098750<br>http://www.ainecreator.com/files/0MKOGM6D/kleo13.rar_links<br>
</div>
<br>
</div>
</div>
Here is the relevant part of my Scrapy code. Please suggest what to add to this script:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from isbullshit.items import IsBullshitItem

class IsBullshitSpider(CrawlSpider):
    """ General configuration of the Crawl Spider """
    name = 'isbullshitwp'
    start_urls = ['http://example.com/themes']  # urls from which the spider will start crawling
    rules = [Rule(SgmlLinkExtractor(allow=[r'page/\d+']), follow=True),
             # r'page/\d+' : regular expression for http://example.com/page/X URLs
             Rule(SgmlLinkExtractor(allow=[r'\w+']), callback='parse_blogpost')]
             # r'\d{4}/\d{2}/\w+' : regular expression for http://example.com/YYYY/MM/title URLs

    def parse_blogpost(self, response):
        hxs = HtmlXPathSelector(response)
        item = IsBullshitItem()
        item['title'] = hxs.select('//span[@class="storytitle"]/text()').extract()[0]
        item['article_html'] = hxs.select("//div[@class='article']").extract()[0]
        return item
These are the XPath expressions I have tried without getting the desired result:
item['article_html'] = hxs.select("//div[@class='article']").extract()[0]
item['article_html'] = hxs.select("//div[@class='article']/following::node() [not(preceding::div[@class='reln']) and not(@class='reln')]").extract()[0]
item['article_html'] = hxs.select("//div[@class='article']/div[@class='reln']/preceding-sibling::node()[preceding-sibling::div[@class='quote']]").extract()[0]
item['article_html'] = hxs.select("//div[@class='article']/following::node() [not(preceding::div[@class='reln'])]").extract()[0]
item['article_html'] = hxs.select("//div[@class='article']/div[@class='quote']/*[not(self::div[@class='reln'])]").extract()[0]
item['article_html'] = hxs.select("//div[@class='article']/*[(self::name()='reln'])]").extract()[0]
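The underlying problem is that XPath can only select nodes that already exist in the tree; it cannot return `div.article` with some of its descendants deleted, which is why the attempts above yield either the whole div or disconnected fragments. One workaround is to select only the wanted child nodes in Python and re-join their serialized markup. A minimal sketch of that idea, using a simplified, well-formed stand-in for the real markup and the standard library's `xml.etree.ElementTree` in place of Scrapy's selector:

```python
import xml.etree.ElementTree as ET

# Simplified, well-formed stand-in for the article markup (illustrative only;
# the real page nests the unwanted divs more deeply).
HTML = """<div class="article">
  <img src="http://i.imgur.com/12345.jpg" />
  <div style="text-align:justify">inner content</div>
  <div align="left">rating widget</div>
  <div class="reln">related news</div>
</div>"""

article = ET.fromstring(HTML)

# Keep only the direct children that are not the unwanted divs,
# then re-join their serialized markup into one HTML string.
wanted = [
    child for child in article
    if not (child.get("class") == "reln" or child.get("align") == "left")
]
article_html = "".join(ET.tostring(child, encoding="unicode") for child in wanted)
```

This only filters direct children; for unwanted divs buried deeper in the tree, removing them from a parsed tree (as in the accepted answer below a node-removal helper) is the more general approach.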
Thanks in advance...
It seems you cannot do this with Scrapy's selectors alone. I use my own helper function to remove specific nodes (and their children) from the parsed tree before extracting:
def removeNode(context, selectorsToRemove):
    # For each CSS selector, drop every matching element (and its subtree)
    # from the underlying lxml tree, then serialize what is left.
    for selector in selectorsToRemove:
        for match in context.css(selector):
            node = match.root
            node.getparent().remove(node)
    return context.extract()
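To see the same remove-then-serialize technique without Scrapy or lxml installed, here is a self-contained sketch using only the standard library. `xml.etree.ElementTree` has no `getparent()`, so a child-to-parent map is built first; the markup is a simplified, well-formed stand-in for the real page:

```python
import xml.etree.ElementTree as ET

# Simplified, well-formed stand-in for the article markup (illustrative only).
HTML = """<div class="article">
  <div style="text-align:justify">
    Sample Text
    <div class="quote">download links here</div>
    <div align="left"><div class="rating">80</div></div>
    <div class="reln"><h4>Related News:</h4></div>
  </div>
</div>"""

def remove_nodes(root, predicate):
    """Remove every element matching predicate, keeping the rest intact.
    ElementTree lacks getparent(), so map each child to its parent first."""
    parents = {child: parent for parent in root.iter() for child in parent}
    for el in list(root.iter()):
        if predicate(el) and el in parents:
            parents[el].remove(el)
    return root

root = ET.fromstring(HTML)
remove_nodes(root, lambda el: el.get("class") == "reln" or el.get("align") == "left")
result = ET.tostring(root, encoding="unicode")
```

With Scrapy itself, a call along the lines of `removeNode(response, ['div.reln', 'div[align="left"]'])` inside `parse_blogpost` (selectors hypothetical, adjust to the real page) would yield the cleaned `article_html`.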
Hope this helps.