
Using Scrapy on search engines using keywords in a file

I'm trying to use Scrapy to get a list of sites from search engines, based on keywords I have in a file.

Here is the error output from Scrapy:

Redirecting (301) to <GET https://duckduckgo.com/?q=> from <GET https://www.duckduckgo.com/?q=>
2014-07-18 16:23:39-0500 [wnd] DEBUG: Crawled (200) <GET https://duckduckgo.com/?q=> (referer: None)

Here is the code:

import re
import os
import sys
import json

from scrapy.spider import Spider
from scrapy.selector import Selector

searchstrings = "wnd.config"
searchoutcome = "searchResults.json"


class wndSpider(Spider):
    name = "wnd"
    allowed_domains = ['google.com']
    url_prefix = []
    #start_urls = ['https://www.google.com/search?q=']
    start_urls = ['https://www.duckduckgo.com/?q=']
    for line in open(searchstrings, 'r').readlines():
        url_prefix = start_urls[0] + line
        #url = url_prefix[0] + line


        #f = open(searchstrings
        #start_urls = [url_prefix]
        #for f in f.readlines():
        #f.close()


        def parse(self, response):
            sel = Selector(response)
            goog_search_list = sel.xpath('//h3/a/@href').extract()
        #goog_search_list = [re.search('q=(.*&sa',n).group(1) for n in goog_search_list]
        #if re.search('q=(.*)&sa',n)]
        #title = sel.xpath('//title/text()').extract()
        #if  len(title)>0: title = tilstle[0]
        #contents = sel.xpath('/html/head/meta[@name="description"]    /@content').extract()
        #if len(contents)>0: contents = contents[0]         

      ## dump output
        #with open(searchoutcome,  "w") as outfile:
           #json.dump(searchoutcome ,outfile, indent=4)

You need to append each URL to start_urls inside the for loop.

start_urls = []
base_url = 'https://www.duckduckgo.com/?q='
for line in open(searchstrings, 'r'):
    url = base_url + line.strip()
    start_urls.append(url)

If your keywords contain special characters, try urllib.urlencode.
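A minimal sketch of how that might look (the keyword here is a hypothetical example, not from your file; note that in Python 3 the function moved from `urllib.urlencode` to `urllib.parse.urlencode`):

```python
from urllib.parse import urlencode  # in Python 2: from urllib import urlencode

base_url = 'https://duckduckgo.com/'
keyword = 'C++ web scraping'  # hypothetical keyword read from the file

# urlencode percent-escapes special characters and joins key=value pairs
url = base_url + '?' + urlencode({'q': keyword})
print(url)  # https://duckduckgo.com/?q=C%2B%2B+web+scraping
```

This way a keyword like `C++` arrives at the search engine intact instead of being mangled in the query string.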
