

ModuleNotFoundError: No module named 'scrapy_user_agents'

I tried to use scrapy_user_agents together with scrapy-proxy-pool.

I added these lines in my settings.py:

    DOWNLOADER_MIDDLEWARES = {
        'scrapy_proxy_pool.middlewares.ProxyPoolMiddleware': 610,
        'scrapy_proxy_pool.middlewares.BanDetectionMiddleware': 620,
        'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
        'scrapy_user_agents.middlewares.RandomUserAgentMiddleware': 700,
    }
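Before wiring a third-party middleware into `DOWNLOADER_MIDDLEWARES`, it can help to confirm that its top-level package is importable from the same interpreter that runs the spider, since Scrapy resolves these dotted paths with a plain import at startup. A minimal sketch (the `module_available` helper is mine, not part of Scrapy):

```python
import importlib.util

def module_available(name: str) -> bool:
    # find_spec returns None when the top-level module cannot be found,
    # which is exactly the condition that raises ModuleNotFoundError
    # when Scrapy later tries to import the middleware class.
    return importlib.util.find_spec(name) is not None

# Should print True in an environment where the package is installed:
print(module_available("scrapy_user_agents"))
```

Running this inside the project's virtualenv tells you immediately whether the error comes from the settings or from the environment.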

When I run my spider, I get this error message:

ModuleNotFoundError: No module named 'scrapy_user_agents'

I removed the proxy lines from the middleware settings, but I get the same issue and the same error message.

You will find the complete error log below:

    2019-08-13 16:05:28 [scrapy.utils.log] INFO: Scrapy 1.7.3 started (bot: scraping_entreprises)
    2019-08-13 16:05:28 [scrapy.utils.log] INFO: Versions: lxml 4.4.1.0, libxml2 2.9.5, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 19.7.0, Python 3.7.4 (tags/v3.7.4:e09359112e, Jul 8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1c 28 May 2019), cryptography 2.7, Platform Windows-10-10.0.17134-SP0
    2019-08-13 16:05:28 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'scraping_entreprises', 'NEWSPIDER_MODULE': 'scraping_entreprises.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['scraping_entreprises.spiders']}
    2019-08-13 16:05:28 [scrapy.extensions.telnet] INFO: Telnet Password: 0a7932c1a3ce188f
    2019-08-13 16:05:28 [scrapy.middleware] INFO: Enabled extensions:
    ['scrapy.extensions.corestats.CoreStats',
     'scrapy.extensions.telnet.TelnetConsole',
     'scrapy.extensions.logstats.LogStats']
    Unhandled error in Deferred:
    2019-08-13 16:05:29 [twisted] CRITICAL: Unhandled error in Deferred:

    Traceback (most recent call last):
      File "c:\users\nino\pycharmprojects\scraping\venv\lib\site-packages\scrapy\crawler.py", line 184, in crawl
        return self._crawl(crawler, *args, **kwargs)
      File "c:\users\nino\pycharmprojects\scraping\venv\lib\site-packages\scrapy\crawler.py", line 188, in _crawl
        d = crawler.crawl(*args, **kwargs)
      File "c:\users\nino\pycharmprojects\scraping\venv\lib\site-packages\twisted\internet\defer.py", line 1613, in unwindGenerator
        return _cancellableInlineCallbacks(gen)
      File "c:\users\nino\pycharmprojects\scraping\venv\lib\site-packages\twisted\internet\defer.py", line 1529, in _cancellableInlineCallbacks
        _inlineCallbacks(None, g, status)
    --- <exception caught here> ---
      File "c:\users\nino\pycharmprojects\scraping\venv\lib\site-packages\twisted\internet\defer.py", line 1418, in _inlineCallbacks
        result = g.send(result)
      File "c:\users\nino\pycharmprojects\scraping\venv\lib\site-packages\scrapy\crawler.py", line 86, in crawl
        self.engine = self._create_engine()
      File "c:\users\nino\pycharmprojects\scraping\venv\lib\site-packages\scrapy\crawler.py", line 111, in _create_engine
        return ExecutionEngine(self, lambda _: self.stop())
      File "c:\users\nino\pycharmprojects\scraping\venv\lib\site-packages\scrapy\core\engine.py", line 69, in __init__
        self.downloader = downloader_cls(crawler)
      File "c:\users\nino\pycharmprojects\scraping\venv\lib\site-packages\scrapy\core\downloader\__init__.py", line 86, in __init__
        self.middleware = DownloaderMiddlewareManager.from_crawler(crawler)
      File "c:\users\nino\pycharmprojects\scraping\venv\lib\site-packages\scrapy\middleware.py", line 53, in from_crawler
        return cls.from_settings(crawler.settings, crawler)
      File "c:\users\nino\pycharmprojects\scraping\venv\lib\site-packages\scrapy\middleware.py", line 34, in from_settings
        mwcls = load_object(clspath)
      File "c:\users\nino\pycharmprojects\scraping\venv\lib\site-packages\scrapy\utils\misc.py", line 46, in load_object
        mod = import_module(module)
      File "C:\Users\Nino\AppData\Local\Programs\Python\Python37\lib\importlib\__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
      File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
      File "<frozen importlib._bootstrap>", line 983, in _find_and_load
      File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
      File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
      File "<frozen importlib._bootstrap>", line 983, in _find_and_load
      File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
    builtins.ModuleNotFoundError: No module named 'scrapy_user_agents'

    2019-08-13 16:05:29 [twisted] CRITICAL:
    Traceback (most recent call last):
      File "c:\users\nino\pycharmprojects\scraping\venv\lib\site-packages\twisted\internet\defer.py", line 1418, in _inlineCallbacks
        result = g.send(result)
      File "c:\users\nino\pycharmprojects\scraping\venv\lib\site-packages\scrapy\crawler.py", line 86, in crawl
        self.engine = self._create_engine()
      File "c:\users\nino\pycharmprojects\scraping\venv\lib\site-packages\scrapy\crawler.py", line 111, in _create_engine
        return ExecutionEngine(self, lambda _: self.stop())
      File "c:\users\nino\pycharmprojects\scraping\venv\lib\site-packages\scrapy\core\engine.py", line 69, in __init__
        self.downloader = downloader_cls(crawler)
      File "c:\users\nino\pycharmprojects\scraping\venv\lib\site-packages\scrapy\core\downloader\__init__.py", line 86, in __init__
        self.middleware = DownloaderMiddlewareManager.from_crawler(crawler)
      File "c:\users\nino\pycharmprojects\scraping\venv\lib\site-packages\scrapy\middleware.py", line 53, in from_crawler
        return cls.from_settings(crawler.settings, crawler)
      File "c:\users\nino\pycharmprojects\scraping\venv\lib\site-packages\scrapy\middleware.py", line 34, in from_settings
        mwcls = load_object(clspath)
      File "c:\users\nino\pycharmprojects\scraping\venv\lib\site-packages\scrapy\utils\misc.py", line 46, in load_object
        mod = import_module(module)
      File "C:\Users\Nino\AppData\Local\Programs\Python\Python37\lib\importlib\__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
      File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
      File "<frozen importlib._bootstrap>", line 983, in _find_and_load
      File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
      File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
      File "<frozen importlib._bootstrap>", line 983, in _find_and_load
      File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
    ModuleNotFoundError: No module named 'scrapy_user_agents'

Try uninstalling and reinstalling the module to make sure it is installed for the version of Python you are running.

    pip uninstall ModuleName
    pip install ModuleName
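A frequent cause of this error is that pip installed the package into a different interpreter than the one PyCharm uses to run the spider (for example, a system Python instead of the project venv). A quick sketch to see which interpreter is actually running, so you can point pip at the same one:

```python
import sys

# The interpreter that will run "scrapy crawl"; pip must install into
# this environment's site-packages, not a different system Python.
print(sys.executable)

# Running pip as a module of this exact interpreter avoids the mismatch:
#   python -m pip install scrapy-user-agents
# (the PyPI package name is scrapy-user-agents; it is imported in
# settings.py as scrapy_user_agents)
```

If the printed path is not inside your project's `venv`, activate the venv (or fix the PyCharm interpreter setting) before reinstalling.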
