
Google Cloud Datastore Python3 AttributeError on include

I wrote a Python 3 Scrapy spider that I have run to completion many times. Now I want to add Google Cloud Datastore functionality to it (reading from and writing to the datastore), so I followed the instructions on the Google docs page. However, as soon as I include the library in my script (without even calling a single function; just including it causes this), I get the following error:

AttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key'

I searched Stack Overflow and online, and the suggested fix was to upgrade the google protobuf package, as seen here:

How to solve "AttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key'"?

I tried the steps outlined there with no success; I still get the same error. Keep in mind I used the Python 3 version of pip (pip3) to attempt the protobuf upgrade. It says the package is already up to date, at version 3.13.0, and the version of Python being loaded is 3.8.2. Somebody else listed protoc --version as a command to run, but when I run it, protoc is not installed and the command is not found. Note that multiple comments state the correct, working version of protobuf to be 3.13.0, which pip3 already says I have. I've attached a screenshot of the call stack in case it means something to somebody.
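Since pip3 reports 3.13.0 but the error persists, one thing worth checking is which files the interpreter actually resolves these packages to; a stale copy elsewhere on sys.path can shadow an upgraded install. This is a diagnostic sketch (not from the original question) using only the standard library:

```python
import importlib.util

# Print the file each package actually resolves to for this interpreter.
# A path under an unexpected directory (e.g. an old system-wide install
# shadowing a user install) would explain a stale-protobuf symptom.
for name in ("google.protobuf", "google.cloud.datastore"):
    try:
        spec = importlib.util.find_spec(name)
        origin = spec.origin if spec else None
    except ModuleNotFoundError:
        origin = None
    print(name, "->", origin or "not found")
```

Running this with the same python3 that runs the spider shows whether both packages come from the site-packages directory pip3 upgraded.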


Edit: One thing I just tried: I ran the python3 interactive shell and typed "from google.cloud import datastore", and it worked without an error. Is this possibly a clue about where the issue lies? Also, when I issue the command pip3 install --upgrade google-cloud-datastore, it says all the requirements are already satisfied, including google protobuf >= 3.12.0, which is correct since I have 3.13.0. The issue still occurs, however.
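Since the import succeeds in the interactive shell but fails under Scrapy, the two contexts may be using different interpreters or search paths. A quick way to compare (a hypothetical diagnostic, run once in the shell and once at the top of the spider script) is:

```python
import sys

# Print which interpreter is running and where it looks for packages.
# Diffing this output between the interactive shell and the scrapy run
# reveals whether they resolve packages from different locations.
print("executable:", sys.executable)
print("version:", sys.version.split()[0])
for entry in sys.path:
    print("path:", entry)
```

If the two runs print different executables or different path lists, pip3 may be upgrading packages for an interpreter other than the one Scrapy uses.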

Edit 2: I also created a Windows 10 VM, and the include works fine there, also with python3/Anaconda. I may end up just doing my development within the VM if this is the case. It definitely seems like there's some kind of old protobuf file or dependency that is stale/wrong on my Ubuntu machine.

This problem magically resolved itself days later. As a previous commenter stated, it seemed to be a caching issue; the cache was cleaned up, and now the Scrapy web crawler works just fine.
