Google Python cloud-dataflow instances broke without new deployment (failed pubsub import)
I have defined a few different Cloud Dataflow jobs for Python in the Google AppEngine Flex Environment. I have defined my requirements in a requirements.txt file, included my setup.py file, and everything was working just fine.
My last deployment was on May 3rd, 2018. Looking through logs, I see that one of my jobs began failing on May 22nd, 2018. The job fails with a stack trace resulting from a bad import, seen below.
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 582, in do_work
work_executor.execute()
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py", line 166, in execute
op.start()
File "apache_beam/runners/worker/operations.py", line 294, in apache_beam.runners.worker.operations.DoOperation.start (apache_beam/runners/worker/operations.c:10607)
def start(self):
File "apache_beam/runners/worker/operations.py", line 295, in apache_beam.runners.worker.operations.DoOperation.start (apache_beam/runners/worker/operations.c:10501)
with self.scoped_start_state:
File "apache_beam/runners/worker/operations.py", line 300, in apache_beam.runners.worker.operations.DoOperation.start (apache_beam/runners/worker/operations.c:9702)
pickler.loads(self.spec.serialized_fn))
File "/usr/local/lib/python2.7/dist-packages/apache_beam/internal/pickler.py", line 225, in loads
return dill.loads(s)
File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 277, in loads
return load(file)
File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 266, in load
obj = pik.load()
File "/usr/lib/python2.7/pickle.py", line 858, in load
dispatch[key](self)
File "/usr/lib/python2.7/pickle.py", line 1090, in load_global
klass = self.find_class(module, name)
File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 423, in find_class
return StockUnpickler.find_class(self, module, name)
File "/usr/lib/python2.7/pickle.py", line 1124, in find_class
__import__(module)
File "/usr/local/lib/python2.7/dist-packages/dataflow_pipeline/tally_overages.py", line 27, in <module>
from google.cloud import pubsub
File "/usr/local/lib/python2.7/dist-packages/google/cloud/pubsub.py", line 17, in <module>
from google.cloud.pubsub_v1 import PublisherClient
File "/usr/local/lib/python2.7/dist-packages/google/cloud/pubsub_v1/__init__.py", line 17, in <module>
from google.cloud.pubsub_v1 import types
File "/usr/local/lib/python2.7/dist-packages/google/cloud/pubsub_v1/types.py", line 26, in <module>
from google.iam.v1.logging import audit_data_pb2
ImportError: No module named logging
So the main issue seems to come from the pubsub dependency relying on importing google.iam.v1.logging, which is installed from grpc-google-iam-v1.
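A minimal sketch (my own addition, not part of the original question) to pinpoint the broken link: try each module in the failing import chain from the traceback, deepest first, and report which ones are actually missing.
import importlib

# Probe the import chain from the traceback, deepest module first; per
# the diagnosis above, the pubsub import only fails because
# google.iam.v1.logging is missing.
for mod in ("google.iam.v1.logging",
            "google.cloud.pubsub_v1.types",
            "google.cloud.pubsub"):
    try:
        importlib.import_module(mod)
        print("OK      %s" % mod)
    except ImportError as err:
        print("FAILED  %s (%s)" % (mod, err))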
Here is my requirements.txt file:
Flask==0.12.2
apache-beam[gcp]==2.1.1
gunicorn==19.7.1
google-cloud-dataflow==2.1.1
google-cloud-datastore==1.3.0
pytz
google-cloud-pubsub
google-gax
grpc-google-iam-v1
googleapis-common-protos
google-cloud==0.32
six==1.10.0
protobuf
I am able to run everything locally just fine by doing the following from my project:
$ virtualenv --no-site-packages .
$ . bin/activate
$ pip install --ignore-installed -r requirements.txt
$ python main.py
No handlers could be found for logger "oauth2client.contrib.multistore_file"
INFO:werkzeug: * Running on http://0.0.0.0:8080/ (Press CTRL+C to quit)
INFO:werkzeug: * Restarting with stat
No handlers could be found for logger "oauth2client.contrib.multistore_file"
WARNING:werkzeug: * Debugger is active!
INFO:werkzeug: * Debugger PIN: 317-820-645
Specifically, I am able to do the following locally just fine:
$ python
>>> from google.cloud import pubsub
>>> import google.iam.v1.logging
>>> google.iam.v1.logging.__file__
'/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/google/iam/v1/logging/__init__.pyc'
So I know that the installation of the grpc-google-iam-v1 package is working just fine locally; the required files are there.
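The same check can be made programmatically; here is a small sketch of my own (not from the original post) that prints where google.iam.v1 landed and whether its logging/ subpackage directory exists next to it, which is useful for comparing a local install against a Flex instance:
import os
import google.iam.v1

# Locate the installed google.iam.v1 package and check whether the
# logging/ subdirectory (shipped by grpc-google-iam-v1) is present.
pkg_dir = os.path.dirname(google.iam.v1.__file__)
print(pkg_dir)
print(os.path.isdir(os.path.join(pkg_dir, "logging")))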
My questions are: Is grpc-google-iam-v1 on the Google AppEngine Flex Environment not installing all of the files correctly? I must be missing the /site-packages/google/iam/v1/logging directory.

I was able to get the pipeline running again after changing the requirements.txt file to:
Flask==0.12.2
apache-beam[gcp]
google-cloud-dataflow
gunicorn==19.7.1
google-cloud-datastore==1.3.0
pytz
google-cloud-pubsub
google-gax
grpc-google-iam-v1
googleapis-common-protos
google-cloud==0.32
six==1.10.0
protobuf
So simply removing the version requirements from apache-beam[gcp] and google-cloud-dataflow did the trick.
Building on the solution provided by John Allard: removing the version from requirements.txt will automatically default to the latest version. Thus, with no version specified for apache-beam[gcp], google-cloud-dataflow, and google-cloud-pubsub, they will all run on the latest versions and solve the dependency issue. The requirements.txt will look like the following:
Flask==0.12.2
apache-beam[gcp]
gunicorn==19.7.1
google-cloud-dataflow
google-cloud-datastore==1.3.0
pytz
google-cloud-pubsub
google-gax
grpc-google-iam-v1
googleapis-common-protos
google-cloud==0.32
six==1.10.0
protobuf
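One caveat with leaving these unpinned: a future upstream release could break the pipeline again without a new deployment, which is exactly how this problem started. As a sketch of my own (not part of either answer, using pkg_resources from setuptools), the versions pip actually resolved can be printed in requirements-style form so a known-good set can be re-pinned later:
import pkg_resources

# Print requirements-style pins for the resolved versions of the
# packages involved in the dependency conflict.
for name in ("apache-beam", "google-cloud-dataflow",
             "google-cloud-pubsub", "grpc-google-iam-v1"):
    try:
        dist = pkg_resources.get_distribution(name)
        print("%s==%s" % (name, dist.version))
    except pkg_resources.DistributionNotFound:
        print("%s: not installed" % name)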