
GAE datastore restore stops with "The API call urlfetch.Fetch() took too long to respond and was cancelled"

I am following this guide https://cloud.google.com/appengine/docs/python/console/datastore-backing-up-restoring#restoring_data_to_another_app on how to back up data in one GAE app and restore it in another.

But every time I restore the backup on the target application I get the error:

The API call urlfetch.Fetch() took too long to respond and was cancelled.

Any ideas what I am doing wrong?

Your urlfetch.Fetch() call is taking too long (more than 60 seconds) to respond, so it is timing out. Here is an article about it: https://cloud.google.com/appengine/articles/deadlineexceedederrors

One solution is to use task queues. Task queues have a longer timeout and, more importantly, let you chop the job up into smaller parts. https://cloud.google.com/appengine/docs/python/taskqueue/
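The "smaller parts" idea is just batching: instead of copying every entity in one request, each task handles a slice small enough to finish well inside the deadline. A minimal, framework-free sketch (the `chunk` helper is hypothetical, not part of the App Engine API):

```python
def chunk(items, batch_size):
    """Split a list of work items into batches that each fit within a deadline."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

# Each batch would then be handed to its own task, e.g. via deferred.defer(...)
batches = list(chunk(list(range(10)), 4))
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

In a real backup you would use datastore query cursors rather than a list, but the principle is the same: no single task ever holds the request open long enough to hit the timeout.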

Here is a simple example of how to do this with "push" task queues. I realize that copying from one datastore model to another might not be the redundancy you are looking for; you may want to back up the datastore entities to another app entirely, or to another type of database or cloud service, and you probably have multiple models to back up. This is just a simple example of setting up and scheduling a "push" task queue via a cron job every 24 hours.

First you have to add "deferred" to the builtins in your app.yaml:

builtins:
- deferred: on

Next you need to create a second datastore model, which we will call "Backup". Just copy-paste your old model and rename it Backup. It helps to use an identical copy of the model for backups, because you can then give the primary entity and its backup the same key:

class Backup(db.Model): # example
    prop1 = db.StringProperty()
    prop2 = db.StringListProperty()
    prop3 = db.StringProperty()

Next, set up a cron job in your cron.yaml:

cron:
- description: Creates a backup of the target db every 24 hours at 10:45 GMT
  url: /backup
  schedule: every day 10:45

Add a /backup handler to your app.yaml:

- url: /backup
  script: mybackup.py
  login: admin

Finally, create mybackup.py:

import logging

from google.appengine.ext import deferred
from google.appengine.ext import db
#from google.appengine.ext import ndb

def backup_my_model(model_name):
    """
    Takes all entities in the model_name model and copies them to the Backup model
    """
    logging.info("Backing up %s" % model_name)
    query = db.GqlQuery('SELECT * FROM %s' % model_name)
    for primary_db in query:
        # Reuse the primary entity's key name so the backup shares its key
        backup = Backup(key_name=primary_db.key().name())
        backup.prop1 = primary_db.prop1
        backup.prop2 = primary_db.prop2
        ...
        backup.put()



deferred.defer(backup_my_model, 'MyModel')  # pass the model's name as a string, where MyModel is the model you want to back up
deferred.defer(backup_my_model, 'MyOtherModel')
...
deferred.defer(backup_my_model, 'MyFinalModel')
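The point of reusing the primary entity's key for its backup is that a backup can always be matched to (and restored over) its original by key alone. A framework-free sketch of that copy loop, using plain dicts as hypothetical stand-ins for the two models:

```python
# Hypothetical stand-ins for the primary model and the Backup model,
# keyed identically (the dicts and their contents are made up).
primary = {"k1": {"prop1": "a", "prop2": ["x", "y"]},
           "k2": {"prop1": "b", "prop2": ["z"]}}
backup = {}

def backup_model(source, dest):
    # Copy every entity under the same key, so restoring is just the
    # same loop run in the opposite direction.
    for key, entity in source.items():
        dest[key] = dict(entity)

backup_model(primary, backup)
print(backup["k1"])  # {'prop1': 'a', 'prop2': ['x', 'y']}
```

Restoring would call `backup_model(backup, primary)`; because the keys match, each backup entity lands exactly where its original was.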

I hope that helps.
