Firebase Cloud Function with Firestore returning "Deadline Exceeded"

I took one of the sample functions from the Firestore documentation and was able to run it successfully from my local Firebase environment. However, once I deployed it to my Firebase server, the function completes, but no entries are made in the Firestore database. The Firebase function logs show "Deadline Exceeded." I'm a bit baffled. Does anyone know why this is happening and how to resolve it?

Here is the sample function:

exports.testingFunction = functions.https.onRequest((request, response) => {
    var data = {
        name: 'Los Angeles',
        state: 'CA',
        country: 'USA'
    };

    // Add a new document in collection "cities" with ID 'LA'
    var db = admin.firestore();
    var setDoc = db.collection('cities').doc('LA').set(data);

    response.status(200).send();
});

Firestore has limits.

The "Deadline Exceeded" error probably happens because of these limits.

See https://firebase.google.com/docs/firestore/quotas

Maximum write rate to a document: 1 per second

See also this discussion: https://groups.google.com/forum/#!msg/google-cloud-firestore-discuss/tGaZpTWQ7tQ/NdaDGRAzBgAJ
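As a rough illustration of that per-document limit, here is a minimal sketch (assuming the firebase-admin Node.js SDK; the collection and document names are made up) that spaces out repeated writes to the same document by at least one second:

import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.firestore();

// Resolves after the given number of milliseconds.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Hypothetical example: write a series of updates to the SAME document,
// pausing between writes to stay under the documented soft limit of
// one write per second per document.
async function writeCounterUpdates(values: number[]): Promise<void> {
    const ref = db.collection("counters").doc("pageViews"); // made-up names
    for (const value of values) {
        await ref.set({ value }, { merge: true });
        await sleep(1000);
    }
}

Sustained write bursts against a single document are what tend to trip this limit; the same writes spread across different documents are not subject to it.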

I have written this little script which uses batch writes (max 500) and only writes one batch after the other.

Use it by first creating a batch worker: let batch: any = new FbBatchWorker(db); then add anything to the worker with batch.set(ref.doc(docId), MyObject); and finish it via batch.commit(). The API is the same as for the normal Firestore batch (https://firebase.google.com/docs/firestore/manage-data/transactions#batched-writes); however, it currently only supports set. (A usage sketch follows the class below.)

import { firestore } from "firebase-admin";

class FBWorker {
    callback: Function;

    constructor(callback: Function) {
        this.callback = callback;
    }

    work(data: {
        type: "SET" | "DELETE";
        ref: FirebaseFirestore.DocumentReference;
        data?: any;
        options?: FirebaseFirestore.SetOptions;
    }) {
        if (data.type === "SET") {
            // tslint:disable-next-line: no-floating-promises
            data.ref.set(data.data, data.options).then(() => {
                this.callback();
            });
        } else if (data.type === "DELETE") {
            // tslint:disable-next-line: no-floating-promises
            data.ref.delete().then(() => {
                this.callback();
            });
        } else {
            this.callback();
        }
    }
}

export class FbBatchWorker {
    db: firestore.Firestore;
    batchList2: {
        type: "SET" | "DELETE";
        ref: FirebaseFirestore.DocumentReference;
        data?: any;
        options?: FirebaseFirestore.SetOptions;
    }[] = [];
    elemCount: number = 0;
    private _maxBatchSize: number = 490;

    public get maxBatchSize(): number {
        return this._maxBatchSize;
    }
    public set maxBatchSize(size: number) {
        if (size < 1) {
            throw new Error("Size must be positive");
        }

        if (size > 490) {
            throw new Error("Size must not be larger then 490");
        }

        this._maxBatchSize = size;
    }

    constructor(db: firestore.Firestore) {
        this.db = db;
    }

    async commit(): Promise<any> {
        const workerProms: Promise<any>[] = [];
        const maxWorker = this.batchList2.length > this.maxBatchSize ? this.maxBatchSize : this.batchList2.length;
        for (let w = 0; w < maxWorker; w++) {
            workerProms.push(
                new Promise((resolve) => {
                    const A = new FBWorker(() => {
                        if (this.batchList2.length > 0) {
                            A.work(this.batchList2.pop());
                        } else {
                            resolve();
                        }
                    });

                    // tslint:disable-next-line: no-floating-promises
                    A.work(this.batchList2.pop());
                }),
            );
        }

        return Promise.all(workerProms);
    }

    set(dbref: FirebaseFirestore.DocumentReference, data: any, options?: FirebaseFirestore.SetOptions): void {
        this.batchList2.push({
            type: "SET",
            ref: dbref,
            data,
            options,
        });
    }

    delete(dbref: FirebaseFirestore.DocumentReference) {
        this.batchList2.push({
            type: "DELETE",
            ref: dbref,
        });
    }
}
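As described above, usage mirrors the regular Firestore batch API. A minimal usage sketch (assuming admin.initializeApp() has already been called; the import path, collection name, and data shape are made up) might look like this:

import * as admin from "firebase-admin";
import { FbBatchWorker } from "./FbBatchWorker"; // path is an assumption

const db = admin.firestore();

async function importCities(cities: { id: string; name: string }[]): Promise<void> {
    const worker = new FbBatchWorker(db);
    const ref = db.collection("cities"); // made-up collection name

    // Queue all writes; nothing is sent to Firestore yet.
    for (const city of cities) {
        worker.set(ref.doc(city.id), { name: city.name });
    }

    // commit() drains the queue with a bounded number of concurrent
    // single-document writes (at most maxBatchSize in flight at once).
    await worker.commit();
}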

In my own experience, this problem can also happen when you try to write documents over a bad internet connection.

I use a solution similar to Jurgen's suggestion to insert documents in batches of fewer than 500 at a time, and this error appears if I'm using a not-so-stable Wi-Fi connection. When I plug in the cable, the same script with the same data runs without errors.

If the error appears after around 10 seconds, it's probably not your internet connection; it might be that your functions are not returning any promise. In my experience, I got the error simply because I had wrapped a Firebase set operation (which returns a promise) inside another promise. You can do this:

return db.collection("COL_NAME").doc("DOC_NAME").set(attribs).then(ref => {
        var SuccessResponse = {
            "code": "200"
        }

        var resp = JSON.stringify(SuccessResponse);
        return resp;
    }).catch(err => {
        console.log('Quiz Error OCCURED ', err);
        var FailureResponse = {
            "code": "400",
        }

        var resp = JSON.stringify(FailureResponse);
        return resp;
    });

instead of

return new Promise((resolve,reject)=>{ 
    db.collection("COL_NAME").doc("DOC_NAME").set(attribs).then(ref => {
        var SuccessResponse = {
            "code": "200"
        }

        var resp = JSON.stringify(SuccessResponse);
        return resp;
    }).catch(err => {
        console.log('Quiz Error OCCURED ', err);
        var FailureResponse = {
            "code": "400",
        }

        var resp = JSON.stringify(FailureResponse);
        return resp;
    });

});
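Applied to the onRequest example from the question, the same idea is to wait for the write before sending the response. A sketch (assuming firebase-admin has already been initialized):

exports.testingFunction = functions.https.onRequest(async (request, response) => {
    const data = { name: 'Los Angeles', state: 'CA', country: 'USA' };

    try {
        // Wait for the write to finish before ending the function.
        await admin.firestore().collection('cities').doc('LA').set(data);
        response.status(200).send();
    } catch (err) {
        console.error(err);
        response.status(500).send();
    }
});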

I tested this by having 15 concurrent AWS Lambda functions write 10,000 requests into the database, into different collections/documents, milliseconds apart. I did not get the DEADLINE_EXCEEDED error.

Please see the Firebase documentation:

'deadline-exceeded': Deadline expired before operation could complete. For operations that change the state of the system, this error may be returned even if the operation has completed successfully. For example, a successful response from a server could have been delayed long enough for the deadline to expire.

In our case, we are writing a small amount of data and it works most of the time, but losing data is unacceptable. I have not concluded why Firestore fails to write such simple, small bits of data.

SOLUTION:

I am using an AWS Lambda function with an SQS event trigger (a sketch of the handler follows the queue configuration below).

  # This function receives requests from the queue and handles them
  # by persisting the survey answers for the respective users.
  QuizAnswerQueueReceiver:
    handler: app/lambdas/quizAnswerQueueReceiver.handler
    timeout: 180 # The SQS visibility timeout should always be greater than the Lambda function’s timeout.
    reservedConcurrency: 1 # optional, reserved concurrency limit for this function. By default, AWS uses account concurrency limit    
    events:
      - sqs:
          batchSize: 10 # Wait for 10 messages before processing.
          maximumBatchingWindow: 60 # The maximum amount of time in seconds to gather records before invoking the function
          arn:
            Fn::GetAtt:
              - SurveyAnswerReceiverQueue
              - Arn
    environment:
      NODE_ENV: ${self:custom.myStage}

I am using a dead letter queue connected to my main queue for failed events.

  Resources:
    QuizAnswerReceiverQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: ${self:provider.environment.QUIZ_ANSWER_RECEIVER_QUEUE}
        # VisibilityTimeout MUST be greater than the lambda functions timeout https://lumigo.io/blog/sqs-and-lambda-the-missing-guide-on-failure-modes/

        # The length of time during which a message will be unavailable after a message is delivered from the queue.
        # This blocks other components from receiving the same message and gives the initial component time to process and delete the message from the queue.
        VisibilityTimeout: 900 # The SQS visibility timeout should always be greater than the Lambda function’s timeout.

        # The number of seconds that Amazon SQS retains a message. You can specify an integer value from 60 seconds (1 minute) to 1,209,600 seconds (14 days).
        MessageRetentionPeriod: 345600  # The number of seconds that Amazon SQS retains a message. 
        RedrivePolicy:
          deadLetterTargetArn:
            "Fn::GetAtt":
              - QuizAnswerReceiverQueueDLQ
              - Arn
          maxReceiveCount: 5 # The number of times a message is delivered to the source queue before being moved to the dead-letter queue.
    QuizAnswerReceiverQueueDLQ:
      Type: "AWS::SQS::Queue"
      Properties:
        QueueName: "${self:provider.environment.QUIZ_ANSWER_RECEIVER_QUEUE}DLQ"
        MessageRetentionPeriod: 1209600 # 14 days in seconds
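A hypothetical sketch of the queue receiver handler (the message shape, field names, collection name, and the @types/aws-lambda dependency are assumptions; the original handler code is not included in the answer):

import * as admin from "firebase-admin";
import { SQSEvent } from "aws-lambda"; // assumed type dependency

admin.initializeApp();
const db = admin.firestore();

// Hypothetical handler: each SQS record carries one survey answer as JSON.
// If the batch throws, SQS retries it and eventually moves it to the DLQ.
export const handler = async (event: SQSEvent): Promise<void> => {
    for (const record of event.Records) {
        const answer = JSON.parse(record.body);  // assumed message shape
        await db
            .collection("surveyAnswers")         // made-up collection name
            .doc(answer.userId)                  // assumed field
            .set(answer, { merge: true });
    }
};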
