
Script runtime execution time limit

My Google Apps Script is iterating through the user's Google Drive files, copying and sometimes moving files to other folders. The script always stops after a few minutes, with no error message in the log.


Editor's note: the time limit has changed over time and may differ between "consumer" (free) and "Workspace" (paid) accounts, but as of December 2022 most of the answers below are still valid.


I am sorting dozens, or sometimes thousands, of files in a single run.

Is there any setting or workaround?

One thing you could do (this of course depends on what you are trying to accomplish) is:

  1. Store the necessary information (e.g. a loop counter) in a spreadsheet or another permanent store (i.e. ScriptProperties).
  2. Have your script terminate every five minutes or so.
  3. Set up a time-driven trigger to run the script every five minutes (or create a trigger programmatically using the Script service).
  4. On each run, read the saved data from the permanent store you used, and continue to run the script from where it left off.

This is not a one-size-fits-all solution; if you post your code, people will be able to help you better.

Here is a simplified code excerpt from a script I use every day:

function runMe() {
  var startTime = (new Date()).getTime();
  var MAX_RUNNING_TIME = 5 * 60 * 1000;         // stay safely under the 6-minute quota
  var REASONABLE_TIME_TO_WAIT = 5 * 60 * 1000;  // see NOTE #1 below
  
  //do some work here
  
  var scriptProperties = PropertiesService.getScriptProperties();
  // getProperty() returns a string (or null on the very first run) -- see NOTE #4
  var startRow = Number(scriptProperties.getProperty('start_row')) || 1;
  for (var ii = startRow; ii <= size; ii++) {   // `size` is the total item count in the real script
    var currTime = (new Date()).getTime();
    if (currTime - startTime >= MAX_RUNNING_TIME) {
      // Out of time: save the position and schedule a one-off trigger to resume
      scriptProperties.setProperty("start_row", ii);
      ScriptApp.newTrigger("runMe")
               .timeBased()
               .at(new Date(currTime + REASONABLE_TIME_TO_WAIT))
               .create();
      break;
    } else {
      doSomeWork();   // the per-item unit of work -- see NOTE #2
    }
  }
  
  //do some more work here
  
}

NOTE #1: The variable REASONABLE_TIME_TO_WAIT should be large enough for the new trigger to fire. (I set it to 5 minutes, but I think it could be less than that.)

NOTE #2: doSomeWork() must be a function that executes relatively quickly (I would say less than 1 minute).

NOTE #3: Google has deprecated Script Properties and introduced the Properties Service in its place. The function has been modified accordingly.

NOTE #4: On every call after the first, getProperty() returns the saved loop counter as a string, so it has to be converted back to an integer before use (the excerpt above does this with Number()).

Quotas

The maximum execution time for a single script is 6 mins / execution
- https://developers.google.com/apps-script/guides/services/quotas

But there are other limitations to familiarize yourself with as well. For example, you are only allowed a total trigger runtime of 1 hour per day, so you can't just break up one long function into 12 different 5-minute blocks.

Optimization

That said, there are very few reasons why you would really need to take six minutes to execute. JavaScript should have no problem sorting thousands of rows of data in a couple of seconds. What is likely hurting your performance are the service calls to Google Apps itself.

You can write scripts to take maximum advantage of the built-in caching by minimizing the number of reads and writes. Alternating read and write commands is slow. To speed up a script, read all data into an array with one command, perform any operations on the data in the array, and write the data out with one command.
- https://developers.google.com/apps-script/best_practices

Batching

The best thing you can do is reduce the number of service calls. Google enables this by allowing batch versions of most of their API calls.

As a trivial example, instead of this:

for (var i = 1; i <= 100; i++) {
  SpreadsheetApp.getActiveSheet().deleteRow(i);
}

do this:

SpreadsheetApp.getActiveSheet().deleteRows(1, 100);

In the first loop, not only did you need 100 calls to deleteRow on the sheet, you also needed to get the active sheet 100 times. The second variation should perform several orders of magnitude better than the first.

Interleaving Reads and Writes

Additionally, you should be very careful not to go back and forth frequently between reading and writing. Not only will you lose potential gains from batch operations, Google also won't be able to use its built-in caching.

Every time we do a read, we must first empty (commit) the write cache to ensure that you're reading the latest data (you can force a write of the cache by calling SpreadsheetApp.flush()). Likewise, every time we do a write, we have to throw away the read cache because it's no longer valid. Therefore if you can avoid interleaving reads and writes, you'll get the full benefit of the cache.
- http://googleappsscript.blogspot.com/2010/06/optimizing-spreadsheet-operations.html

For example, instead of this:

sheet.getRange("A1").setValue(1);
sheet.getRange("B1").setValue(2);
sheet.getRange("C1").setValue(3);
sheet.getRange("D1").setValue(4);

do this:

sheet.getRange("A1:D1").setValues([[1,2,3,4]]);

Chaining Function Calls

As a last resort, if your function really cannot finish in six minutes, you can chain calls together or break up your function to work on a smaller segment of data.

You can store data in the Cache Service (temporary) or Properties Service (permanent) buckets for retrieval across executions (since Google Apps Script executions are stateless).
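For instance, here is a minimal sketch of persisting a progress marker between executions; the key name 'progress_index' and both helper functions are illustrative, not part of any API:

function saveProgress(index) {
  // Temporary option: cache entries expire (21600 s = 6 h is the maximum allowed).
  CacheService.getScriptCache().put('progress_index', String(index), 21600);
  // Permanent option: survives until explicitly deleted.
  PropertiesService.getScriptProperties().setProperty('progress_index', String(index));
}

function loadProgress() {
  var cached = CacheService.getScriptCache().get('progress_index');
  if (cached !== null) return Number(cached);
  var stored = PropertiesService.getScriptProperties().getProperty('progress_index');
  return stored !== null ? Number(stored) : 0; // default: start from the beginning
}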

If you want to kick off another run, you can create your own trigger with the Trigger Builder class or set up a recurring trigger on a tight schedule.
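For example, a one-off continuation trigger built with the Trigger Builder might look like this ('resumeWork' is a placeholder for a global function in your project):

function scheduleContinuation() {
  ScriptApp.newTrigger('resumeWork')
    .timeBased()
    .after(60 * 1000) // milliseconds from now; actual firing time is approximate
    .create();
}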

Also, try to minimize the number of calls to Google services. For example, if you want to change a range of cells in a spreadsheet, don't read each one, mutate it, and store it back. Instead, read the whole range (using Range.getValues()) into memory, mutate it, and store all of it at once (using Range.setValues()).

This should save you a lot of execution time.
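As an illustrative sketch of that batched read-mutate-write pattern (the transformation here is arbitrary):

function incrementAllValues() {
  var range = SpreadsheetApp.getActiveSheet().getDataRange();
  var values = range.getValues();      // one batched read
  for (var r = 0; r < values.length; r++) {
    for (var c = 0; c < values[r].length; c++) {
      values[r][c] = values[r][c] + 1; // mutate in memory only
    }
  }
  range.setValues(values);             // one batched write
}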

Anton Soradoi's answer seems great, but consider using the Cache Service instead of storing data in a temporary sheet.

 function getRssFeed() {
   var cache = CacheService.getPublicCache();
   var cached = cache.get("rss-feed-contents");
   if (cached != null) {
     return cached;
   }
   var result = UrlFetchApp.fetch("http://example.com/my-slow-rss-feed.xml"); // takes 20 seconds
   var contents = result.getContentText();
   cache.put("rss-feed-contents", contents, 1500); // cache for 25 minutes
   return contents;
 }

Also note that, as of April 2014, the script runtime limit is 6 minutes.


G Suite Business / Enterprise / Education and Early Access users:

As of August 2018, the maximum script runtime for these users is now set to 30 minutes.

Figure out a way to split up your work so that each piece takes less than 6 minutes, as that is the limit for any script. On the first pass, you can iterate over the files and folders, store the list in a spreadsheet, and add a time-driven trigger for part 2.

In part 2, delete each entry in the list as you process it. When there are no items left in the list, delete the trigger.

This is how I process a sheet of about 1,500 rows that gets spread out to about a dozen different spreadsheets. Because of the number of calls to spreadsheets, it times out, but continues where it left off when the trigger runs again.
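A minimal sketch of part 2, assuming a sheet named 'Queue' holds one file ID per row in column A, and processFile() stands in for the real per-file work:

function processQueue() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Queue');
  var startTime = Date.now();
  // Stop well before the 6-minute limit; the trigger picks the rest up later.
  while (sheet.getLastRow() > 0 && Date.now() - startTime < 5 * 60 * 1000) {
    var fileId = sheet.getRange(1, 1).getValue();
    processFile(fileId);
    sheet.deleteRow(1); // remove the entry once it has been handled
  }
  if (sheet.getLastRow() === 0) {
    // Queue drained: remove the time-driven trigger(s) pointing at this function.
    ScriptApp.getProjectTriggers().forEach(function(t) {
      if (t.getHandlerFunction() === 'processQueue') ScriptApp.deleteTrigger(t);
    });
  }
}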

If you are using G Suite Business or Enterprise edition, you can register for Early Access to App Maker; once App Maker is enabled, your script runtime increases from 6 minutes to 30 minutes :)

For more details on App Maker, click here.

When looping through a large amount of information, I have used ScriptDB to save my place. The script can and does exceed the 5-minute limit. By updating ScriptDb during each run, the script can read the state from the db and pick up where it left off until all processing is complete. Give this strategy a try, and I think you'll be pleased with the results.
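(Note that ScriptDb has since been deprecated. The same checkpointing idea can be sketched with the Properties Service instead; the key name 'state' here is illustrative:)

function loadState() {
  var json = PropertiesService.getScriptProperties().getProperty('state');
  return json ? JSON.parse(json) : { position: 0 }; // default starting state
}

function saveState(state) {
  PropertiesService.getScriptProperties().setProperty('state', JSON.stringify(state));
}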

I have developed a Google Apps Script library that uses UserProperties and programmatic triggers to run batches of more than 6 minutes. You can import this library into your GAS project and wrap your code with its API so that it can run FOREVER (well, in practice we are limited by the quota on the number of hours triggers can run).

You can read all about it here: http://patt0.blogspot.in/2014/08/continuous-batch-library-update-for.html

Here is an approach based very heavily on Dmitry Kostyuk's absolutely excellent article on the subject.

It differs in that it doesn't attempt to time execution and exit gracefully. Rather, it deliberately spawns a new thread every minute and lets the threads run until they are timed out by Google. This gets around the maximum execution time limit, and speeds things up by running processing in several threads in parallel. (It speeds things up even if you are not hitting the execution time limit.)

It tracks the task status in Script Properties, plus a semaphore to ensure that no two threads are editing the task status at any one time. (It uses several properties, as each one is limited to 9 kB.)

I have tried to mimic the Google Apps Script iterator.next() API, but cannot use iterator.hasNext() as it would not be thread-safe (see TOCTOU). It uses a couple of facade classes at the bottom.

I would be grateful for any suggestions. This is working well for me, halving the processing time by spawning three parallel threads to run through a directory of documents. You could spawn 20 within quota, but this was ample for my use case.

The class is designed to be a drop-in, usable for any purpose without modification. The only thing the user must do when processing a file is delete any outputs from prior, timed-out attempts. The iterator will return a given fileId more than once if a processing task is timed out by Google before it completes.

To silence the logging, it all goes through the log() function at the bottom.

This is how you use it:

const main = () => {
  const srcFolder = DriveApp.getFoldersByName('source folder').next()
  const processingMessage = processDocuments(srcFolder, 'spawnConverter')
  log('main() finished with message', processingMessage)
}

const spawnConverter = e => {
  const processingMessage = processDocuments()
  log('spawnConverter() finished with message', processingMessage)
}

const processDocuments = (folder = null, spawnFunction = null) => {
  // folder and spawnFunction are only passed the first time we trigger this function,
  // threads spawned by triggers pass nothing.
  // 10,000 is the maximum number of milliseconds a file can take to process.
  const pfi = new ParallelFileIterator(10000, MimeType.GOOGLE_DOCS, folder, spawnFunction)
  let fileId = pfi.nextId()
  const doneDocs = []
  while (fileId) {
    const fileRelativePath = pfi.getFileRelativePath(fileId)
    const doc = DocumentApp.openById(fileId)
    const mc = MarkupConverter(doc)

    // This is my time-consuming task:
    const mdContent = mc.asMarkdown(doc)

    pfi.completed(fileId)
    doneDocs.push([...fileRelativePath, doc.getName() + '.md'].join('/'))
    fileId = pfi.nextId()
  }
  return ('This thread did:\r' + doneDocs.join('\r'))
}

Here is the code:

const ParallelFileIterator = (function() {
  /**
  * Scans a folder, depth first, and returns a file at a time of the given mimeType.
  * Uses ScriptProperties so that this class can be used to process files by many threads in parallel.
  * It is the responsibility of the caller to tidy up artifacts left behind by processing threads that were timed out before completion.
  * This class will repeatedly dispatch a file until .completed(fileId) is called.
  * It will wait maxDurationOneFileMs before re-dispatching a file.
  * Note that Google Apps kills scripts after 6 mins, or 30 mins if you're using a Workspace account, or 45 seconds for a simple trigger, and permits max 30  
  * scripts in parallel, 20 triggers per script, and 90 mins or 6hrs of total trigger runtime depending if you're using a Workspace account.
  * Ref: https://developers.google.com/apps-script/guides/services/quotas
  * @param {Number} maxDurationOneFileMs A generous estimate of the longest a file can take to process.
  * @param {string} mimeType The mimeType of the files required.
  * @param {Folder} parentFolder The top folder containing all the files to process. Only passed in by the first thread. Later spawned threads pass null (the files have already been listed and stored in properties).
  * @param {string} spawnFunction The name of the function that will spawn new processing threads. Only passed in by the first thread. Later spawned threads pass null (a trigger can't create a trigger).
  */
  class ParallelFileIterator {
    constructor(
      maxDurationOneFileMs,
      mimeType,
      parentFolder = null,
      spawnFunction = null,
    ) {
      log(
        'Enter ParallelFileIterator constructor',
        maxDurationOneFileMs,
        mimeType,
        spawnFunction,
        parentFolder ? parentFolder.getName() : null,
      )

      // singleton
      if (ParallelFileIterator.instance) return ParallelFileIterator.instance

      if (parentFolder) {
        _cleanUp()
        const t0 = Now.asTimestamp()
        _getPropsLock(maxDurationOneFileMs)
        const t1 = Now.asTimestamp()
        const { fileIds, fileRelativePaths } = _catalogFiles(
          parentFolder,
          mimeType,
        )
        const t2 = Now.asTimestamp()
        _setQueues(fileIds, [])
        const t3 = Now.asTimestamp()
        this.fileRelativePaths = fileRelativePaths
        ScriptProps.setAsJson(_propsKeyFileRelativePaths, fileRelativePaths)
        const t4 = Now.asTimestamp()
        _releasePropsLock()
        const t5 = Now.asTimestamp()
        if (spawnFunction) {
          // only triggered on the first thread
          const trigger = Trigger.create(spawnFunction, 1)
          log(
            `Trigger once per minute: UniqueId: ${trigger.getUniqueId()}, EventType: ${trigger.getEventType()}, HandlerFunction: ${trigger.getHandlerFunction()}, TriggerSource: ${trigger.getTriggerSource()}, TriggerSourceId: ${trigger.getTriggerSourceId()}.`,
          )
        }
        log(
          `PFI instantiated for the first time, has found ${
            fileIds.length
          } documents to process. getPropsLock took ${t1 -
            t0}ms, _catalogFiles took ${t2 - t1}ms, setQueues took ${t3 -
            t2}ms, setAsJson took ${t4 - t3}ms, releasePropsLock took ${t5 -
            t4}ms, trigger creation took ${Now.asTimestamp() - t5}ms.`,
        )
      } else {
        const t0 = Now.asTimestamp()
        // wait for first thread to set up Properties
        while (!ScriptProps.getJson(_propsKeyFileRelativePaths)) {
          Utilities.sleep(250)
        }
        this.fileRelativePaths = ScriptProps.getJson(_propsKeyFileRelativePaths)
        const t1 = Now.asTimestamp()
        log(
          `PFI instantiated again to run in parallel. getJson(paths) took ${t1 -
            t0}ms`,
        )
      }

      _internals.set(this, { maxDurationOneFileMs: maxDurationOneFileMs })
      // to get: _internal(this, 'maxDurationOneFileMs')

      ParallelFileIterator.instance = this
      return ParallelFileIterator.instance
    }

    nextId() {
      // returns false if there are no more documents

      const maxDurationOneFileMs = _internals.get(this).maxDurationOneFileMs
      _getPropsLock(maxDurationOneFileMs)
      let { pending, dispatched } = _getQueues()
      log(
        `PFI.nextId: ${pending.length} files pending, ${
          dispatched.length
        } dispatched, ${Object.keys(this.fileRelativePaths).length -
          pending.length -
          dispatched.length} completed.`,
      )
      if (pending.length) {
        // get first pending Id, (ie, deepest first)
        const nextId = pending.shift()
        dispatched.push([nextId, Now.asTimestamp()])
        _setQueues(pending, dispatched)
        _releasePropsLock()
        return nextId
      } else if (dispatched.length) {
        log(`PFI.nextId: Get first dispatched Id, (ie, oldest first)`)
        let startTime = dispatched[0][1]
        let timeToTimeout = startTime + maxDurationOneFileMs - Now.asTimestamp()
        while (dispatched.length && timeToTimeout > 0) {
          log(
            `PFI.nextId: None are pending, and the oldest dispatched one hasn't yet timed out, so wait ${timeToTimeout}ms to see if it will`,
          )
          _releasePropsLock()
          Utilities.sleep(timeToTimeout + 500)
          _getPropsLock(maxDurationOneFileMs)
          ;({ pending, dispatched } = _getQueues())
          if (pending && dispatched) {
            if (dispatched.length) {
              startTime = dispatched[0][1]
              timeToTimeout =
                startTime + maxDurationOneFileMs - Now.asTimestamp()
            }
          }
        }
        // We currently still have the PropsLock
        if (dispatched.length) {
          const nextId = dispatched.shift()[0]
          log(
            `PFI.nextId: Document id ${nextId} has timed out; reset start time, move to back of queue, and re-dispatch`,
          )
          dispatched.push([nextId, Now.asTimestamp()])
          _setQueues(pending, dispatched)
          _releasePropsLock()
          return nextId
        }
      }
      log(`PFI.nextId: Both queues empty, all done!`)
      ;({ pending, dispatched } = _getQueues())
      if (pending.length || dispatched.length) {
        log(
          "ERROR: All documents should be completed, but they're not. Giving up.",
          pending,
          dispatched,
        )
      }
      _cleanUp()
      return false
    }

    completed(fileId) {
      _getPropsLock(_internals.get(this).maxDurationOneFileMs)
      const { pending, dispatched } = _getQueues()
      const newDispatched = dispatched.filter(el => el[0] !== fileId)
      if (dispatched.length !== newDispatched.length + 1) {
        log(
          'ERROR: A document was completed, but not found in the dispatched list.',
          fileId,
          pending,
          dispatched,
        )
      }
      if (pending.length || newDispatched.length) {
        _setQueues(pending, newDispatched)
        _releasePropsLock()
      } else {
        log(`PFI.completed: Both queues empty, all done!`)
        _cleanUp()
      }
    }

    getFileRelativePath(fileId) {
      return this.fileRelativePaths[fileId]
    }
  }

  // ============= PRIVATE MEMBERS ============= //

  const _propsKeyLock = 'PropertiesLock'
  const _propsKeyDispatched = 'Dispatched'
  const _propsKeyPending = 'Pending'
  const _propsKeyFileRelativePaths = 'FileRelativePaths'

  // Not really necessary for a singleton, but in case code is changed later
  var _internals = new WeakMap()

  const _cleanUp = (exceptProp = null) => {
    log('Enter _cleanUp', exceptProp)
    Trigger.deleteAll()
    if (exceptProp) {
      ScriptProps.deleteAllExcept(exceptProp)
    } else {
      ScriptProps.deleteAll()
    }
  }

  const _catalogFiles = (folder, mimeType, relativePath = []) => {
    // returns IDs of all matching files in folder, depth first
    log(
      'Enter _catalogFiles',
      folder.getName(),
      mimeType,
      relativePath.join('/'),
    )
    let fileIds = []
    let fileRelativePaths = {}
    const folders = folder.getFolders()
    let subFolder
    while (folders.hasNext()) {
      subFolder = folders.next()
      const results = _catalogFiles(subFolder, mimeType, [
        ...relativePath,
        subFolder.getName(),
      ])
      fileIds = fileIds.concat(results.fileIds)
      fileRelativePaths = { ...fileRelativePaths, ...results.fileRelativePaths }
    }
    const files = folder.getFilesByType(mimeType)
    while (files.hasNext()) {
      const fileId = files.next().getId()
      fileIds.push(fileId)
      fileRelativePaths[fileId] = relativePath
    }
    return { fileIds: fileIds, fileRelativePaths: fileRelativePaths }
  }

  const _getQueues = () => {
    const pending = ScriptProps.getJson(_propsKeyPending)
    const dispatched = ScriptProps.getJson(_propsKeyDispatched)
    log('Exit _getQueues', pending, dispatched)
    // Note: Empty lists in Javascript are truthy, but if Properties have been deleted by another thread they'll be null here, which are falsey
    return { pending: pending || [], dispatched: dispatched || [] }
  }
  const _setQueues = (pending, dispatched) => {
    log('Enter _setQueues', pending, dispatched)
    ScriptProps.setAsJson(_propsKeyPending, pending)
    ScriptProps.setAsJson(_propsKeyDispatched, dispatched)
  }

  const _getPropsLock = maxDurationOneFileMs => {
    // will block until lock available or lock times out (because a script may be killed while holding a lock)
    const t0 = Now.asTimestamp()
    while (
      ScriptProps.getNum(_propsKeyLock) + maxDurationOneFileMs >
      Now.asTimestamp()
    ) {
      Utilities.sleep(2000)
    }
    ScriptProps.set(_propsKeyLock, Now.asTimestamp())
    log(`Exit _getPropsLock: took ${Now.asTimestamp() - t0}ms`)
  }
  const _releasePropsLock = () => {
    ScriptProps.delete(_propsKeyLock)
    log('Exit _releasePropsLock')
  }

  return ParallelFileIterator
})()

const log = (...args) => {
  // easier to turn off, json harder to read but easier to hack with
  console.log(args.map(arg => JSON.stringify(arg)).join(';'))
}

class Trigger {
  // Script triggering facade

  static create(functionName, everyMinutes) {
    return ScriptApp.newTrigger(functionName)
      .timeBased()
      .everyMinutes(everyMinutes)
      .create()
  }
  static delete(e) {
    if (typeof e !== 'object') return log(`${e} is not an event object`)
    if (!e.triggerUid)
      return log(`${JSON.stringify(e)} doesn't have a triggerUid`)
    ScriptApp.getProjectTriggers().forEach(trigger => {
      if (trigger.getUniqueId() === e.triggerUid) {
        log('deleting trigger', e.triggerUid)
        return ScriptApp.deleteTrigger(trigger)
      }
    })
  }
  static deleteAll() {
    // Deletes all triggers in the current project.
    var triggers = ScriptApp.getProjectTriggers()
    for (var i = 0; i < triggers.length; i++) {
      ScriptApp.deleteTrigger(triggers[i])
    }
  }
}

class ScriptProps {
  // properties facade
  static set(key, value) {
    if (value === null || value === undefined) {
      ScriptProps.delete(key)
    } else {
      PropertiesService.getScriptProperties().setProperty(key, value)
    }
  }
  static getStr(key) {
    return PropertiesService.getScriptProperties().getProperty(key)
  }
  static getNum(key) {
    // missing key returns Number(null), ie, 0
    return Number(ScriptProps.getStr(key))
  }
  static setAsJson(key, value) {
    return ScriptProps.set(key, JSON.stringify(value))
  }
  static getJson(key) {
    return JSON.parse(ScriptProps.getStr(key))
  }
  static delete(key) {
    PropertiesService.getScriptProperties().deleteProperty(key)
  }
  static deleteAll() {
    PropertiesService.getScriptProperties().deleteAllProperties()
  }
  static deleteAllExcept(key) {
    PropertiesService.getScriptProperties()
      .getKeys()
      .forEach(curKey => {
        if (curKey !== key) ScriptProps.delete(curKey)
      })
  }
}
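The listing also calls a Now facade that isn't shown above. Judging from how it is used (millisecond arithmetic on timestamps), a minimal stand-in would be:

class Now {
  // Current time in milliseconds since the epoch, as used by the timing code above
  static asTimestamp() {
    return new Date().getTime()
  }
}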

If you're a Business customer, you can now sign up for Early Access to App Maker, which includes Flexible Quotas.

Under the flexible quota system, such hard quota limits are removed. Scripts do not stop when they reach a quota limit. Rather, they are delayed until quota becomes available, at which point the script execution resumes. Once quotas begin being used, they are refilled at a regular rate. For reasonable usage, script delays are rare.

If you are using G Suite as a Business, Enterprise, or EDU customer, the execution time for running scripts is set to:

30 min / execution

See: https://developers.google.com/apps-script/guides/services/quotas

The idea would be to exit gracefully from the script, save your progress, create a trigger to start again from where you left off, repeat as many times as necessary, and then, once finished, clean up the trigger and any temporary files.

Here is a detailed article on this very topic.

As many people mentioned, the generic solution to this problem is to execute your method across multiple sessions. I found it to be a common problem that I have a bunch of iterations I need to loop over, and I don't want the hassle of writing/maintaining the boilerplate of creating new sessions.

Therefore I created a general-purpose solution:

/**
 * Executes the given function across multiple sessions to ensure there are no timeouts.
 *
 * See https://stackoverflow.com/a/71089403.
 * 
 * @param {Array} items - The items to iterate over.
 * @param {function(*)} fn - The function to execute each time. Takes in an item from `items`.
 * @param {String} resumeFunctionName - The name of the function (without arguments) to run between sessions. Typically this is the same name of the function that called this method.
 * @param {Int} maxRunningTimeInSeconds - The maximum number of seconds a script should be able to run. After this amount, it will start a new session. Note: This must be set to less than the actual timeout as defined in https://developers.google.com/apps-script/guides/services/quotas (e.g. 6 minutes), otherwise it can't set up the next call.
 * @param {Int} timeBetweenIterationsInSeconds - The amount of time between iterations of sessions. Note that Google Apps Script won't honor this 100%: if you choose a 1 second delay, it may actually take a minute or two before it executes.
 */
function iterateAcrossSessions(items, fn, resumeFunctionName, maxRunningTimeInSeconds = 5 * 60, timeBetweenIterationsInSeconds = 1) {
  const PROPERTY_NAME = 'iterateAcrossSessions_index';
  let scriptProperties = PropertiesService.getScriptProperties();
  let startTime = (new Date()).getTime();

  let startIndex = parseInt(scriptProperties.getProperty(PROPERTY_NAME));
  if (Number.isNaN(startIndex)) {
    startIndex = 0;
  }

  for (let i = startIndex; i < items.length; i++) {
    console.info(`[iterateAcrossSessions] Executing for i = ${i}.`)
    fn(items[i]);

    let currentTime = (new Date()).getTime();
    let elapsedTime = currentTime - startTime;
    let maxRunningTimeInMilliseconds = maxRunningTimeInSeconds * 1000;
    if (maxRunningTimeInMilliseconds <= elapsedTime) {
      let newTime = new Date(currentTime + timeBetweenIterationsInSeconds * 1000);
      console.info(`[iterateAcrossSessions] Creating new session for i = ${i+1} at ${newTime}, since elapsed time was ${elapsedTime}.`);
      scriptProperties.setProperty(PROPERTY_NAME, i+1);
      ScriptApp.newTrigger(resumeFunctionName).timeBased().at(newTime).create();
      return;
    }
  }

  console.log(`[iterateAcrossSessions] Done iterating over items.`);
  // Reset the property here to ensure that the execution loop could be restarted.
  scriptProperties.deleteProperty(PROPERTY_NAME);
}

You can now use it easily like this:

let ITEMS = ['A', 'B', 'C'];

function execute() {
  iterateAcrossSessions(
    ITEMS,
    (item) => {
      console.log(`Hello world ${item}`);
    }, 
    "execute");
}

It will automatically execute the inner lambda for each value in ITEMS, seamlessly spreading across sessions as needed.

For example, with a maxRunningTimeInSeconds of 0, it will run across 4 sessions and produce the following output:

[iterateAcrossSessions] Executing for i = 0.
Hello world A
[iterateAcrossSessions] Creating new session for i = 1.
[iterateAcrossSessions] Executing for i = 1.
Hello world B
[iterateAcrossSessions] Creating new session for i = 2.
[iterateAcrossSessions] Executing for i = 2.
Hello world C
[iterateAcrossSessions] Creating new session for i = 3.
[iterateAcrossSessions] Done iterating over items.
