Azure - 403 error - Copying a large number of storage account blobs to another storage account on pay-as-you-go acct

I am getting a 403 Forbidden error when copying a large number of block blobs from one storage account to another storage account (in a different region, as a backup). After 100,000+ blobs have been copied, the 403 Forbidden error occurs.

I have seen answers mentioning a quota, but I believe that applies to free accounts. I have a client with 578,000 files that I moved from on-premises to Azure, and that works fine, but I cannot copy them to another storage account that is set up to act as a backup (mostly in case of deletions).

I am using StartCopyAsync and then checking the CopyState status to verify that the copy succeeded, retrying in my code if it did not, but the failure appears to happen on StartCopyAsync itself.

The copy works fine until I have copied more than 100,000 files, and then the error occurs. I am not sure what causes it, since the same code works fine for that many blobs at first. I added a log file that tells me which file failed, and I can open that file in Azure Explorer.

I can post the code, but right now I am wondering whether I am hitting some kind of quota or bandwidth limit that I am not aware of.

    using System;
    using System.Collections.Generic;
    using System.Configuration;
    using System.IO;
    using System.Linq;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    namespace BackupCloudContainers
    {

    class Program
    {
        static string privateconnectionstring = ConfigurationManager.AppSettings["StorageConnectionString"];
        static string privatebackupconnectionstring = ConfigurationManager.AppSettings["BackupStorageConnectionString"];
        static DateTime testdate = new DateTime(2017, 8, 28, 0, 0, 0);   // incremental cutoff: only blobs modified after this date are copied
        static string destContainerName = "";
        static void Main(string[] args)
        {
            try
            {
                //Console.WriteLine("Starting Backup at " + DateTime.Now.ToString("hh:mm:ss.ffff"));
                Log("Starting Incremental Backup (everything since " + testdate.ToString("f") + ") at " + DateTime.Now.ToString("hh:mm:ss.ffff"));
                Backup().GetAwaiter().GetResult();
                // Console.WriteLine("Backup Created as " + destContainerName);
                Log("Backup Created as " + destContainerName);
                //Console.WriteLine("Backup ended at " + DateTime.Now.ToString("hh:mm:ss.ffff"));
                Log("Backup ended at " + DateTime.Now.ToString("hh:mm:ss.ffff"));
                Console.WriteLine("\n\nPress Enter to close. ");
                Console.ReadLine();
            }
            catch (Exception e)
            {
                //Console.WriteLine("Exception - " + e.Message);
                Log("Exception - " + e.Message);
                if (e.InnerException != null)
                {
                    //Console.WriteLine("Inner Exception - " + e.InnerException.Message);
                    Log("Inner Exception - " + e.InnerException.Message);
                }
            }
        }
        static async Task Backup()
        {

            CloudStorageAccount _storageAccount = CloudStorageAccount.Parse(privateconnectionstring);
            CloudStorageAccount _storageBackupAccount = CloudStorageAccount.Parse(privatebackupconnectionstring);

            CloudBlobClient blobClient = _storageAccount.CreateCloudBlobClient();
            CloudBlobClient blobBackupClient = _storageBackupAccount.CreateCloudBlobClient();

            foreach (var srcContainer in blobClient.ListContainers())
            {
                // skip any containers with a backup name
                if (srcContainer.Name.IndexOf("-backup-") > -1)
                {
                    continue;
                }
                var backupTimeInTicks = DateTime.UtcNow.Ticks;
                //var destContainerName = srcContainer.Name + "-" + backupTimeInTicks;
                var backupDateTime = DateTime.UtcNow.ToString("yyyyMMdd-hhmmssfff");
                destContainerName = srcContainer.Name + "-backup-" + backupDateTime;

                var destContainer = blobBackupClient.GetContainerReference(destContainerName);
//                var destContainer = blobClient.GetContainerReference(destContainerName);

                // assume it does not exist already,
                // as that wouldn't make sense.
                await destContainer.CreateAsync();

                // ensure that the container is not accessible
                // to the outside world,
                // as we want all the backups to be internal.
                BlobContainerPermissions destContainerPermissions = destContainer.GetPermissions();
                if (destContainerPermissions.PublicAccess != BlobContainerPublicAccessType.Off)
                {
                    destContainerPermissions.PublicAccess = BlobContainerPublicAccessType.Off;
                    await destContainer.SetPermissionsAsync(destContainerPermissions);
                }

                // copy src container to dest container,
                // note that this is synchronous operation in reality,
                // as I want to only add real metadata to container
                // once all the blobs have been copied successfully.
                await CopyContainers(srcContainer, destContainer);
                await EnsureCopySucceeded(destContainer);

                // ensure we have some metadata for the container
                // as this will helps us to delete older containers
                // on a later date.
                await destContainer.FetchAttributesAsync();

                var destContainerMetadata = destContainer.Metadata;
                if (!destContainerMetadata.ContainsKey("BackupOf"))
                {
                    string cname = srcContainer.Name.ToLowerInvariant();
                    destContainerMetadata.Add("BackupOf", cname);
                    destContainerMetadata.Add("CreatedAt", backupTimeInTicks.ToString());
                    destContainerMetadata.Add("CreatedDate", backupDateTime);
                    await destContainer.SetMetadataAsync();
                    //destContainer.SetMetadata();
                }
            }

            // let's purge the older containers,
            // if we already have multiple newer backups of them.
            // why keep them around.
            // just asking for trouble.
            //var blobGroupedContainers = blobBackupClient.ListContainers()
            //    .Where(container => container.Metadata.ContainsKey("Backup-Of"))
            //    .Select(container => new
            //    {
            //        Container = container,
            //        BackupOf = container.Metadata["Backup-Of"],
            //        CreatedAt = new DateTime(long.Parse(container.Metadata["Created-At"]))
            //    }).GroupBy(arg => arg.BackupOf);

            var blobGroupedContainers = blobClient.ListContainers()
                .Where(container => container.Metadata.ContainsKey("BackupOf"))
                .Select(container => new
                {
                    Container = container,
                    BackupOf = container.Metadata["BackupOf"],
                    CreatedAt = new DateTime(long.Parse(container.Metadata["CreatedAt"]))
                }).GroupBy(arg => arg.BackupOf);

            // Remove the Delete for now
      //      foreach (var blobGroupedContainer in blobGroupedContainers)
      //      {
      //          var containersToDelete = blobGroupedContainer.Select(arg => new
      //          {
      //              Container = arg.Container,
      //              CreatedAt = new DateTime(arg.CreatedAt.Year, arg.CreatedAt.Month, arg.CreatedAt.Day)
      //          })
      //              .GroupBy(arg => arg.CreatedAt)
      //              .OrderByDescending(grouping => grouping.Key)
      //              .Skip(7) /* skip last 7 days worth of data */
      //              .SelectMany(grouping => grouping)
      //              .Select(arg => arg.Container);

      //// Remove the Delete for now
      //          //foreach (var containerToDelete in containersToDelete)
      //          //{
      //          //    await containerToDelete.DeleteIfExistsAsync();
      //          //}
      //      }
        }

        static async Task EnsureCopySucceeded(CloudBlobContainer destContainer)
        {
            bool pendingCopy = true;
            var retryCountLookup = new Dictionary<string, int>();

            while (pendingCopy)
            {
                pendingCopy = false;

                var destBlobList = destContainer.ListBlobs(null, true, BlobListingDetails.Copy);

                foreach (var dest in destBlobList)
                {
                    var destBlob = dest as CloudBlob;
                    if (destBlob == null)
                    {
                        continue;
                    }

                    var blobIdentifier = destBlob.Name;

                    if (destBlob.CopyState.Status == CopyStatus.Aborted ||
                        destBlob.CopyState.Status == CopyStatus.Failed)
                    {
                        int retryCount;
                        if (retryCountLookup.TryGetValue(blobIdentifier, out retryCount))
                        {
                            if (retryCount > 4)
                            {
                                throw new Exception("[CRITICAL] Failed to copy '"
                                                        + destBlob.CopyState.Source.AbsolutePath + "' to '"
                                                        + destBlob.StorageUri + "' due to reason of: " +
                                                        destBlob.CopyState.StatusDescription);
                            }

                            retryCountLookup[blobIdentifier] = retryCount + 1;
                        }
                        else
                        {
                            retryCountLookup[blobIdentifier] = 1;
                        }

                        pendingCopy = true;

                        // restart the copy process for src and dest blobs.
                        // note we also have retry count protection,
                        // so if any of the blobs fail too much,
                        // we'll give up.
                        await destBlob.StartCopyAsync(destBlob.CopyState.Source);
                    }
                    else if (destBlob.CopyState.Status == CopyStatus.Pending)
                    {
                        pendingCopy = true;
                    }
                }

                Thread.Sleep(1000);
            }
        }

        static async Task CopyContainers(
                CloudBlobContainer srcContainer,
                CloudBlobContainer destContainer)
        {
            // get the SAS token to use for all blobs
            string blobToken = srcContainer.GetSharedAccessSignature(new SharedAccessBlobPolicy()
            {
                Permissions = SharedAccessBlobPermissions.Read,
                SharedAccessStartTime = DateTime.Now.AddMinutes(-5),
                SharedAccessExpiryTime = DateTime.Now.AddHours(3)
            });
            int ii = 0;           // total blobs examined
            int cntr = 0;         // blobs since the last progress log line
            int waitcntr = 0;     // blobs since the last EnsureCopySucceeded pass
            string sourceuri = "";
            int datecntr = 0;     // blobs modified after testdate, i.e. actually queued for copy
            try
            {

                //Console.WriteLine("  container contains " + srcContainer.ListBlobs(null, true).Count().ToString());
                Log("  container contains " + srcContainer.ListBlobs(null, true).Count().ToString());
                foreach (var srcBlob in srcContainer.ListBlobs(null, true))
                {
                    ii++;

                    //THIS IS FOR COUNTING Blobs that would be on the Incremental Backup
                    CloudBlob blob = (CloudBlob)srcBlob;
                    if (blob.Properties.LastModified > testdate)
                    {
                        datecntr++;
                    }
                    else
                    {
                        // We are only doing an Incremental Backup this time - so skip all other files 
                        continue;
                    }


                    //if (ii > 2000)
                    //{
                    //    //Console.WriteLine("   test run ended ");
                    //    Log("   test run ended ");
                    //    break;
                    //}


                    cntr++;
                    if (cntr > 999)
                    {
                        //Console.WriteLine("    " + ii.ToString() + " processed at " + DateTime.Now.ToString("hh:mm:ss"));
                        Log("    " + ii.ToString() + " processed at " + DateTime.Now.ToString("hh:mm:ss"));

                        //Log("   EnsureCopySucceeded - finished at " + DateTime.Now.ToString("hh:mm:ss"));
                        //await EnsureCopySucceeded(destContainer);
                        //Log("   EnsureCopySucceeded - finished at " + DateTime.Now.ToString("hh:mm:ss"));

                        cntr = 0;

                    }

                    waitcntr++;
                    if (waitcntr > 29999)
                    {
                        Log("   EnsureCopySucceeded (ii=" + ii.ToString() + "- started at " + DateTime.Now.ToString("hh:mm:ss"));
                        await EnsureCopySucceeded(destContainer);
                        Log("   EnsureCopySucceeded - finished at " + DateTime.Now.ToString("hh:mm:ss"));
                        waitcntr = 0;
                    }


                    var srcCloudBlob = srcBlob as CloudBlob;
                    if (srcCloudBlob == null)
                    {
                        continue;
                    }

                    CloudBlob destCloudBlob;

                    if (srcCloudBlob.Properties.BlobType == BlobType.BlockBlob)
                    {
                        destCloudBlob = destContainer.GetBlockBlobReference(srcCloudBlob.Name);
                    }
                    else
                    {
                        destCloudBlob = destContainer.GetPageBlobReference(srcCloudBlob.Name);
                    }
                    sourceuri = srcCloudBlob.Uri.AbsoluteUri + blobToken;

                    try
                    {
                        await destCloudBlob.StartCopyAsync(new Uri(srcCloudBlob.Uri.AbsoluteUri + blobToken));
                    }
                    catch (Exception e)
                    {
                        Log("Error at item " + ii.ToString() + "      Source = " + sourceuri + "      Message = " + e.Message + "      Time = " + DateTime.Now.ToString("F") + "\r\n");
                    }
                }
                Log("Total Items checked = " + ii.ToString() + "    backed up files = " + datecntr.ToString());
                Log("TestDate = " + testdate.ToString("F") + "     datecntr = " + datecntr.ToString());
            }
            catch (Exception e)
            {
                Log("Error at item " + ii.ToString());
                Log("      Source = " + sourceuri);
                Log("      Message = " + e.Message);
                Log("      Time = " + DateTime.Now.ToString("F") + "\r\n");
                //throw e; 
            }
        }


        static void Log(string logdata)
        {
            Console.WriteLine(logdata);
            File.AppendAllText("c:\\junk\\dwlog.txt", logdata + "\r\n");
        }
    }
    }

You mentioned that your code starts failing after 3 hours. Well, the following lines of code are the culprit:

    string blobToken = srcContainer.GetSharedAccessSignature(new SharedAccessBlobPolicy()
    {
        Permissions = SharedAccessBlobPermissions.Read,
        SharedAccessStartTime = DateTime.Now.AddMinutes(-5),
        SharedAccessExpiryTime = DateTime.Now.AddHours(3)
    });

If you notice, you are creating a shared access signature (SAS) that is valid for 3 hours, and you are using this SAS for all blobs. Your code works as long as the SAS is valid, i.e. it has not expired. Once the SAS expires, the SAS token is no longer authorized to perform the operation, and you start getting 403 (Not Authorized) errors.
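
One way to confirm this from the existing log output (a small diagnostic sketch of my own, not part of the answer itself) is to record the expiry time that is passed into the policy and check whether it has already passed whenever StartCopyAsync throws:

    // Hypothetical diagnostic: remember when the SAS will expire so that a 403
    // raised later in the copy loop can be correlated with the token's lifetime.
    DateTime sasExpiryUtc = DateTime.UtcNow.AddHours(3);
    string blobToken = srcContainer.GetSharedAccessSignature(new SharedAccessBlobPolicy()
    {
        Permissions = SharedAccessBlobPermissions.Read,
        SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-5),
        SharedAccessExpiryTime = sasExpiryUtc
    });

    // ...and inside the existing catch block around StartCopyAsync:
    // Log("SAS already expired? " + (DateTime.UtcNow > sasExpiryUtc));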

My recommendation would be to create a SAS token with a longer validity period. I would suggest a SAS token that is valid for 15 days, because that is the maximum amount of time Azure Storage will try to copy your blobs from one account to another.
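
For illustration, here is a minimal sketch of that suggestion, reusing the same GetSharedAccessSignature call that already appears in the posted code; the switch to UTC times is my own choice rather than part of the original answer:

    // Read-only SAS for the source container, valid for the full 15-day window
    // that a cross-account copy may need, instead of only 3 hours.
    string blobToken = srcContainer.GetSharedAccessSignature(new SharedAccessBlobPolicy()
    {
        Permissions = SharedAccessBlobPermissions.Read,
        // start slightly in the past to tolerate clock skew
        SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-5),
        // 15 days, per the recommendation above
        SharedAccessExpiryTime = DateTime.UtcNow.AddDays(15)
    });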
