
Python script to write (+lock) / read a file in Azure

I am new to Python programming and Azure.

I need to write a script which will be executed by two processes.

The two processes will run the same Python script. I know that Azure has storage accounts to put files in; I've found this: https://docs.microsoft.com/en-us/python/api/azure-storage-file/azure.storage.file.fileservice.fileservice?view=azure-python

and this: https://github.com/Azure/azure-storage-python

Here is some pseudocode to illustrate what I need to achieve:

def useStorageFile():
    fileFromStorage = getFileFromStorage()
    if fileFromStorage is None:
        createFileInStorage()
        lockFileInStorage()
        executeDockerCommand()
        writeResultOfCommandInStorageFile()
    else:
        # poll until the file is unlocked, then read the result
        while fileFromStorage.status == 'locked':
            time.sleep(1)  # wait 1s and check again
            fileFromStorage = getFileFromStorage()
        readResultFromFile()

Is it possible to lock/unlock a file in Azure? How can I achieve that in Python, for example? Thank you.

EDIT: I have managed to write a file in Blob Storage with a Python script. The question now is: how can I lock the file while the first process writes the command result into it, and make it readable by the second process as soon as the Blob Storage lock (if that option exists...) is released by the first process? Here is the Python script I am using:

import os, uuid, sys
from azure.storage.blob import BlockBlobService, PublicAccess

def run_sample():
    try:
        # Create the BlockBlobService that is used to call the Blob service for the storage account
        block_blob_service = BlockBlobService(account_name='xxxxxx', account_key='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx')

        # Create a container called 'quickstartblobs'.
        container_name ='quickstartblobs'
        block_blob_service.create_container(container_name)

        # Set the permission so the blobs are public.
        block_blob_service.set_container_acl(container_name, public_access=PublicAccess.Container)

        # Create a file in Documents to test the upload and download.
        local_path=os.path.abspath(os.path.curdir)
        local_file_name ='youss.txt'
        full_path_to_file =os.path.join(local_path, local_file_name)

        # Write text to the file.
        with open(full_path_to_file, 'w') as f:
            f.write("Hello, World!")

        print("Temp file = " + full_path_to_file)
        print("\nUploading to Blob storage as blob" + local_file_name)

        # Upload the created file, use local_file_name for the blob name
        block_blob_service.create_blob_from_path(container_name, local_file_name, full_path_to_file)

        # List the blobs in the container
        print("\nList blobs in the container")
        generator = block_blob_service.list_blobs(container_name)
        for blob in generator:
            print("\t Blob name: " + blob.name)

        # Download the blob(s).
        # Add '_DOWNLOADED' as prefix to '.txt' so you can see both files in Documents.
        full_path_to_file2 = os.path.join(local_path, str.replace(local_file_name ,'.txt', '_DOWNLOADED.txt'))
        print("\nDownloading blob to " + full_path_to_file2)
        block_blob_service.get_blob_to_path(container_name, local_file_name, full_path_to_file2)

        sys.stdout.write("Sample finished running. When you hit <any key>, the sample will be deleted and the sample "
                         "application will exit.")
        sys.stdout.flush()
        input()

        # Clean up resources. This includes the container and the temp files
        block_blob_service.delete_container(container_name)
        os.remove(full_path_to_file)
        os.remove(full_path_to_file2)
    except Exception as e:
        print(e)


# Main method.
if __name__ == '__main__':
    run_sample()

How can I lock the file while the first process writes the command result into it, and make it readable by the second process as soon as the Blob Storage lock (if that option exists...) is released by the first process?

Azure Blob Storage has a feature called Lease that you can make use of. Essentially, the leasing process acquires an exclusive lock on the resource (the blob, in your case), and only one process can hold a lease on a blob at a time. Once a lease is acquired on the blob, no other process can modify or delete it.
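
For illustration, here is a minimal sketch of acquiring and releasing a lease with the same legacy azure-storage-blob SDK (BlockBlobService) that the script above uses; the account credentials and the container/blob names are the placeholders from the question, and the blob must already exist before it can be leased:

from azure.common import AzureConflictHttpError
from azure.storage.blob import BlockBlobService

block_blob_service = BlockBlobService(account_name='xxxxxx', account_key='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx')

# Acquire an infinite lease on an existing blob (finite leases must be 15-60 seconds).
lease_id = block_blob_service.acquire_blob_lease('quickstartblobs', 'youss.txt', lease_duration=-1)

# Any other process that now tries to acquire a lease gets 409 LeaseAlreadyPresent.
try:
    block_blob_service.acquire_blob_lease('quickstartblobs', 'youss.txt')
except AzureConflictHttpError:
    print('Blob is already leased by another process')

# Release the lease so other processes can lease or modify the blob again.
block_blob_service.release_blob_lease('quickstartblobs', 'youss.txt', lease_id)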

So what you would need to do is try to acquire a lease on the blob before writing. If the blob is already leased, the acquire attempt fails with an error (HTTP 409 Conflict, LeaseAlreadyPresent; a write against a leased blob without its lease ID fails with 412 PreconditionFailed). Assuming you don't get an error, you can continue updating the file. Once the file is updated, you can either release the lock manually (break the lease or release it) or let the lease auto-expire. If you do get an error, you should wait and fetch the blob's lease status periodically (say, every 5 seconds). Once you find that the blob is no longer leased, you can read its contents.
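
Tying this back to the two-process scenario, here is a hedged sketch of what the writer and reader sides might look like with the same legacy SDK. write_result and read_result are hypothetical helpers, the 60-second lease duration and 1-second polling interval are arbitrary choices, and the credentials are again the question's placeholders:

import time

from azure.storage.blob import BlockBlobService

block_blob_service = BlockBlobService(account_name='xxxxxx', account_key='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx')
container_name = 'quickstartblobs'
blob_name = 'youss.txt'

def write_result(result_text):
    # First process: lease the blob, write the command result, release the lease.
    # acquire_blob_lease raises an azure.common.AzureHttpError (409) if another process holds the lease.
    lease_id = block_blob_service.acquire_blob_lease(container_name, blob_name, lease_duration=60)
    try:
        # While the blob is leased, writes must present the lease ID or they fail with 412.
        block_blob_service.create_blob_from_text(container_name, blob_name, result_text, lease_id=lease_id)
    finally:
        block_blob_service.release_blob_lease(container_name, blob_name, lease_id)

def read_result():
    # Second process: poll the blob's lease status until it is unlocked, then read it.
    while True:
        blob = block_blob_service.get_blob_properties(container_name, blob_name)
        if blob.properties.lease.status != 'locked':
            return block_blob_service.get_blob_to_text(container_name, blob_name).content
        time.sleep(1)

Note that a lease only blocks writes and deletes; reads succeed even while the blob is leased, which is why the reader checks the lease status itself before trusting the contents.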
