Python Azure function triggered by Blob storage printing file name incorrectly
I'm triggering an Azure Function with a blob trigger event. A container samples-workitems has a file base.csv and receives a new file new.csv. I'm reading base.csv via an input binding and new.csv from the trigger's InputStream, both from the same container.
import logging
from io import BytesIO

import azure.functions as func
import pandas as pd

def main(myblob: func.InputStream, base: func.InputStream):
    logging.info(f"Python blob trigger function processed blob \n"
                 f"Name: {myblob.name}\n"
                 f"Blob Size: {myblob.length} bytes")
    logging.info(f"Base file info \n"
                 f"Name: {base.name}\n"
                 f"Blob Size: {base.length} bytes")
    df_base = pd.read_csv(BytesIO(base.read()))
    df_new = pd.read_csv(BytesIO(myblob.read()))
    print(df_new.head())
    print("printing base dataframe")
    print(df_base.head())
Output:
Python blob trigger function processed blob
Name: samples-workitems/new.csv
Blob Size: None bytes
Base file info
Name: samples-workitems/new.csv
Blob Size: None bytes
first 5 rows of df_new (cannot show data here)
printing base dataframe
first 5 rows of df_base (cannot show data here)
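The pd.read_csv(BytesIO(...)) round-trip used in the function works with any bytes source, so the DataFrame part can be checked locally without a Functions runtime. A minimal sketch with made-up CSV bytes standing in for the blob contents:

```python
from io import BytesIO

import pandas as pd

# Fabricated CSV bytes standing in for what base.read() / myblob.read() returns
csv_bytes = b"id,value\n1,10\n2,20\n"

df = pd.read_csv(BytesIO(csv_bytes))
print(df.shape)  # (2, 2)
```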
Even though each file prints its own content, myblob.name and base.name have the same value, samples-workitems/new.csv, which is unexpected. I expected myblob.name to contain new.csv while base.name would contain base.csv.
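Note also that InputStream.name includes the container prefix, not just the file name. If you only want to compare file names, splitting on "/" is enough (plain-string sketch, no Functions runtime needed):

```python
# Value as reported by myblob.name in the logs above
blob_name = "samples-workitems/new.csv"

# Strip the container prefix to get just the file name
file_name = blob_name.split("/")[-1]
print(file_name)  # new.csv
```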
function.json
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "myblob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "samples-workitems/{name}",
      "connection": "my_storage"
    },
    {
      "type": "blob",
      "name": "base",
      "path": "samples-workitems/base.csv",
      "connection": "my_storage",
      "direction": "in"
    }
  ]
}
I have reproduced this in my environment; the code below worked for me, following the code from @SwethaKandikonda's SO-thread.
__init__.py:
import logging

import azure.functions as func
# BlockBlobService comes from the legacy azure-storage SDK (pre-v12)
from azure.storage.blob import BlockBlobService

def main(myblob: func.InputStream):
    logging.info(f"Python blob trigger function processed blob \n"
                 f"Name: {myblob.name}\n"
                 f"Blob Size: {myblob.length} bytes")
    file = ""
    fileContent = ""
    blob_service = BlockBlobService(account_name="rithwikstor", account_key="EJ7xCyq2+AStqiar7Q==")
    containername = "samples-workitems"
    generator = blob_service.list_blobs(container_name=containername)
    for blob in generator:
        file = blob_service.get_blob_to_text(containername, blob.name)
        logging.info(blob.name)
        logging.info(file.content)
        fileContent += blob.name + '\n' + file.content + '\n\n'
function.json:
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "myblob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "samples-workitems/{name}",
      "connection": "rithwikstor_STORAGE"
    }
  ]
}
local.settings.json:
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "Connection String Of storage account",
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "rithwikstor_STORAGE": "Connection String Of storage account"
  }
}
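The "connection" property in each binding names an application setting rather than holding the connection string itself; at runtime the Functions host resolves it from the environment. A small local sketch of that lookup (placeholder values; real connection strings contain more fields):

```python
import os

# Simulate the app setting that local.settings.json provides when running locally
os.environ["rithwikstor_STORAGE"] = (
    "DefaultEndpointsProtocol=https;AccountName=rithwikstor;AccountKey=xxx"
)

# The binding's "connection": "rithwikstor_STORAGE" resolves to this value
conn_str = os.environ["rithwikstor_STORAGE"]

# Connection strings are semicolon-separated key=value pairs
fields = dict(part.split("=", 1) for part in conn_str.split(";") if "=" in part)
print(fields["AccountName"])  # rithwikstor
```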
host.json:
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    }
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[3.*, 4.0.0)"
  },
  "concurrency": {
    "dynamicConcurrencyEnabled": true,
    "snapshotPersistenceEnabled": true
  }
}
Then I added blobs as below:
Output:
Please try to follow the above process and code; you should get the correct output as I did.