
How to upload a parquet file into an Azure ADLS Gen2 blob

Hi, I want to upload Parquet files to an ADLS Gen2 blob. I am using the code below to create a blob and upload the parquet file into it.

import os
from azure.storage.blob import BlobClient

# Create a client for the target blob and upload the local parquet file into it.
blob = BlobClient.from_connection_string(conn_str="Connection String", container_name="parquet", blob_name=outdir)
df.to_parquet('logs.parquet', compression='GZIP')  # df is a dataframe
with open("./logs.parquet", "rb") as data:
    blob.upload_blob(data)
os.remove("logs.parquet")
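
As an aside, the temporary file can be avoided by serializing the parquet bytes into an in-memory buffer. A minimal sketch, assuming pandas with a pyarrow engine and the same blob client as above:

import io

# Hypothetical variant: write the dataframe to an in-memory buffer and
# upload it directly, so no local file needs to be created or removed.
buf = io.BytesIO()
df.to_parquet(buf, compression='GZIP')
buf.seek(0)  # rewind so upload_blob reads from the start
blob.upload_blob(buf, overwrite=True)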

I do not face any error and the files are also written to the blob. But I don't think I am doing it right, because an ADX/Kusto query cannot understand the file and no data is visible there.

Below are the steps I performed in Azure Data Explorer to fetch records from the uploaded parquet file in ADLS Gen2.

Created the external table:

.create external table LogDataParquet(AppId_s:string,UserId_g:string,Email_s:string,RoleName_s:string,Operation_s:string,EntityId_s:string,EntityType_s:string,EntityName_s:string,TargetTitle_s:string,TimeGenerated:datetime) 
kind=blob
dataformat=parquet
( 
   h@'https://streamoutalds2.blob.core.windows.net/stream-api-raw-testing;secret'
)
with 
(
   folder = "ExternalTables"   
)

External table column mapping:

.create external table LogDataParquet parquet mapping "LogDataMapparquet" '[{ "column" : "AppId_s", "path" : "$.AppId_s"},{ "column" : "UserId_g", "path" : "$"},{ "column" : "Email_s", "path" : "$.Email_s"},{ "column" : "RoleName_s", "path" : "$.RoleName_s"},{ "column" : "Operation_s", "path" : "$.Operation_s"},{ "column" : "EntityId_s", "path" : "$.EntityId_s"}]'

The external table gives no records:

external_table('LogDataParquet')

No records.

external_table('LogDataParquet') | count 

1 record: count 0

I have used a similar scenario with Stream Analytics, where I receive incoming streams and save them in parquet format to ADLS. The external table in ADX fetches records well in that case. I feel I am making a mistake in the way the parquet files are written to the blob - (with open("./logs.parquet", "rb") as data:)

According to the logs, the external table was defined as follows:

.create external table LogDataParquet(AppId_s:string,UserId_g:string,Email_s:string,RoleName_s:string,Operation_s:string,EntityId_s:string,EntityType_s:string,EntityName_s:string,TargetTitle_s:string,TimeGenerated:datetime) 
kind=blob
partition by 
   AppId_s,
   bin(TimeGenerated,1d)
dataformat=parquet
( 
   '******'
)
with 
(
   folder = "ExternalTables"   
)

The PARTITION BY clause tells ADX that the expected folder layout is:

<AppId_s>/<TimeGenerated, formatted as 'yyyy/MM/dd'>

For example:

https://streamoutalds2.blob.core.windows.net/stream-api-raw-testing;secret/SuperApp/2020/01/31
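
For illustration, a blob uploaded from Python would need its blob name to include that folder prefix for the partitioned definition to find it. A hypothetical sketch (the AppId_s value and file name are assumptions, not taken from the question):

from datetime import datetime, timezone

app_id = "SuperApp"  # hypothetical AppId_s value
now = datetime.now(timezone.utc)
# Folder layout expected by the PARTITION BY clause: <AppId_s>/yyyy/MM/dd/<file>
blob_name = f"{app_id}/{now:%Y/%m/%d}/logs.parquet"
# e.g. 'SuperApp/2020/01/31/logs.parquet'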

You can find more info on how ADX locates files on external storage during a query in this section: https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/external-tables-azurestorage-azuredatalake#artifact-filtering-logic

To fix the external table definition in accordance with the folder layout, please use the .alter command:

.alter external table LogDataParquet(AppId_s:string,UserId_g:string,Email_s:string,RoleName_s:string,Operation_s:string,EntityId_s:string,EntityType_s:string,EntityName_s:string,TargetTitle_s:string,TimeGenerated:datetime) 
kind=blob
dataformat=parquet
( 
  h@'https://streamoutalds2.blob.core.windows.net/stream-api-raw-testing;secret'
)
with 
(
   folder = "ExternalTables"   
)
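
To sanity-check which layout the uploaded files actually follow, a quick sketch using azure-storage-blob's ContainerClient to list blob names (the connection string and container name are placeholders):

from azure.storage.blob import ContainerClient

# Placeholders: substitute the real connection string and container name.
container = ContainerClient.from_connection_string(
    conn_str="Connection String", container_name="stream-api-raw-testing")
for b in container.list_blobs():
    print(b.name)  # flat names here mean no partition folders exist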

BTW, if a mapping is trivial (e.g. the mapped column names match the data source column names), then it's not needed for the Parquet format.

