
Databricks, dbutils, get filecount and filesize of all subfolders in Azure Data Lake gen 2 path

I'm coding in a Databricks notebook (pyspark) and trying to get the filecount and filesizes of all subfolders in a specific Azure Data Lake gen2 mount path using dbutils.

I have code for a specific folder, but I'm stuck on how to write the recursive part...

How about this?

def deep_ls(path: str):
    """List all files under the given path recursively."""
    for x in dbutils.fs.ls(path):
        if x.path[-1] != '/':
            # A file: yield its FileInfo (path, name, size).
            yield x
        else:
            # A directory: recurse into it.
            yield from deep_ls(x.path)
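
Since deep_ls yields the FileInfo objects themselves, getting the filecount and total filesize asked for is just an aggregation step. A minimal sketch, assuming an ADLS gen2 mount at /mnt/datalake (a placeholder path) and that the FileInfo objects returned by dbutils.fs.ls expose a size attribute in bytes:

files = list(deep_ls('/mnt/datalake'))      # placeholder mount path
total_count = len(files)
total_bytes = sum(f.size for f in files)    # FileInfo.size is the file size in bytes
print(f"{total_count} files, {total_bytes} bytes")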

Credits to:

https://forums.databricks.com/questions/18932/listing-all-files-under-an-azure-data-lake-gen2-co.html

https://gist.github.com/Menziess/bfcbea6a309e0990e8c296ce23125059

Get the list of files from the directory, then print it and get the count with the code below.

def get_dir_content(ls_path):
  """Recursively collect the paths of everything under ls_path."""
  dir_paths = dbutils.fs.ls(ls_path)
  # Recurse into each sub-directory (skipping the path itself, which ls returns for empty folders).
  subdir_paths = [get_dir_content(p.path) for p in dir_paths if p.isDir() and p.path != ls_path]
  flat_subdir_paths = [p for subdir in subdir_paths for p in subdir]
  return [p.path for p in dir_paths] + flat_subdir_paths

paths = get_dir_content('dbfs:/')

or

paths = get_dir_content('abfss://')

The line below prints each file name with its path, and the file count at the end:

len([print(p) for p in paths])

If you only want to count the number of files, use the following:

len(paths)
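
get_dir_content only returns path strings, so the file sizes are lost. If you also need the count and size per subfolder, as the question asks, one option is to keep the full FileInfo objects and group them by their first-level folder. A hedged sketch, assuming a mount at dbfs:/mnt/datalake/ (a placeholder) and that FileInfo exposes path, size and isDir():

from collections import defaultdict

def get_dir_files(ls_path):
  """Like get_dir_content, but return FileInfo objects for files only."""
  entries = dbutils.fs.ls(ls_path)
  files = [e for e in entries if not e.isDir()]
  for e in entries:
    if e.isDir() and e.path != ls_path:
      files += get_dir_files(e.path)
  return files

base = 'dbfs:/mnt/datalake/'                 # placeholder: your ADLS gen2 mount
stats = defaultdict(lambda: [0, 0])          # subfolder -> [file count, total bytes]
for f in get_dir_files(base):
  rest = f.path[len(base):]
  subfolder = rest.split('/')[0] if '/' in rest else '(root)'
  stats[subfolder][0] += 1
  stats[subfolder][1] += f.size

for name, (count, size) in sorted(stats.items()):
  print(f"{name}: {count} files, {size} bytes")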
