Databricks, dbutils, get filecount and filesize of all subfolders in Azure Data Lake gen 2 path
I'm coding in a Databricks notebook (PySpark) and trying to get the file count and file sizes of all subfolders under a specific Azure Data Lake gen2 mount path using dbutils.
I have code for it on a specific folder, but I'm stuck on how to write the recursive part...
How about this?
def deep_ls(path: str):
    """List all files under the base path recursively."""
    for x in dbutils.fs.ls(path):
        if x.path[-1] != '/':
            # A path not ending in '/' is a file: yield it
            yield x
        else:
            # Otherwise it is a directory: recurse into it
            for y in deep_ls(x.path):
                yield y
Credits to
https://forums.databricks.com/questions/18932/listing-all-files-under-an-azure-data-lake-gen2-co.html
https://gist.github.com/Menziess/bfcbea6a309e0990e8c296ce23125059
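To get what the question actually asks for (file count and file size per subfolder), a minimal sketch building on deep_ls could aggregate the FileInfo objects it yields; FileInfo exposes .path and .size (in bytes), and 'dbfs:/mnt/my-datalake' below is a placeholder mount path:

from collections import defaultdict

def folder_stats(base_path: str):
    # Aggregate file count and total size (bytes) per immediate subfolder,
    # attributing each file to its first path segment under base_path.
    # (Files sitting directly in base_path show up under their own filename.)
    counts, sizes = defaultdict(int), defaultdict(int)
    for f in deep_ls(base_path):
        subfolder = f.path[len(base_path):].lstrip('/').split('/', 1)[0]
        counts[subfolder] += 1
        sizes[subfolder] += f.size
    return counts, sizes

counts, sizes = folder_stats('dbfs:/mnt/my-datalake')
for folder in sorted(counts):
    print(folder, counts[folder], sizes[folder])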
Get the list of files from the directory, then print them and get the count, with the code below.
def get_dir_content(ls_path):
    # List the current level, then recurse into each subdirectory
    dir_paths = dbutils.fs.ls(ls_path)
    subdir_paths = [get_dir_content(p.path) for p in dir_paths if p.isDir() and p.path != ls_path]
    flat_subdir_paths = [p for subdir in subdir_paths for p in subdir]
    return list(map(lambda p: p.path, dir_paths)) + flat_subdir_paths
paths = get_dir_content('dbfs:/')
or
paths = get_dir_content('abfss://')
The line below prints each file name with its path and, because of the outer len, gives the number of entries at the end.
len([print(p) for p in paths])
If you only want the number of entries, use:
len(paths)
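Note that get_dir_content returns directory paths as well as file paths (directory paths end with '/'), so the count above includes folders. To get both the file count and the total size the question asks for, here is a minimal sketch of the same recursion that keeps the FileInfo objects instead of just their paths; get_file_info is a hypothetical helper name:

def get_file_info(ls_path):
    # Same recursion as get_dir_content, but keep the FileInfo objects
    # so that file sizes (the .size attribute, in bytes) stay available.
    entries = dbutils.fs.ls(ls_path)
    files = [e for e in entries if not e.isDir()]
    subdirs = [get_file_info(e.path) for e in entries if e.isDir() and e.path != ls_path]
    return files + [f for sub in subdirs for f in sub]

infos = get_file_info('dbfs:/')
print(len(infos), sum(f.size for f in infos))  # file count and total bytes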