How to loop through a file folder to run a script for each item in the folder?
I have a script that takes a sample from an Excel file and spits that sample back out as a CSV. How would one go about looping through a folder with multiple Excel files to avoid having to change the file for every run of the script? I believe I can use glob, but that appears to merely merge all the Excel files together.
import pandas as pd
import glob

root_dir = r"C:\Users\bryanmccormack\Desktop\Test_Folder\*.xlsx"
excel_files = glob.glob(root_dir, recursive=True)

def sample_per(df_excel):
    if len(df_excel) <= 10000:
        return df_excel.sample(frac=0.05)
    elif len(df_excel) >= 15000:
        return df_excel.sample(frac=0.03)
    else:
        return df_excel.sample(frac=0.01)

for xls in excel_files:
    df_excel = pd.read_excel(xls)
    df_excel = df_excel.loc[df_excel['Track Item'] == 'Y']
    final = sample_per(df_excel)  # sample the DataFrame, not the file path
    df_excel.loc[df_excel['Retailer Item ID'].isin(final['Retailer Item ID']), 'Track Item'] = 'Audit'
    df_excel.to_csv('Testicle.csv', index=False)
This returns a list of all files in a directory that you can iterate over:
from os import walk
from os.path import join

def retrieve_file_paths(dirName):  # return all file paths under the given directory
    filepaths = []
    for root, directories, files in walk(dirName):  # walk the directory tree, including subdirectories
        for filename in files:
            filepaths.append(join(root, filename))  # build the full path with os.path.join
    return filepaths
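As a quick sanity check, here is the same helper run against a throwaway directory (the little tree built here is hypothetical, purely for illustration):

```python
import tempfile
from os import walk, makedirs
from os.path import join, basename

def retrieve_file_paths(dirName):
    # Same helper as above: collect every file path under dirName.
    filepaths = []
    for root, directories, files in walk(dirName):
        for filename in files:
            filepaths.append(join(root, filename))
    return filepaths

# Hypothetical throwaway tree: one file at the top level, one nested.
tmp = tempfile.mkdtemp()
makedirs(join(tmp, "sub"))
open(join(tmp, "a.xlsx"), "w").close()
open(join(tmp, "sub", "b.xlsx"), "w").close()

found = sorted(basename(p) for p in retrieve_file_paths(tmp))
print(found)  # ['a.xlsx', 'b.xlsx']
```

Note that the nested file is picked up too, because os.walk descends into every subdirectory.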
In the end it should look something like this:
import pandas as pd
from os import walk
from os.path import join

dirName = "/your/dir"

def sample_per(df2):
    if len(df2) <= 10000:
        return df2.sample(frac=0.05)
    elif len(df2) >= 15000:
        return df2.sample(frac=0.03)
    else:
        return df2.sample(frac=0.01)

def retrieve_file_paths(dirName):  # return all file paths under the given directory
    filepaths = []
    for root, directories, files in walk(dirName):
        for filename in files:
            filepaths.append(join(root, filename))
    return filepaths

def main():
    for filepath in retrieve_file_paths(dirName):
        df = pd.read_excel(filepath)  # filepath is already a full path string
        df2 = df.loc[df['Track Item'] == 'Y']
        final = sample_per(df2)
        df.loc[df['Retailer Item ID'].isin(final['Retailer Item ID']), 'Track Item'] = 'Audit'
        # write one CSV per input file instead of overwriting a single 'Test.csv'
        df.to_csv(filepath.rsplit('.', 1)[0] + '.csv', index=False)

if __name__ == '__main__':
    main()
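For what it's worth, the standard library's pathlib can express the same traversal in a single call. This is only a sketch of the directory walk (the tree here is hypothetical), not a replacement for the pandas part:

```python
import tempfile
from pathlib import Path

# Hypothetical tree standing in for /your/dir.
root = Path(tempfile.mkdtemp())
(root / "sub").mkdir()
(root / "a.xlsx").touch()
(root / "sub" / "b.xlsx").touch()
(root / "ignore.csv").touch()

# rglob walks subdirectories like os.walk, but filters by pattern directly,
# so no extension check is needed afterwards.
xlsx_files = sorted(p.name for p in root.rglob("*.xlsx"))
print(xlsx_files)  # ['a.xlsx', 'b.xlsx']
```

Each item rglob yields is a Path object, which pd.read_excel accepts directly.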
You were on the right track, but pd.concat() was responsible for merging your Excel files. This snippet should help you:
import pandas as pd
import glob

# glob pattern (not a regex) matching all files with the .xlsx extension
root_dir = r"excel/*.xlsx"

# this call of glob only gives xlsx files directly inside root_dir
excel_files = glob.glob(root_dir)

# iterate over the files
for xls in excel_files:
    # read
    df_excel = pd.read_excel(xls)
    # manipulate as you wish here
    df_new = df_excel.sample(frac=0.1)
    # store a CSV next to each source file
    df_new.to_csv(xls.replace(".xlsx", ".csv"), index=False)
Note you can also pass recursive=True to the glob call (available since Python 3.5), which makes a ** in the pattern match Excel files in subdirectories as well; without ** in the pattern, recursive=True has no effect.
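A minimal sketch of that recursive behaviour (the directory names here are hypothetical), showing that recursive=True only changes anything when the pattern contains **:

```python
import glob
import os
import tempfile

# Hypothetical tree: one spreadsheet at the top level, one nested.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "sub"))
open(os.path.join(root, "a.xlsx"), "w").close()
open(os.path.join(root, "sub", "b.xlsx"), "w").close()

# A plain * stops at the top level.
top_only = glob.glob(os.path.join(root, "*.xlsx"))
# ** plus recursive=True descends into subdirectories too.
everywhere = glob.glob(os.path.join(root, "**", "*.xlsx"), recursive=True)

print(len(top_only), len(everywhere))  # 1 2
```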