I am running a simulation study and need to process and save the results from several text files. The data are organized so that there are subdirectories, and within each subdirectory I need to process 1000 data files and obtain individual results for each. This is very easy to do in SAS using macros, but I am new to R and cannot figure out how to do it. Below is what I am trying to accomplish.
DATA Folder -> DC1 -> DC1R1.txt ... DC1R1000.txt
            -> DC2 -> DC2R1.txt ... DC2R1000.txt
Any help would be greatly appreciated!
I'm not near a computer with R right now, but read the help for file-related functions: the dir function will list the files and directories, and it has a recursive argument. list.files is an alias for dir. The file.info function will tell you (among other things) whether a path is a directory, and file.path will combine path parts. The basename and dirname functions might also be useful. Note that all these functions are vectorized.
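Because these functions are vectorized, they can operate on a whole vector of paths at once. A quick sketch, using made-up paths modeled on the DC1/DC2 layout from the question:

```r
# file.path combines its arguments element-wise into paths
paths <- file.path("DATA Folder", c("DC1", "DC2"), c("DC1R1.txt", "DC2R1.txt"))
# paths is now c("DATA Folder/DC1/DC1R1.txt", "DATA Folder/DC2/DC2R1.txt")

basename(paths)  # file names only: "DC1R1.txt" "DC2R1.txt"
dirname(paths)   # directory parts: "DATA Folder/DC1" "DATA Folder/DC2"

# file.info returns one row per path; $isdir is TRUE for directories,
# FALSE for regular files, NA if the path does not exist
file.info(paths)$isdir
```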
EDIT Now at a computer, so here's an example:
# Make a function to process each file
processFile <- function(f) {
    df <- read.csv(f)
    # ...and do stuff...
    file.info(f)$size  # dummy result
}

# Find all .csv files
files <- dir("/foo/bar/", recursive=TRUE, full.names=TRUE, pattern="\\.csv$")

# Apply the function to all files.
result <- sapply(files, processFile)
If you need to run the same analysis on each of the files, you can access them in one shot using list.files(recursive = TRUE). This assumes that you have already set your working directory to DATA Folder. The recursive = TRUE argument lists all files within subdirectories as well.
filenames <- list.files("path/to/files", recursive=TRUE)
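Building on that, here is one sketch of how the per-file results could then be grouped by subdirectory to match the DC1/DC2 layout in the question. The row count is a stand-in for whatever analysis each file needs, and the pattern argument is an assumption that the data files end in .txt:

```r
# Full paths to every .txt file under every subdirectory
filenames <- list.files("DATA Folder", recursive=TRUE,
                        full.names=TRUE, pattern="\\.txt$")

# Read each file and compute one result per file
# (number of rows here, as a placeholder analysis)
results <- sapply(filenames, function(f) nrow(read.csv(f)))

# Group the per-file results by subdirectory name (DC1, DC2, ...)
byDir <- split(results, basename(dirname(filenames)))
```

Each element of byDir is then a vector of the 1000 results for one subdirectory, which can be saved or summarized separately.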
This will give you all the files located under a folder and its subfolders.
You can also get the list of files with Perl's glob() function and send it to R using, for example, the RSPerl interface.