
Bash script to find files in folders

I have a couple of folders like this:

Main/  
   /a  
   /b  
   /c    
 ..

I have to pass the input .txt file from each of these folders (abc1.txt, abc2.txt, and so on) to my Python program. The script right now is:

for i in `cat file.list`
do
    echo $i
    cd $i
    # works on the assumption that there is only one .txt file
    inputfile=`ls | grep .txt`
    echo $inputfile
    python2.7 ../getDOC.py $inputfile
    sleep 10
    cd ..
done
echo "Script executed successfully"

So I want the script to work correctly regardless of the number of .txt files.

Can anyone let me know if there is any built-in shell command to fetch the correct .txt files when a folder contains multiple .txt files?

The find command is well suited for this with -exec:

find /path/to/Main -type f -name "*.txt" -exec python2.7 ../getDOC.py {} \; -exec sleep 10 \;

Explanation:

  • find - invoke find
  • /path/to/Main - The directory to start your search at. By default find searches recursively.
  • -type f - Only consider files (as opposed to directories, etc)
  • -name "*.txt" - Only find the files with .txt extension. This is quoted so bash doesn't auto-expand the wildcard * via globbing.
  • -exec ... \; - For each result found, run the following command on it (a batching variant is sketched after this list):
  • python2.7 ../getDOC.py {} - the {} is where each search result from find gets substituted.
  • sleep 10 - sleep for 10 seconds after each time the Python script is run on a file. Remove this if you don't want the script to sleep.
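If getDOC.py can accept several files in a single invocation (an assumption about the script, not stated in the question), find's + terminator can batch the results into as few command lines as possible instead of starting one python2.7 process per file. The per-file sleep is not possible in that form:

find /path/to/Main -type f -name "*.txt" -exec python2.7 ../getDOC.py {} +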

Better, using globs:

shopt -s globstar nullglob
for i in Main/**/*.txt; do
    python2.7 ../getDOC.py "$i"
    sleep 10
done

This example is recursive and requires bash ≥ 4 (the globstar option was introduced in bash 4.0).
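For older bash versions without globstar, a roughly equivalent loop can be fed from find via process substitution (a sketch, assuming filenames contain no embedded newlines):

while IFS= read -r f; do
    python2.7 ../getDOC.py "$f"
    sleep 10
done < <(find Main -type f -name "*.txt")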

find . -name "*.txt" | xargs python2.7 ../getDOC.py
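If any of the paths may contain spaces, a null-delimited variant is safer (assuming GNU or BSD find and xargs); -n 1 keeps one file per invocation, as in the original loop:

find . -name "*.txt" -print0 | xargs -0 -n 1 python2.7 ../getDOC.py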
