I need to find files with *test* in their name, get their paths, and save them to a file. Here is the code:
find . -name *test* | xargs -I % bash -x "echo ''; readlink -f % '';"
It's working correctly, but I don't know how to write the results to a file. printf "above code" >> result.txt is not working.
What do I need to change? Or is there any way to write this in a simpler way?
find . -name *test* | xargs -I % bash -x "echo ''; readlink -f % '';" > file.txt
The redirection goes at the end of the command; there is no need to wrap it in printf or similar:
find . -name *test* | xargs -I % bash -x "echo ''; readlink -f % '';" >> result.txt
This doesn't work as posted, however, because the quoting in the bash -x command is off. I'm not sure exactly what you're trying to achieve, but I think it can be simplified to
find . -name '*test*' | xargs -I% readlink -f % >> result.txt
Notice that I've quoted '*test*' to make sure it's not expanded by the shell before find gets to see it.
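To see the difference the quoting makes, here is a small sketch (the directory and file names are made up for the demonstration):

```shell
# Scratch setup: one match in the current directory, one in a subdirectory.
mkdir -p demo/sub && cd demo
touch mytest.txt sub/othertest.txt

# Quoted: find receives the literal pattern '*test*' and matches it in
# every directory it descends into, so both files are listed.
find . -name '*test*'

# Unquoted: the shell expands *test* against the current directory first,
# so find actually runs with `-name mytest.txt` and misses sub/othertest.txt.
# (With two or more matches in the current directory, find would instead
# abort with a "paths must precede expression" error.)
find . -name *test*
```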
Instead of find | xargs, you could use find -exec:
find . -name '*test*' -exec readlink -f {} + >> result.txt
This has the advantage of calling readlink as few times as possible.
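You can see the batching with a throwaway echo (file names here are hypothetical):

```shell
# Scratch setup with three matching files.
mkdir -p batch-demo && cd batch-demo
touch a_test b_test c_test

# `-exec ... +` appends as many paths as fit onto one command line,
# so echo runs once and prints all three names on a single line.
find . -name '*test*' -exec echo {} +

# `-exec ... \;` runs the command once per file: three invocations,
# three lines of output.
find . -name '*test*' -exec echo {} \;
```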
And finally, if you don't actually care about resolving symlinks (or about hidden files [1]), but just need canonical paths, you can do it without external tools:
shopt -s globstar # Requires Bash 4.0 or newer
printf "$PWD/%s\n" **/*test* >> result.txt
[1] If you do care about hidden files, you can use shopt -s dotglob to find them. If it is possible that there is no match at all, you can use shopt -s nullglob to avoid writing **/*test* literally to the result file.
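A quick sketch of how those two options change the expansion (run in Bash; the file names are made up):

```shell
# Scratch setup: one hidden match, one visible match.
mkdir -p glob-demo && cd glob-demo
touch .hidden_test visible_test
shopt -s globstar            # Requires Bash 4.0 or newer

echo **/*test*               # only visible_test — dot files are skipped

shopt -s dotglob
echo **/*test*               # now both .hidden_test and visible_test

shopt -s nullglob
echo **/*nomatch*            # expands to nothing instead of the literal pattern
```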