I have duplicate files in a directory on a Linux machine, listed like this:
ltulikowski@lukasz-pc:~$ ls -1
abcd
abcd.1
abcd.2
abdc
abdc.1
acbd
I want to remove all files which aren't unique, so as a result I should have:
ltulikowski@lukasz-pc:~$ ls -1
acbd
Here is one way to do it:
for f in *.[0-9]; do rm -f -- "${f%.*}"*; done
The -f suppresses the errors you would otherwise get, since some names are matched more than once (abcd in your example, whose glob is expanded for both abcd.1 and abcd.2). If versions always start with .1 you can restrict the match to that.
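If you take that route, here is a minimal sketch of the .1-keyed variant, assuming every duplicated name has a .1 copy:

```shell
# Key on the ".1" copy: remove the base file and all its numbered copies.
for f in *.1; do
  [ -e "$f" ] || continue      # skip the literal pattern when nothing matches
  rm -f -- "${f%.1}"*          # deletes abcd, abcd.1, abcd.2, ...
done
```

Each base name is visited only once this way, so no error suppression is strictly needed.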
The function below uses extglob, so enable it before defining and running it: shopt -s extglob
rm_if_dup_exist() {
    local arr=() file base
    for file in *.+([0-9]); do
        base=${file%.*}
        # only schedule deletion when the base file exists alongside the copy
        if [[ -e $base ]]; then
            arr+=("$base" "$file")
        fi
    done
    rm -f -- "${arr[@]}"
}
This will also support file names with several digits after the dot, e.g. abcd.250 is also acceptable.
Usage example with your input:
$ touch abcd abcd.1 abcd.2 abdc abdc.1 acbd
$ rm_if_dup_exist
$ ls
acbd
Please notice that if, for example, abcd.1 exists but abcd does not, abcd.1 won't be deleted.
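If you do want such orphan copies removed as well, a variant (hypothetical name rm_dups_or_orphans, same extglob requirement) can drop the existence check for the numbered copy itself:

```shell
shopt -s extglob

rm_dups_or_orphans() {
    local arr=() file base
    for file in *.+([0-9]); do
        base=${file%.*}
        arr+=("$file")                    # numbered copy goes unconditionally
        [[ -e $base ]] && arr+=("$base")  # base only if it exists
    done
    rm -f -- "${arr[@]}"
}
```

With the sample files plus an orphan xyz.1, this leaves only acbd behind.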
You can use:
while read -r f; do
    rm -- "$f"*
done < <(printf "%s\n" * | cut -d. -f1 | uniq -d)
printf, cut and uniq are used to get the duplicate stems (the part before the dot) in the current directory.
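You can run the pipeline on its own to preview which stems the loop will receive; the sketch below sets up the sample files in a throwaway temporary directory first:

```shell
cd "$(mktemp -d)"                 # work on disposable copies
touch abcd abcd.1 abcd.2 abdc abdc.1 acbd
printf "%s\n" * | cut -d. -f1 | uniq -d
# prints the duplicated stems:
# abcd
# abdc
```

Note that uniq -d only finds adjacent duplicates, which works here because the shell expands * in sorted order.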
The command rm *.* should do the trick if I understand you correctly. Run ls *.* first to confirm which files would match.
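Previewing the glob shows what rm *.* would match; note that with the sample files the bases abcd and abdc contain no dot, so they are not matched and would remain after the rm:

```shell
cd "$(mktemp -d)"                 # throwaway sample files
touch abcd abcd.1 abcd.2 abdc abdc.1 acbd
ls -1 -- *.*
# abcd.1
# abcd.2
# abdc.1
```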