
Remove duplicates from text file based on second text file

How can I remove all lines from a text file (main.txt) by checking a second text file (removethese.txt)? What is an efficient approach if the files are larger than 10-100 MB? [Using a Mac]

Example:

main.txt
3
1
2
5

Remove these lines:

removethese.txt
3
2
9

Output:

output.txt
1
5

Example lines (these are the actual lines I'm working with - order does not matter):

ChIJW3p7Xz8YyIkRBD_TjKGJRS0
ChIJ08x-0kMayIkR5CcrF-xT6ZA
ChIJIxbjOykFyIkRzugZZ6tio1U
ChIJiaF4aOoEyIkR2c9WYapWDxM
ChIJ39HoPKDix4kRcfdIrxIVrqs
ChIJk5nEV8cHyIkRIhmxieR5ak8
ChIJs9INbrcfyIkRf0zLkA1NJEg
ChIJRycysg0cyIkRArqaCTwZ-E8
ChIJC8haxlUDyIkRfSfJOqwe698
ChIJxRVp80zpcEARAVmzvlCwA24
ChIJw8_LAaEEyIkR68nb8cpalSU
ChIJs35yqObit4kR05F4CXSHd_8
ChIJoRmgSdwGyIkRvLbhOE7xAHQ
ChIJaTtWBAWyVogRcpPDYK42-Nc
ChIJTUjGAqunVogR90Kc8hriW8c
ChIJN7P2NF8eVIgRwXdZeCjL5EQ
ChIJizGc0lsbVIgRDlIs85M5dBs
ChIJc8h6ZqccVIgR7u5aefJxjjc
ChIJ6YMOvOeYVogRjjCMCL6oQco
ChIJ54HcCsaeVogRIy9___RGZ6o
ChIJif92qn2YVogR87n0-9R5tLA
ChIJ0T5e1YaYVogRifrl7S_oeM8
ChIJwWGce4eYVogRcrfC5pvzNd4

There are two standard ways to do this:

With grep:

grep -vxFf removethese main

This uses:

  • -v to invert the match.
  • -x to match the whole line, to prevent, for example, he from matching lines like hello or highway to hell.
  • -F to use fixed strings, so that the patterns are taken as they are, not interpreted as regular expressions.
  • -f to get the patterns from another file; in this case, from removethese. (A sample run on the question's files is shown below.)
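
Applied to the sample files from the question (using the .txt file names from there), a run would look like this; the output keeps main.txt's original order:

$ grep -vxFf removethese.txt main.txt
1
5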

With awk:

$ awk 'FNR==NR {a[$0];next} !($0 in a)' removethese main
1
5

This way, we store every line of removethese in an array a[]. Then we read the main file and print only those lines that are not present in the array.
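
The same one-liner written out with comments (just a reformatting of the command above, not a different method):

$ awk '
    FNR == NR {      # true only while reading the first file (removethese)
        a[$0]        # store the whole line as an array key
        next         # move on to the next input line
    }
    !($0 in a)       # second file (main): print lines not stored above
' removethese main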

With grep:

grep -vxFf removethese.txt main.txt >output.txt

With fgrep:

fgrep -vxf removethese.txt main.txt >output.txt

fgrep is deprecated. fgrep --help says:

Invocation as 'fgrep' is deprecated; use 'grep -F' instead.

With awk (from @fedorqui):

awk 'FNR==NR {a[$0];next} !($0 in a)' removethese.txt main.txt >output.txt

With sed:

sed "s=^=/^=;s=$=$/d=" removethese.txt | sed -f- main.txt >output.txt

This will fail if removethese.txt contains characters that are special to sed's regular expressions. To handle that, you can escape them first:

sed 's/[^^]/[&]/g; s/\^/\\^/g' removethese.txt >newremovethese.txt

and use this newremovethese.txt in the sed command. But it is not worth the effort: it is too slow compared to the other methods.
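
To see what the pipeline does: the first sed turns each line of removethese.txt into a delete command, which the second sed (reading its script from standard input via -f-) then applies to main.txt. With the sample removethese.txt from the question, the generated script looks like this:

$ sed "s=^=/^=;s=$=$/d=" removethese.txt
/^3$/d
/^2$/d
/^9$/d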


Test performed on the above methods:

The sed method takes too much time and is not worth testing.

Files Used:

removethese.txt : Size: 15191908 (15MB)     Blocks: 29672   Lines: 100233
main.txt : Size: 27640864 (27.6MB)      Blocks: 53992   Lines: 180034

Commands:
grep -vxFf | fgrep -vxf | awk

Time taken:
0m7.966s | 0m7.823s | 0m0.237s
0m7.877s | 0m7.889s | 0m0.241s
0m7.971s | 0m7.844s | 0m0.234s
0m7.864s | 0m7.840s | 0m0.251s
0m7.798s | 0m7.672s | 0m0.238s
0m7.793s | 0m8.013s | 0m0.241s

Average:
0m7.8782s | 0m7.8468s | 0m0.2403s

This test result implies that fgrep is a little bit faster than grep.

The awk method (from @fedorqui) passes the test with flying colors (only 0.2403 seconds!).

Test Environment:

HP ProBook 440 G1 Laptop
8GB RAM
2.5GHz processor with turbo boost up to 3.1GHz
RAM being used: 2.1GB
Swap being used: 588MB
RAM being used when the grep/fgrep command is run: 3.5GB
RAM being used when the awk command is run: 2.2GB or less
Swap being used when the commands are run: 588MB (No change)

Test Result:

Use the awk method.
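
If you want to repeat the comparison on your own files, each command can simply be prefixed with time, for example (same commands as above, shown only as a sketch):

$ time awk 'FNR==NR {a[$0];next} !($0 in a)' removethese.txt main.txt >output.txt
$ time grep -vxFf removethese.txt main.txt >output.txt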

Here are a lot of simple and effective solutions I've found: http://www.catonmat.net/blog/set-operations-in-unix-shell-simplified/

You need to use one of the Set Complement bash commands. 100 MB files can be processed within seconds or minutes.
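
For the question at hand, the relevant recipes are under Set Complement below. As a quick sketch with the question's file names (comm needs sorted input, so output.txt comes out sorted, which the question says is acceptable):

comm -23 <(sort main.txt) <(sort removethese.txt) > output.txt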

Set Membership

$ grep -xc 'element' set    # outputs 1 if element is in set
                            # outputs >1 if set is a multi-set
                            # outputs 0 if element is not in set

$ grep -xq 'element' set    # returns 0 (true)  if element is in set
                            # returns 1 (false) if element is not in set

$ awk '$0 == "element" { s=1; exit } END { exit !s }' set
# returns 0 if element is in set, 1 otherwise.

$ awk -v e='element' '$0 == e { s=1; exit } END { exit !s }'

Set Equality

$ diff -q <(sort set1) <(sort set2) # returns 0 if set1 is equal to set2
                                    # returns 1 if set1 != set2

$ diff -q <(sort set1 | uniq) <(sort set2 | uniq)
# collapses multi-sets into sets and does the same as previous

$ awk '{ if (!($0 in a)) c++; a[$0] } END{ exit !(c==NR/2) }' set1 set2
# returns 0 if set1 == set2
# returns 1 if set1 != set2

$ awk '{ a[$0] } END{ exit !(length(a)==NR/2) }' set1 set2
# same as previous, requires >= gnu awk 3.1.5

Set Cardinality

$ wc -l set | cut -d' ' -f1    # outputs number of elements in set

$ wc -l < set

$ awk 'END { print NR }' set

Subset Test

$ comm -23 <(sort subset | uniq) <(sort set | uniq) | head -1
# outputs something if subset is not a subset of set
# does not output anything if subset is a subset of set

$ awk 'NR==FNR { a[$0]; next } { if (!($0 in a)) exit 1 }' set subset
# returns 0 if subset is a subset of set
# returns 1 if subset is not a subset of set

Set Union

$ cat set1 set2     # outputs union of set1 and set2
                    # assumes they are disjoint

$ awk 1 set1 set2   # ditto

$ cat set1 set2 ... setn   # union over n sets

$ cat set1 set2 | sort -u  # same, but assumes they are not disjoint

$ sort set1 set2 | uniq

$ sort -u set1 set2

$ awk '!a[$0]++'           # ditto

Set Intersection

$ comm -12 <(sort set1) <(sort set2)  # outputs the intersection of set1 and set2

$ grep -xF -f set1 set2

$ sort set1 set2 | uniq -d

$ join <(sort -n A) <(sort -n B)

$ awk 'NR==FNR { a[$0]; next } $0 in a' set1 set2

Set Complement

$ comm -23 <(sort set1) <(sort set2)
# outputs elements in set1 that are not in set2

$ grep -vxF -f set2 set1           # ditto

$ sort set2 set2 set1 | uniq -u    # ditto

$ awk 'NR==FNR { a[$0]; next } !($0 in a)' set2 set1

Set Symmetric Difference

$ comm -3 <(sort set1) <(sort set2) | sed 's/\t//g'
# outputs elements that are in set1 or in set2 but not both

$ comm -3 <(sort set1) <(sort set2) | tr -d '\t'

$ sort set1 set2 | uniq -u

$ cat <(grep -vxF -f set1 set2) <(grep -vxF -f set2 set1)

$ grep -vxF -f set1 set2; grep -vxF -f set2 set1

$ awk 'NR==FNR { a[$0]; next } $0 in a { delete a[$0]; next } 1;
       END { for (b in a) print b }' set1 set2

Power Set

$ p() { [ $# -eq 0 ] && echo || (shift; p "$@") |
        while read r ; do echo -e "$1 $r\n$r"; done }
$ p `cat set`

# no nice awk solution, you are welcome to email me one:
# peter@catonmat.net

Set Cartesian Product

$ while read a; do while read b; do echo "$a, $b"; done < set1; done < set2

$ awk 'NR==FNR { a[$0]; next } { for (i in a) print i, $0 }' set1 set2

Disjoint Set Test

$ comm -12 <(sort set1) <(sort set2)  # does not output anything if disjoint

$ awk '++seen[$0] == 2 { exit 1 }' set1 set2 # returns 0 if disjoint
                                         # returns 1 if not

Empty Set Test

$ wc -l < set            # outputs 0  if the set is empty
                         # outputs >0 if the set is not empty

$ awk '{ exit 1 }' set   # returns 0 if set is empty, 1 otherwise

Minimum

$ head -1 <(sort set)    # outputs the minimum element in the set

$ awk 'NR == 1 { min = $0 } $0 < min { min = $0 } END { print min }'

Maximum

$ tail -1 <(sort set)    # outputs the maximum element in the set

$ awk '$0 > max { max = $0 } END { print max }'

I like @fedorqui's use of awk for setups where one has enough memory to fit all the "remove these" lines: a concise expression of an in-memory approach.

But for a scenario where the set of lines to remove is large relative to available memory, and reading that data into an in-memory data structure would fail or thrash, consider an ancient approach: sort/join

sort main.txt > main_sorted.txt
sort removethese.txt > removethese_sorted.txt

join -t '' -v 1 main_sorted.txt removethese_sorted.txt > output.txt

Notes:

  • this does not preserve the order from main.txt: lines in output.txt will be sorted
  • it requires enough disk space for sort to do its thing (temp files) and to store same-size sorted versions of the input files
  • having join's -v option do just what we want here - print "unpairable" lines from file 1, drop matches - is a bit of serendipity
  • it does not directly address locales, collating, keys, etc. - it relies on the defaults of sort and join (-t with an empty argument) to agree on sort order, which happens to work on my current machine (a locale-pinned variant is sketched below)
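
One way to make that agreement explicit rather than accidental is to pin the locale for both sort and join (an added suggestion, not part of the original answer); with LC_ALL=C both tools use plain byte order:

LC_ALL=C sort main.txt > main_sorted.txt
LC_ALL=C sort removethese.txt > removethese_sorted.txt

LC_ALL=C join -t '' -v 1 main_sorted.txt removethese_sorted.txt > output.txt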
