Faster way to remove duplicates from a very large text file in Python?
The optimal way to remove duplicates from a list of sorted very large files (200G each)?
I have a number of large files (200 G each); each file is sorted and contains duplicates like the following:
50.21.180.100|a.ac
50.21.180.100|a.ac
50.21.180.100|a.ac
50.21.180.100|a.ac
50.21.180.100|a.ac
50.21.180.100| b.ac
50.21.180.100| b.ac
50.21.180.100|b.ac
50.21.180.100|b.ac
50.21.180.100|b.ac
50.21.180.100| c.ac
50.21.180.100| c.ac
50.21.180.100|c.ac
50.21.180.100|c.ac
50.21.180.100|c.ac
50.21.180.100|c.ac
50.21.180.100| d.ac
Expected output:
50.21.180.100|a.ac
50.21.180.100|b.ac
50.21.180.100|c.ac
50.21.180.100|d.ac
Can anyone suggest the best way (in terms of time and memory) to remove these duplicates, whether in Linux bash, Python, or another language?
Strip the spaces first, then run uniq (uniq only collapses adjacent duplicate lines, which is enough here because the files are already sorted):
tr -d " " < infile.txt | uniq > outfile.txt
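If you would rather do it in Python, here is a minimal sketch of the same idea, assuming the input is already sorted and that the stray spaces should simply be dropped; the `dedup` helper and the file paths are only illustrative:

```python
import sys

def dedup(in_path, out_path):
    """Stream a sorted file and keep only the first line of each run of duplicates."""
    prev = None
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            line = line.replace(" ", "")  # drop stray spaces, like tr -d " "
            if line != prev:              # sorted input => duplicates are adjacent
                dst.write(line)
                prev = line

if __name__ == "__main__":
    # usage: python dedup.py infile.txt outfile.txt
    dedup(sys.argv[1], sys.argv[2])
```

Because the input is sorted, only the previous line has to be kept in memory, so this runs in constant memory regardless of file size.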