
Adding a serial number to each of the records in a text file using Hadoop MapReduce

I'm looking for a solution to a simple problem. Say I have a huge 10GB text file with records delimited by '\n', and I provide the file as input to the Hadoop framework. The output should be a file that keeps the records in the same order as the input file, but with a serial number in front of every record.

For example,

if I have an input text file, say:

this is line one
this is line two
this is line three
this is line four
-----------------
-----------------

the output file should be:

1 this is line one
2 this is line two
3 this is line three
4 this is line four
------------------
-----------------

Edit: Say that instead of a 10GB file I have a 10TB file; what could be done then? And rather than handling the file the Hadoop way, what other approach would be faster?

Moreover, I also want to use multiple reducers, not a single reducer.
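With multiple reducers, the usual difficulty is that no single task sees the whole file, so no task knows where its numbering should start. A standard workaround is a two-pass scheme: first count the records in each input split, compute a cumulative starting offset per split, then number each split's records independently from its offset. The sketch below shows only the offset arithmetic in plain Python (the partition lists stand in for input splits; `number_records` is a hypothetical helper, not a Hadoop API):

```python
# Sketch of two-pass global line numbering over partitioned input.
# Pass 1: count records per partition and derive cumulative offsets.
# Pass 2: each partition numbers its records as offset + local index + 1,
# so partitions can be processed independently (e.g. by separate reducers).

def number_records(partitions):
    counts = [len(p) for p in partitions]   # pass 1: per-partition counts
    offsets = [0] * len(counts)             # starting offset of each partition
    for i in range(1, len(counts)):
        offsets[i] = offsets[i - 1] + counts[i - 1]
    result = []
    for part, base in zip(partitions, offsets):
        for j, rec in enumerate(part):      # pass 2: local numbering + offset
            result.append(f"{base + j + 1} {rec}")
    return result

parts = [["this is line one", "this is line two"],
         ["this is line three"],
         ["this is line four"]]
print(number_records(parts))
```

In Hadoop terms, pass 1 could be a lightweight counting job (or use of counters per split), and pass 2 the numbering job; Spark's `zipWithIndex` applies essentially this technique internally.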

I agree with pap, no need for Hadoop here. Check the command nl: it adds a line number before each line of the file. Just store the output in a new file.

$ cat testFile
line1
line2
line3

$ nl testFile
   1   line1
   2   line2
   3   line3

Not sure you would gain anything from Hadoop for such a trivial operation, compared to simply opening the file, reading it line by line, and writing each line to a new file with the sequence/serial number prepended.
