
Adding serial number to each of the records in a text file using Hadoop MapReduce

I'm looking for a solution to a simple problem. Say I have a huge 10 GB text file with records delimited by '\n', and I provide the file as input to the Hadoop framework. The output should be a file that maintains the same order of records as the input file, but with a serial number in front of every record.

For example,

if I have an input text file, say:

this is line one
this is line two
this is line three
this is line four
-----------------
-----------------

the output file should be:

1 this is line one
2 this is line two
3 this is line three
4 this is line four
------------------
-----------------

Edit: Say that instead of a 10 GB file I have a 10 TB file; what could be done then? Instead of handling the file the Hadoop way, what other approach would get this done faster?

Moreover, I also want to use multiple reducers, not a single reducer.

I agree with pap: there's no need for Hadoop here. Check out the nl command; it adds the line number before each line of the file. Just store the output in a new file, as shown after the example below.

$ cat testFile
line1
line2
line3

$ nl testFile
   1   line1
   2   line2
   3   line3
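
To store the numbered output in a new file, you can simply redirect it (the file name numberedFile here is just a placeholder):

$ nl testFile > numberedFile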

Not sure you would gain anything from Hadoop for such a trivial operation, compared to simply opening the file, reading it line by line, and storing each line in a new file with the sequence/serial number prepended.
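
A minimal sketch of that plain single-process approach might look like the following (the file names input.txt and output.txt are just placeholders, not anything from the original question):

import java.io.*;

// Reads input.txt line by line and writes each line to output.txt
// with a running serial number prepended, matching the format in the question.
public class NumberLines {
    public static void main(String[] args) throws IOException {
        try (BufferedReader in = new BufferedReader(new FileReader("input.txt"));
             PrintWriter out = new PrintWriter(new BufferedWriter(new FileWriter("output.txt")))) {
            String line;
            long serial = 1;
            while ((line = in.readLine()) != null) {
                // Prepend the serial number, then move on to the next record.
                out.println(serial++ + " " + line);
            }
        }
    }
}

Even for a very large file this runs as a single sequential pass over the data, so it is I/O-bound rather than CPU-bound; that is why the framework overhead of Hadoop is unlikely to pay off for this particular task.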
