
Understanding Map-Reduce

So this has always confused me. I'm not sure exactly how map-reduce works and I seem to get lost in the exact chain of events.

My understanding:

  1. The master chunks up the input files and hands them to mappers as (K1, V1) pairs.
  2. Mappers take those chunks, perform Map(K1,V1) -> (K2,V2), and output this data into individual files.
  3. THIS IS WHERE I'M LOST.
    1. So do these individual files get combined somehow? What if keys are repeated in each file?
    2. Who does this combining? Is it the master? If all the files go into the master at this step, won't there be a massive bottleneck? Does it all get combined into one file? Are the files re-chunked and handed to the reducers now?
    3. Or, if all the files go directly to the reducers instead, what happens with the repeated K3's in the (K3, V3) files at the end of the process? How are they combined? Is there another map-reduce phase? And if so, do we need to create new operations: Map(K3,V3)->(K4,V4), Reduce(K4,V4)->(K3,V3)?

To sum up, I just don't get how the files are being re-combined properly, and it's causing my map-reduce logic to fail.

Step 3 is called the "shuffle". It's one of the main value-adds of map reduce frameworks, although it's also very expensive for large datasets. The framework does something akin to a GROUP BY operation on the complete set of records output by all the mappers, and then reducers are called with each group of records. To answer your individual questions for 3:

3.1. Imagine that your job is configured to have r total reducers. The framework carves up every one of the map output files into r pieces and sends each piece to one reducer task. With m total mappers, that is mr little slices flying around. When a particular reducer has received all the slices it needs, it merges them all together and sorts the result by the K2 key, and then groups records on the fly by that key for individual calls to reduce(). If there are duplicate K2 keys, the group will be larger than a singleton. In fact, that is the point. If your mappers don't ever output identical keys, then your algorithm should not even need a reduce phase and you can skip the expensive shuffle altogether.
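To make 3.1 concrete, here is a minimal sketch in plain Python (not the actual framework API; the partitioner, the toy records, and the word-count-style reduce are all assumptions for illustration). Each mapper's output is carved into r slices by hashing the key, each reducer collects its slices, sorts and groups them by key, and reduce() is called once per distinct key:

```python
from collections import defaultdict
from itertools import groupby
from operator import itemgetter

R = 2  # number of reducer tasks (r)

def partition(key):
    # Hypothetical hash partitioner: decides which reducer a key belongs to.
    return hash(key) % R

# Hypothetical output of m = 2 mappers: lists of (K2, V2) records.
mapper_outputs = [
    [("apple", 1), ("banana", 1), ("apple", 1)],
    [("banana", 1), ("cherry", 1)],
]

# Shuffle: carve every mapper's output into R slices, one per reducer.
slices = defaultdict(list)                       # reducer_id -> records
for records in mapper_outputs:
    for key, value in records:
        slices[partition(key)].append((key, value))

def reduce_fn(key, values):
    return key, sum(values)                      # word-count-style reduce

# Reducer side: sort the received records by key, group, call reduce per group.
for reducer_id in range(R):
    received = sorted(slices[reducer_id], key=itemgetter(0))
    for key, group in groupby(received, key=itemgetter(0)):
        print(reducer_id, reduce_fn(key, [v for _, v in group]))
```

Note how the duplicate keys ("apple", "banana") end up in the same group on the same reducer; that grouping is exactly what the shuffle buys you.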

3.2. The load of doing all that data movement is spread across the whole cluster, because each reducer task knows which outputs it wants and asks for them from each mapper. The only thing the master node has to do is coordinate, i.e., tell each reducer when to start pulling mapper outputs, watch for dead nodes, and keep track of everyone's progress.

3.3. Reducer output is not examined by the framework or combined in any way. However many reducer tasks you have (r), that's how many output files with (K3, V3) records in them you will get. If you need that combined again, run another job on that output.
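To illustrate 3.3, the toy sketch below (plain Python again; the toy_job helper and sample data are made up, not anything from Hadoop) shows that a job with two reducers leaves two independent output lists that the framework never merges; if you need one combined result, you run a second job over that output, here with a single reducer:

```python
from collections import defaultdict

def toy_job(records, reduce_fn, num_reducers):
    """Partition by key hash, then reduce each partition independently."""
    parts = [defaultdict(list) for _ in range(num_reducers)]
    for k, v in records:
        parts[hash(k) % num_reducers][k].append(v)
    # One "output file" per reducer; the framework never combines these.
    return [sorted((k, reduce_fn(vs)) for k, vs in p.items()) for p in parts]

data = [("a", 1), ("b", 1), ("a", 1), ("c", 1)]
job1 = toy_job(data, sum, num_reducers=2)
print(job1)   # e.g. [[('a', 2), ('c', 1)], [('b', 1)]] -- two files, not one

# "Run another job on that output": feed every record from job 1's files
# into a second job with a single reducer to get one combined file.
job2 = toy_job([rec for part in job1 for rec in part], sum, num_reducers=1)
print(job2)   # [[('a', 2), ('b', 1), ('c', 1)]]
```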

Before reading this answer, please take some time to read about merge sort (a divide-and-conquer approach).

Here is the complete set of actions performed behind the scenes by the framework:

  1. The client submits a MapReduce job. While the job submission is happening:

    • FileInputFormat decides how to divide the files into splits (a split = one or more HDFS blocks, depending on your split size).
  2. The JobTracker figures out where the splits are located and spawns mappers close to them; the locality priority is (1) data-local, (2) rack-local, (3) a network hop away.

  3. Mappers read the data (via the RecordReaders provided by FileInputFormat) and produce the intermediate (K2, V2) records.

  4. This map output is saved to the local filesystem of the node where the mapper is running. The trick here is that the data saved on the local filesystem is SORTED and stored in partitions (one partition per reducer).

    • Before saving to disk, if a combiner is enabled, the values for a given key are merged according to the combiner logic (usually the same as the reduce logic); the data is then sorted and saved to disk. A sketch covering this combiner step and the reducer-side merge appears after these steps.

  5. Each reducer pulls its corresponding partition from every mapper (don't forget that all the data pulled by the reducer is already sorted), for example:

{
  k1 -> v1
  k1 -> v2
  k2 -> v3
}

The reducer opens file pointers to all the sorted files pulled from the mappers and merges them (the grouping and sorting comparators are used while merging). Because the merge reads from already-sorted files, the output of the reducer is itself sorted and is saved to HDFS.

This step is very similar to the "merge" step of merge sort.
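To tie steps 4 and 5 together, here is a minimal Python sketch (not the framework's actual code; the spill helper, the sample records, and the per-key sum are illustrative assumptions). Each mapper sorts the records of one partition, optionally running a combiner that pre-merges values per key, and the reducer then does a streaming k-way merge of those already-sorted spill files, grouping equal keys as they come out, just like the merge step of merge sort:

```python
import heapq
from collections import Counter
from itertools import groupby
from operator import itemgetter

# Step 4: a mapper sorts its output for one partition; if a combiner is
# enabled, values for the same key are pre-merged (here: summed) first.
def spill(mapper_records):
    combined = Counter()
    for key, value in mapper_records:
        combined[key] += value
    return sorted(combined.items())              # sorted (key, value) pairs on local disk

spills = [
    spill([("k1", 1), ("k2", 1), ("k1", 1)]),    # mapper 1 -> [("k1", 2), ("k2", 1)]
    spill([("k1", 1), ("k3", 1)]),               # mapper 2 -> [("k1", 1), ("k3", 1)]
]

# Step 5: the reducer streams the sorted spills through a k-way merge
# and groups equal keys as they appear, like merge sort's merge step.
merged = heapq.merge(*spills, key=itemgetter(0))
for key, group in groupby(merged, key=itemgetter(0)):
    print(key, sum(v for _, v in group))         # k1 3 / k2 1 / k3 1
```

Because every spill is already sorted, the reducer never has to buffer the whole dataset; it just advances a pointer into each file, which is why this stage scales.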

Please go through http://bytepadding.com/big-data/map-reduce/understanding-map-reduce-the-missing-guide/ for a pictorial representation of the same
