
How to run a final 'print' statement once in a multi-step map-reduce program?

I am basically trying to implement a recommender system by scaling it up on Hadoop.

In the first step, I am trying to calculate the similarity between every pair of items in the input file. If I store it simply as

{Item A,Item B,Similarity}

the output file size becomes very large (for a 60 KB input I am getting a 6 MB output file).

Therefore I wondered whether it would be better to store the result in a Python dict and print the dict ONLY ONCE at the end of the entire map-reduce program. I have not been able to do this; please help me.

My Python code is:

#!/usr/bin/env python
from mrjob.job import MRJob
from math import sqrt

from itertools import combinations

PRIOR_COUNT = 10

PRIOR_CORRELATION = 0

prefs = {}

def correlation(size, dot_product, rating_sum, \
        rating2sum, rating_norm_squared, rating2_norm_squared):
    '''
    The correlation between two vectors A, B is
        [n * dotProduct(A, B) - sum(A) * sum(B)] /
        sqrt{ [n * norm(A)^2 - sum(A)^2] [n * norm(B)^2 - sum(B)^2] }
    '''
    numerator = size * dot_product - rating_sum * rating2sum
    denominator = sqrt(size * rating_norm_squared - rating_sum * rating_sum) * \
            sqrt(size * rating2_norm_squared - rating2sum * rating2sum)

    return (numerator / (float(denominator))) if denominator else 0.0


def regularized_correlation(size, dot_product, rating_sum, \
        rating2sum, rating_norm_squared, rating2_norm_squared,
        virtual_cont, prior_correlation):
    '''
    The Regularized Correlation between two vectors A, B

    RegularizedCorrelation = w * ActualCorrelation + (1 - w) * PriorCorrelation
        where w = # actualPairs / (# actualPairs + # virtualPairs).
    '''
    unregularizedCorrelation = correlation(size, dot_product, rating_sum, \
            rating2sum, rating_norm_squared, rating2_norm_squared)

    w = size / float(size + virtual_cont)

    return w * unregularizedCorrelation + (1.0 - w) * prior_correlation

class SemicolonValueProtocol(object):

    # don't need to implement read() since we aren't using it

    def write(self, key, values):
        return ';'.join(str(v) for v in values)

class BooksSimilarities(MRJob):

    # OUTPUT_PROTOCOL = SemicolonValueProtocol

    def steps(self):
        return [
            self.mr(mapper=self.group_by_user_rating,
                    reducer=self.count_ratings_users_freq),
            self.mr(mapper=self.pairwise_items,
                    reducer=self.calculate_similarity),
            self.mr(mapper=self.calculate_ranking,
                    reducer=self.top_similar_items)]

    def group_by_user_rating(self, key, line):
        '''
        Emit the user_id and group by their ratings (item and rating)

        17  70,3
        35  21,1
        49  19,2
        49  21,1
        49  70,4
        87  19,1
        87  21,2
        98  19,2
        '''
        line = line.replace("\"", "")
        user_id, item_id, rating = line.split(',')

        yield user_id, (item_id, float(rating))

    def count_ratings_users_freq(self, user_id, values):
        '''
        For each user, emit a row containing their "postings"
        (item,rating pairs).
        Also emit the user's rating sum and count for use in later steps.

        17    1,3,(70,3)
        35    1,1,(21,1)
        49    3,7,(19,2 21,1 70,4)
        87    2,3,(19,1 21,2)
        98    1,2,(19,2)
        '''
        item_count = 0
        item_sum = 0
        final = []
        for item_id, rating in values:
            item_count += 1
            item_sum += rating
            final.append((item_id, rating))

        yield user_id, (item_count, item_sum, final)

    def pairwise_items(self, user_id, values):
        '''
        The output drops the user from the key entirely; instead it emits
        the pair of items as the key:

        19,21  2,1
        19,70  2,4
        21,70  1,4
        19,21  1,2
        '''
        item_count, item_sum, ratings = values
        for item1, item2 in combinations(ratings, 2):
            yield (item1[0], item2[0]), (item1[1], item2[1])

    def calculate_similarity(self, pair_key, lines):
        '''
        Sum components of each corating pair across all users who rated both
        item x and item y, then calculate pairwise pearson similarity and
        corating counts.  The similarities are normalized to the [0,1] scale
        because we do a numerical sort.

        19,21   0.4,2
        21,19   0.4,2
        19,70   0.6,1
        70,19   0.6,1
        21,70   0.1,1
        70,21   0.1,1
        '''
        sum_xx, sum_xy, sum_yy, sum_x, sum_y, n = (0.0, 0.0, 0.0, 0.0, 0.0, 0)
        item_pair, co_ratings = pair_key, lines
        item_xname, item_yname = item_pair
        for item_x, item_y in lines:
            sum_xy += item_x * item_y
            sum_y += item_y
            sum_x += item_x
            sum_xx += item_x * item_x
            sum_yy += item_y * item_y
            n += 1

        reg_corr_sim = regularized_correlation(n, sum_xy, sum_x, \
                sum_y, sum_xx, sum_yy, PRIOR_COUNT, PRIOR_CORRELATION)

        yield (item_xname, item_yname), (reg_corr_sim, n)

    def calculate_ranking(self, item_keys, values):
        '''
        Emit items with similarity in key for ranking:

        19,0.4    70,1
        19,0.6    21,2
        21,0.6    19,2
        21,0.9    70,1
        70,0.4    19,1
        70,0.9    21,1
        '''
        reg_corr_sim, n = values
        item_x, item_y = item_keys
        if int(n) > 0:
            yield (item_x, reg_corr_sim), (item_y, n)

    def top_similar_items(self, key_sim, similar_ns):
        '''
        For each item emit K closest items in comma separated file:

        De La Soul;A Tribe Called Quest;0.6;1
        De La Soul;2Pac;0.4;2
        '''
        item_x, reg_corr_sim = key_sim
        for item_y, n in similar_ns:
            # yield None, (item_x, item_y, reg_corr_sim, n)
            prefs.setdefault(item_x, {})
            prefs[item_x][item_y] = float(reg_corr_sim)
            prefs.setdefault(item_y, {})
            prefs[item_y][item_x] = float(reg_corr_sim)
        print "exiting"


if __name__ == '__main__':
    BooksSimilarities.run()

So what I want after executing

python thisfile.py < input.csv -r hadoop > output.txt

is a relatively small output file with no repetitions and a single dict.

In short,

Currently this program prints "exiting" n times, but I want it to print only ONCE.

Apart from all this, is there any better way to implement collaborative filtering by scaling it up on Hadoop?

Thanks a ton in advance.

You only have the guarantee that values with the same key will go to the same reducer. So if you are running multiple reducers on your cluster, the work is divided and you will get many "exiting" lines as the reducers run to complete the task across all your keys.
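For illustration (my own sketch, not part of this answer): that same guarantee gives a way to end up with a single final output. If the commented-out yield None, (item_x, item_y, reg_corr_sim, n) in top_similar_items is restored (and the per-reducer print dropped), one extra step can re-key every record to a single constant key, so the whole output lands in one reducer call that builds the dict and emits it once. The names regroup_all and build_prefs below are made up for this sketch.

    # appended as a fourth element of the list returned by steps():
    #     self.mr(mapper=self.regroup_all, reducer=self.build_prefs)

    def regroup_all(self, key, value):
        # constant key: every similarity record meets in a single reducer call
        yield 'ALL', value

    def build_prefs(self, _, values):
        prefs_once = {}
        for item_x, item_y, sim, n in values:
            prefs_once.setdefault(item_x, {})[item_y] = float(sim)
            prefs_once.setdefault(item_y, {})[item_x] = float(sim)
        yield 'prefs', prefs_once  # emitted exactly once for the whole job

The trade-off is that this final step runs on a single reducer, which is fine for the aggregated pair list here but would not scale to a huge intermediate output.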

Try to run it locally and validate that it works: python thisfile.py < input.csv > output.txt

Maybe you can define a "reducer_final" in your steps() to collect all of the last step's reducer output and handle it however you want (see the sketch below the link).

Check: http://pythonhosted.org/mrjob/job.html#mrjob.job.MRJob.steps
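A minimal sketch of that suggestion, assuming mrjob's reducer_final step keyword (the method name emit_prefs is made up here): keep filling the module-level prefs dict in top_similar_items, drop the print there, and attach a final hook to the last step. Bear in mind that reducer_final runs once per reducer task after that task has processed all of its keys, so on a cluster with several reducers you would still get one dict per task rather than one per job; a local run or a single-reducer job fires it exactly once.

    # inside the BooksSimilarities class (sketch only):
    def steps(self):
        return [
            self.mr(mapper=self.group_by_user_rating,
                    reducer=self.count_ratings_users_freq),
            self.mr(mapper=self.pairwise_items,
                    reducer=self.calculate_similarity),
            self.mr(mapper=self.calculate_ranking,
                    reducer=self.top_similar_items,
                    reducer_final=self.emit_prefs)]

    def emit_prefs(self):
        # runs after this reducer task has seen all of its keys
        yield 'prefs', prefs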

Kind regards,
