
Unordered set or similar in Spark?

I have data of this format:

(123456, (43, 4861))

(000456, (43, 4861))

where the first term is the point id and the second term is a pair of cluster-centroid ids. So this says that point 123456 is assigned to clusters 43 and 4861.

What I am trying to do is to create data of this format:

(43, [123456, 000456])

(4861, [123456, 000456])

where the idea is that every centroid has a list of the points assigned to it. Each list must contain at most 150 points.

Is there anything I could use in Spark or in Python which would make my life easier?


I do not care about fast access or ordering. I have 100M points and 16k centroids.
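To make the target shape concrete, here is the inversion I have in mind, written as plain Python over a tiny hard-coded sample (with 000456 written as 456, since an integer literal with leading zeros is not valid Python); the real data of course lives in an RDD:

from collections import defaultdict

# Purely illustrative: invert (point_id, (centroid_a, centroid_b)) records
# into centroid -> list of point ids, keeping at most 150 points per centroid.
records = [(123456, (43, 4861)), (456, (43, 4861))]

by_centroid = defaultdict(list)
for point_id, (c1, c2) in records:
    for centroid in (c1, c2):
        if len(by_centroid[centroid]) < 150:
            by_centroid[centroid].append(point_id)

print(dict(by_centroid))  # {43: [123456, 456], 4861: [123456, 456]}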


Here is some artificial data that I use to play with:

from random import randint

# 10 fake records of the form (point_id, (centroid_a, centroid_b))
data = []
for i in xrange(0, 10):
    data.append((randint(0, 100000000), (randint(0, 16000), randint(0, 16000))))
data = sc.parallelize(data)  # sc is the SparkContext

Judging from what you described (although I still don't quite get it), here is a naive approach using Python:

In [1]: from itertools import groupby

In [2]: from random import randint

In [3]: data = []  # create random samples as you did
   ...: for i in range(10):
   ...:     data.append((randint(0, 100000000), (randint(0, 16000), randint(0, 16000))))
   ...:

In [4]: result = []  # create an intermediate list to transform your sample
   ...: for point_id, cluster in data:
   ...:     for index, c in enumerate(cluster):
   ...:         # I made it up following your pattern
   ...:         result.append((c, [point_id, str(index * 100).zfill(3) + str(point_id)[-3:]]))
   ...: # sort by point_id (x[1][0]) so groupby can see consecutive records per point
   ...: result = sorted(result, key=lambda x: x[1][0])
   ...:

In [5]: result[:3]
Out[5]:
[(4020, [5002188, '000188']),
 (10983, [5002188, '100188']),
 (10800, [24763401, '000401'])]

In [6]: capped_result = []
   ...: # group consecutive records with the same point_id and cap each group at 150
   ...: for _, g in groupby(result, key=lambda x: x[1][0]):
   ...:     grouped = list(g)[:150]
   ...:     capped_result.extend(grouped)
   ...: # the final result looks like this
   ...: print(capped_result)
   ...:
[(4020, [5002188, '000188']), (10983, [5002188, '100188']), (10800, [24763401, '000401']), (12965, [24763401, '100401']), (6369, [24924435, '000435']), (429, [24924435, '100435']), (7666, [39240078, '000078']), (2526, [39240078, '100078']), (5260, [47597265, '000265']), (7056, [47597265, '100265']), (2824, [60159219, '000219']), (5730, [60159219, '100219']), (7837, [67208338, '000338']), (12475, [67208338, '100338']), (4897, [80084812, '000812']), (13038, [80084812, '100812']), (2944, [80253323, '000323']), (1922, [80253323, '100323']), (12777, [96811112, '000112']), (5463, [96811112, '100112'])]

Of course this isn't optimised at all, but it should give you a head start on how to tackle this problem. I hope this helps.
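If you want to do the same thing inside Spark rather than in plain Python, one rough, untested sketch (assuming `data` is the RDD you built with `sc.parallelize`, and with the helper names `add_point`, `merge_lists` and `per_centroid` just as placeholders) is to emit one (centroid, point_id) pair per assignment with `flatMap` and then build the capped lists with `aggregateByKey`:

def add_point(points, point_id):
    # seqOp: append a point to a centroid's list unless it already has 150 entries
    if len(points) < 150:
        points.append(point_id)
    return points

def merge_lists(left, right):
    # combOp: merge partial lists from different partitions, re-applying the cap
    return (left + right)[:150]

per_centroid = (
    data
    .flatMap(lambda kv: [(kv[1][0], kv[0]), (kv[1][1], kv[0])])  # (centroid, point_id)
    .aggregateByKey([], add_point, merge_lists)
)

print(per_centroid.take(3))

The advantage over a plain `groupByKey` followed by a cut is that the per-centroid lists are capped at 150 while partial results are being combined, so they never grow to the full number of assigned points during the shuffle.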
