
Python 2.7: create dictionary from list of sets

After performing some operations I get a list of FreqItemsets like the following:

from pyspark.mllib.fpm import FPGrowth

FreqItemset(items=[u'A_String_0'], freq=303)
FreqItemset(items=[u'A_String_0', u'Another_String_1'], freq=302)
FreqItemset(items=[u'B_String_1', u'A_String_0', u'A_OtherString_1'], freq=301)

From this list I'd like to create:

  1. RDD

  2. Dictionary , for example :

     key: A_String_0                             value: 303
     key: A_String_0,Another_String_1            value: 302
     key: B_String_1,A_String_0,A_OtherString_1  value: 301

I'd like to continue with these calculations to produce confidence and lift.

I tried executing a for loop to get each item from the list.

My question is whether there is a better way to create the RDD and/or the dictionary here.

Thank you in advance.

  1. If you want an RDD, simply don't collect freqItemsets:

     model = FPGrowth.train(transactions, minSupport=0.2, numPartitions=10)
     freqItemsets = model.freqItemsets()

    You can of course parallelize again after collecting:

     result = model.freqItemsets().collect()
     sc.parallelize(result)

  2. I am not sure why you need this (it looks like an XY problem), but you can use comprehensions on the collected data:

     {tuple(x.items): x.freq for x in result} 

    or

     {",".join(x.items): x.freq for x in result} 

Generally speaking, if you want to apply further transformations to your data, don't collect; process the data directly in Spark.
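Since the question mentions confidence and lift, here is a hedged plain-Python sketch of how they could be derived from an itemset-to-frequency mapping like the one above; the antecedent count and transaction total below are hypothetical numbers invented purely for illustration:

```python
# Sketch: confidence and lift from an itemset -> frequency mapping.
# The counts below are hypothetical, invented for this example.
freq = {
    ("A_String_0",): 303,
    ("Another_String_1",): 310,               # hypothetical antecedent count
    ("A_String_0", "Another_String_1"): 302,
}
n_transactions = 1000                         # hypothetical total

def confidence(antecedent, consequent):
    # conf(X -> Y) = freq(X union Y) / freq(X)
    union = tuple(sorted(set(antecedent) | set(consequent)))
    return freq[union] / float(freq[tuple(sorted(antecedent))])

def lift(antecedent, consequent):
    # lift(X -> Y) = conf(X -> Y) / support(Y)
    support_y = freq[tuple(sorted(consequent))] / float(n_transactions)
    return confidence(antecedent, consequent) / support_y
```

Note the keys are sorted tuples so that a rule's antecedent and consequent can be recombined into the same key under which the union itemset was stored.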

Also, you should take a look at the Scala API; it already implements association rules.
