python 2.7: create dictionary from list of sets
After performing some operations I get a list of sets as follows:
from pyspark.mllib.fpm import FPGrowth
FreqItemset(items=[u'A_String_0'], freq=303)
FreqItemset(items=[u'A_String_0', u'Another_String_1'], freq=302)
FreqItemset(items=[u'B_String_1', u'A_String_0', u'A_OtherString_1'], freq=301)
From this list I'd like to create:

- an RDD
- a dictionary, for example:
key: A_String_0 value: 303
key: A_String_0,Another_String_1 value: 302
key: B_String_1,A_String_0,A_OtherString_1 value: 301
I'd like to continue with calculations to produce Confidence and Lift.
I tried executing a for loop to get each item from the list.
The question is whether there is another, better way to create the RDD and/or dictionaries here.
Thank you in advance.
If you want an RDD, simply don't collect freqItemsets:
model = FPGrowth.train(transactions, minSupport=0.2, numPartitions=10)
freqItemsets = model.freqItemsets()
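Staying in Spark, you would transform that RDD rather than collect it, e.g. `freqItemsets.map(to_pair)` to get a pair RDD of (itemset, frequency). Below is a minimal local sketch of the mapping function, using a `namedtuple` stand-in for `FreqItemset` since no cluster is assumed here:

```python
from collections import namedtuple

# Local stand-in for pyspark.mllib.fpm's FreqItemset result objects;
# no Spark cluster is assumed in this sketch.
FreqItemset = namedtuple("FreqItemset", ["items", "freq"])

def to_pair(fi):
    # The same function could be passed to freqItemsets.map(to_pair)
    # to obtain a pair RDD of (itemset, frequency) without collecting.
    return (tuple(fi.items), fi.freq)

sample = [
    FreqItemset(items=[u'A_String_0'], freq=303),
    FreqItemset(items=[u'A_String_0', u'Another_String_1'], freq=302),
]

pairs = [to_pair(fi) for fi in sample]  # locally; .map(to_pair) on the RDD
print(pairs)
```

A pair RDD built this way can feed further Spark operations (`join`, `lookup`, etc.) directly, which is exactly the "don't collect" advice above.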
You can of course parallelize the collected result:

result = model.freqItemsets().collect()
sc.parallelize(result)
I am not sure why you need this (it looks like an XY problem), but you can use comprehensions on the collected data:
{tuple(x.items): x.freq for x in result}
or
{",".join(x.items): x.freq for x in result}
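As a quick local illustration (outside Spark), using a `namedtuple` stand-in for the collected `FreqItemset` objects and the counts from the question, the second comprehension yields exactly the dictionary asked for:

```python
from collections import namedtuple

# Stand-in for the collected FreqItemset objects; values from the question.
FreqItemset = namedtuple("FreqItemset", ["items", "freq"])

result = [
    FreqItemset(items=[u'A_String_0'], freq=303),
    FreqItemset(items=[u'A_String_0', u'Another_String_1'], freq=302),
    FreqItemset(items=[u'B_String_1', u'A_String_0', u'A_OtherString_1'], freq=301),
]

# Tuple keys preserve the itemset structure; string keys match the
# "A_String_0,Another_String_1" format shown in the question.
by_tuple = {tuple(x.items): x.freq for x in result}
by_string = {",".join(x.items): x.freq for x in result}

print(by_string)
```

Tuple keys are usually the safer choice for later lookups, since joined strings become ambiguous if an item ever contains a comma.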
Generally speaking, if you want to apply further transformations to your data, don't collect; process the data directly in Spark.
You should also take a look at the Scala API; it already implements association rules.
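For the Confidence and Lift computation the question mentions, a frequency dictionary like the one above is enough. A hedged sketch follows: the singleton count for `Another_String_1` and the total transaction count `N` are hypothetical assumptions; only the other counts come from the question.

```python
# Confidence and lift from an itemset-frequency dictionary.
freq = {
    ("A_String_0",): 303,
    ("Another_String_1",): 150,                   # assumed count
    ("A_String_0", "Another_String_1"): 302,
}
N = 1000  # assumed total number of transactions

def confidence(antecedent, consequent, freq):
    # conf(X -> Y) = freq(X union Y) / freq(X)
    union = tuple(sorted(set(antecedent) | set(consequent)))
    return freq[union] / float(freq[tuple(sorted(antecedent))])

def lift(antecedent, consequent, freq, n):
    # lift(X -> Y) = conf(X -> Y) / support(Y)
    supp_y = freq[tuple(sorted(consequent))] / float(n)
    return confidence(antecedent, consequent, freq) / supp_y

c = confidence(("A_String_0",), ("Another_String_1",), freq)
l = lift(("A_String_0",), ("Another_String_1",), freq, N)
print(c, l)
```

Sorting the keys makes the lookup order-insensitive, matching the fact that itemsets are sets even though FPGrowth reports them as lists.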