TypeError: 'GroupedData' object is not iterable in pyspark dataframe
I am using Spark 2.0.1 with Python 2.7, and I am running the following code:
import numpy as np
from pyspark.sql.functions import monotonically_increasing_id

# This will return a new DF with all the columns + id
data1 = data.withColumn("id", monotonically_increasing_id())  # Create an integer index
data1.show()

def create_indexes(df,
                   fields=['country', 'state_id', 'airport', 'airport_id']):
    """ Create indexes for the different element ids
    for CMRs. This allows us to select CMRs that match
    a given element and element value very quickly.
    """
    if fields == None:
        print("No fields specified, returning")
        return
    for field in fields:
        if field not in df.columns:
            print('field: ', field, " is not in the data...")
            return
    indexes = {}
    for field in fields:
        print(field)
        res = df.groupby(field)
        index = {label: np.array(vals['id'], np.int32) for label, vals in res}
        indexes[field] = index
    return indexes

# Create indexes. Some of them take a lot of time!
# Changed dom_client_id by gbl_buy_grp_id as it was changed in Line Number
indexes = create_indexes(data1, fields=['country', 'state_id', 'airport', 'airport_id'])
print type(indexes)
When I run this code, I get the following error message:
TypeError: 'GroupedData' object is not iterable
Could you help me solve this problem?
You must aggregate the GroupedData and collect the results before you can iterate over them, for example by counting the items in each group: res = df.groupby(field).count().collect()