How to eliminate row and column name values from the dataframe result in pyspark?
Hi, I am loading a CSV file into a data frame and running a filter operation on it, and I get output like this:
[Row(table_name=u'DEMO', rec_count=u'170049', col_count=u'36')]
How can I get output like the following instead?
`['DEMO','170049','36']`
I tried unicode decoding, and I can use a for loop to iterate over the data, but the data is dynamic: sometimes I get more than three values. I want to automate the process, but I am unable to get the data in the form shown above.
You have a list whose element is a Row object. You could use a keys list to define the columns and the order you need in the result, then extract them from the Row object with a list comprehension:
from pyspark.sql import Row

# this is what you have now
x = [Row(table_name=u'DEMO', rec_count=u'170049', col_count=u'36')]

# pick the fields you want, in the order you want them
keys = ['table_name', 'rec_count', 'col_count']
[x[0][key] for key in keys]
# [u'DEMO', u'170049', u'36']
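Since you say the number of columns varies, note that pyspark's Row subclasses tuple, so you can convert a row to a plain list with list(row) without hardcoding any key names. Below is a minimal sketch of that dynamic approach; it models Row with collections.namedtuple (which has the same tuple-like behavior) purely so the snippet runs without a Spark installation:

```python
from collections import namedtuple

# Stand-in for pyspark.sql.Row: both are tuple subclasses with named
# fields, so list(row) works the same way on either.
Row = namedtuple('Row', ['table_name', 'rec_count', 'col_count'])

result = [Row(table_name=u'DEMO', rec_count=u'170049', col_count=u'36')]

# list() yields every value in field order, however many columns the
# row happens to have -- no hardcoded keys list needed.
values = [list(row) for row in result]
print(values[0])  # ['DEMO', '170049', '36']
```

With real pyspark rows you can also call row.asDict() when you need the column names alongside the values.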