Pyspark - Calculate number of null values in each dataframe column
I have a dataframe with many columns. My aim is to produce a dataframe that lists each column name, along with the number of null values in that column.
Example:
+-------------+-------------+
| Column_Name | NULL_Values |
+-------------+-------------+
| Column_1 | 15 |
| Column_2 | 56 |
| Column_3 | 18 |
| ... | ... |
+-------------+-------------+
I have managed to get the number of null values for ONE column like so:
df.agg(F.count(F.when(F.isnull(c), c)).alias('NULL_Count'))
where c is a column in the dataframe. However, it does not show the name of the column. The output is:
+------------+
| NULL_Count |
+------------+
| 15 |
+------------+
Any ideas?
You can use a list comprehension to loop over all of your columns in the agg, and use alias to rename the output column:
import pyspark.sql.functions as F
df_agg = df.agg(*[F.count(F.when(F.isnull(c), c)).alias(c) for c in df.columns])
However, this will return the results in one row as shown below:
df_agg.show()
#+--------+--------+--------+
#|Column_1|Column_2|Column_3|
#+--------+--------+--------+
#| 15| 56| 18|
#+--------+--------+--------+
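As a side note, a minimal equivalent sketch (df_agg_alt is just an illustrative name): cast the isnull flag to an integer and sum it per column, which gives the same one-row result and is the same idea the Scala answer further down uses:

import pyspark.sql.functions as F
# isnull -> 1/0 per cell, summed per column; same one-row result as df_agg above
df_agg_alt = df.agg(*[F.sum(F.isnull(c).cast("int")).alias(c) for c in df.columns])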
If you wanted the results in one column instead, you could union each column from df_agg using functools.reduce as follows:
from functools import reduce
df_agg_col = reduce(
    lambda a, b: a.union(b),
    (
        df_agg.select(F.lit(c).alias("Column_Name"), F.col(c).alias("NULL_Count"))
        for c in df_agg.columns
    )
)
df_agg_col.show()
#+-----------+----------+
#|Column_Name|NULL_Count|
#+-----------+----------+
#| Column_1| 15|
#| Column_2| 56|
#| Column_3| 18|
#+-----------+----------+
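If you'd rather avoid the union, one sketch (stack_expr is an illustrative name for an expression string built from df_agg's column names) unpivots the single row of df_agg with Spark SQL's stack generator in one select:

# stack(n, 'name1', val1, 'name2', val2, ...) turns the single row into n (name, value) rows
stack_expr = ", ".join("'{0}', `{0}`".format(c) for c in df_agg.columns)
df_agg_col = df_agg.selectExpr(
    "stack({0}, {1}) as (Column_Name, NULL_Count)".format(len(df_agg.columns), stack_expr)
)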
Or you can skip the intermediate step of creating df_agg and do:
df_agg_col = reduce(
    lambda a, b: a.union(b),
    (
        df.agg(
            F.count(F.when(F.isnull(c), c)).alias('NULL_Count')
        ).select(F.lit(c).alias("Column_Name"), "NULL_Count")
        for c in df.columns
    )
)
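Yet another sketch, assuming the per-column counts fit on the driver and that spark is your SparkSession: compute the one-row df_agg as above, collect it, and rebuild the two-column result locally without any union:

# Pull the single aggregated row to the driver and turn its (column, count) pairs into rows
row = df_agg.collect()[0].asDict()
df_agg_col = spark.createDataFrame(list(row.items()), ["Column_Name", "NULL_Count"])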
A Scala alternative could be:
import org.apache.spark.sql.functions.{col, lit, sum}
import spark.implicits._ // needed for toDF(); already in scope in spark-shell

case class Test(id: Int, weight: Option[Int], age: Int, gender: Option[String])
val df1 = Seq(Test(1, Some(100), 23, Some("Male")), Test(2, None, 25, None), Test(3, None, 33, Some("Female"))).toDF()
df1.show()
+---+------+---+------+
| id|weight|age|gender|
+---+------+---+------+
| 1| 100| 23| Male|
| 2| null| 25| null|
| 3| null| 33|Female|
+---+------+---+------+
// 1 for each null cell, 0 otherwise, summed per column -> one row of null counts
val s = df1.columns.map(c => sum(col(c).isNull.cast("integer")).alias(c))
val df2 = df1.agg(s.head, s.tail: _*)

// one two-column DataFrame per column name, unioned into the final result
val t = df2.columns.map(c => df2.select(lit(c).alias("col_name"), col(c).alias("null_count")))
val df_agg_col = t.reduce((df1, df2) => df1.union(df2))
df_agg_col.show()
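With the three sample rows above, that final show() should print something along these lines:

+--------+----------+
|col_name|null_count|
+--------+----------+
|      id|         0|
|  weight|         2|
|     age|         0|
|  gender|         1|
+--------+----------+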