Best way to get null counts, min and max values of multiple (100+) columns from a pyspark dataframe
Suppose I have a list of column names that all exist in the dataframe,
Cols = ['A', 'B', 'C', 'D']
and I am looking for a quick way to get a table/dataframe like
  NA_counts  min  max
A         5    0  100
B        10    0  120
C         8    1   99
D         2    0  500
TIA
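
For illustration, a small DataFrame with a few nulls can be used to try the answer below; the data is purely hypothetical and assumes an active SparkSession:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical sample data with some nulls in columns A-D
df = spark.createDataFrame(
    [(1, 0, "x", 2010), (None, 5, "y", 2017), (9, None, None, 2012)],
    ["A", "B", "C", "D"],
)
cols = ["A", "B", "C", "D"]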
You can compute each metric separately and then union them all, like this (here cols is the list of column names to profile):
from pyspark.sql.functions import col, lit, when, sum, max, min

# One aggregate expression per column and metric
nulls_cols = [sum(when(col(c).isNull(), lit(1)).otherwise(lit(0))).alias(c) for c in cols]
max_cols = [max(col(c)).alias(c) for c in cols]
min_cols = [min(col(c)).alias(c) for c in cols]
# A single-row DataFrame per metric, labelled via the "count" column
nulls_df = df.select(lit("NA_counts").alias("count"), *nulls_cols)
max_df = df.select(lit("Max").alias("count"), *max_cols)
min_df = df.select(lit("Min").alias("count"), *min_cols)
# Stack the three metric rows into one result
nulls_df.unionAll(max_df).unionAll(min_df).show()
Example output:
+---------+---+---+----+----+
| count| A| B| C| D|
+---------+---+---+----+----+
|NA_counts| 1| 0| 3| 1|
| Max| 9| 5|Test|2017|
| Min| 1| 0|Test|2010|
+---------+---+---+----+----+
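
If you prefer the layout shown in the question (one row per column, with NA_counts/min/max as columns), one option is to compute all aggregates in a single pass and reshape the resulting single row with pandas. This is only a sketch: it assumes pandas is available on the driver and that the column list is small enough to collect.

import pandas as pd
import pyspark.sql.functions as F

# One aggregate row containing the null count, min and max of every column
row = df.agg(
    *[F.sum(F.when(F.col(c).isNull(), 1).otherwise(0)).alias(c + "_na") for c in cols],
    *[F.min(F.col(c)).alias(c + "_min") for c in cols],
    *[F.max(F.col(c)).alias(c + "_max") for c in cols],
).collect()[0]

# Reshape into the requested layout: one row per column name
summary = pd.DataFrame(
    {
        "NA_counts": [row[c + "_na"] for c in cols],
        "min": [row[c + "_min"] for c in cols],
        "max": [row[c + "_max"] for c in cols],
    },
    index=cols,
)
print(summary)

Doing everything in one agg means the data is scanned only once, which matters when there are 100+ columns.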