Get the distinct elements of each group by other field on a Spark 1.6 Dataframe
I'm trying to group by date in a Spark DataFrame and, for each group, count the unique values of one column:
test.json
{"name":"Yin", "address":1111111, "date":20151122045510}
{"name":"Yin", "address":1111111, "date":20151122045501}
{"name":"Yln", "address":1111111, "date":20151122045500}
{"name":"Yun", "address":1111112, "date":20151122065832}
{"name":"Yan", "address":1111113, "date":20160101003221}
{"name":"Yin", "address":1111111, "date":20160703045231}
{"name":"Yin", "address":1111114, "date":20150419134543}
{"name":"Yen", "address":1111115, "date":20151123174302}
and the code:
import pyspark.sql.functions as func
from pyspark.sql.types import TimestampType
from datetime import datetime

df_y = sqlContext.read.json("/user/test.json")
# date is read from the JSON as a long, so cast it to str before parsing
udf_dt = func.udf(lambda x: datetime.strptime(str(x), '%Y%m%d%H%M%S'), TimestampType())
df = df_y.withColumn('datetime', udf_dt(df_y.date))
# note: this groups on the raw long column, not the parsed 'datetime' column
df_g = df_y.groupby(func.hour(df_y.date))
df_g.count().distinct().show()
The result from PySpark is:
df_y.groupby(df_y.name).count().distinct().show()
+----+-----+
|name|count|
+----+-----+
| Yan| 1|
| Yun| 1|
| Yin| 4|
| Yen| 1|
| Yln| 1|
+----+-----+
whereas what I expect, as in pandas, is something like this:
df = df_y.toPandas()
df.groupby('name').address.nunique()
Out[51]:
name
Yan 1
Yen 1
Yin 2
Yln 1
Yun 1
How can I get the unique elements of each group by another field, like address?
There is a way to count the distinct elements of each group using the function countDistinct:
import pyspark.sql.functions as func
from pyspark.sql.types import TimestampType
from datetime import datetime

df_y = sqlContext.read.json("/user/test.json")
# cast the numeric date field to str before parsing it as a timestamp
udf_dt = func.udf(lambda x: datetime.strptime(str(x), '%Y%m%d%H%M%S'), TimestampType())
df = df_y.withColumn('datetime', udf_dt(df_y.date))
# count the distinct addresses within each name group
df_y.groupby(df_y.name).agg(func.countDistinct('address')).show()
+----+--------------+
|name|count(address)|
+----+--------------+
| Yan| 1|
| Yun| 1|
| Yin| 2|
| Yen| 1|
| Yln| 1|
+----+--------------+
The documentation is available [here](https://spark.apache.org/docs/1.6.0/api/java/org/apache/spark/sql/functions.html#countDistinct%28org.apache.spark.sql.Column,%20org.apache.spark.sql.Column...%29).
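As a side note, the aggregated column gets an auto-generated name like count(address). A minimal sketch of renaming it with alias (the name distinct_addresses is an illustrative choice, not from the original answer):

import pyspark.sql.functions as func
# same aggregation as above, but with a readable result column name;
# 'distinct_addresses' is an illustrative alias
df_y.groupby(df_y.name).agg(func.countDistinct('address').alias('distinct_addresses')).show()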
A concise and direct answer for grouping by field "_c1" and counting the number of distinct values in field "_c2":
import pyspark.sql.functions as F
dg = df.groupBy("_c1").agg(F.countDistinct("_c2"))
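If you need the distinct elements themselves rather than just their count, one option is collect_set. A minimal sketch, assuming the same DataFrame; note that on Spark 1.6 this aggregate is backed by Hive (so a HiveContext is needed there, while on Spark 2.0+ a plain session works):

import pyspark.sql.functions as F
# collect the set of distinct _c2 values for each _c1 group;
# 'distinct_c2' is an illustrative alias
dg_set = df.groupBy("_c1").agg(F.collect_set("_c2").alias("distinct_c2"))
dg_set.show()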