
How to get distinct value, count of a column in dataframe and store in another dataframe as (k,v) pair using Spark2 and Scala

I want to get the distinct values and their respective counts for every column of a dataframe and store them as (k,v) pairs in another dataframe. Note: my columns are not static; they keep changing. So I cannot hard-code the column names and must loop through them instead.

For Example, below is my dataframe

+----------------+-----------+------------+
|name            |country    |DOB         |
+----------------+-----------+------------+
|       Blaze    |        IND|    19950312|
|       Scarlet  |        USA|    19950313|
|       Jonas    |        CAD|    19950312|
|       Blaze    |        USA|    19950312|
|       Jonas    |        CAD|    19950312|
|       mark     |        USA|    19950313|
|       mark     |        CAD|    19950313|
|       Smith    |        USA|    19950313|
|       mark     |        UK |    19950313|
|       scarlet  |        CAD|    19950313|
+----------------+-----------+------------+

My final result should be created in a new dataframe as (k,v) where k is the distinct record and v is the count of it.

+----------------+-----------+------------+
|name            |country    |DOB         |
+----------------+-----------+------------+
|   (Blaze,2)    |   (IND,1) |(19950312,3)|
|   (Scarlet,2)  |   (USA,4) |(19950313,6)|
|   (Jonas,3)    |   (CAD,4) |            |
|   (mark,3)     |   (UK,1)  |            |
|   (smith,1)    |           |            |
+----------------+-----------+------------+

Can anyone please help me with this? I'm using Spark 2.4.0 and Scala 2.11.12.

Note: My columns are dynamic, so I can't hard-code the column names and do a groupBy on them.

I don't have an exact solution to your query, but I can offer something that should get you started on your issue.

Create dataframe

scala> val df = Seq(("Blaze  ","IND","19950312"),
     | ("Scarlet","USA","19950313"),
     | ("Jonas  ","CAD","19950312"),
     | ("Blaze  ","USA","19950312"),
     | ("Jonas  ","CAD","19950312"),
     | ("mark   ","USA","19950313"),
     | ("mark   ","CAD","19950313"),
     | ("Smith  ","USA","19950313"),
     | ("mark   ","UK ","19950313"),
     | ("scarlet","CAD","19950313")).toDF("name", "country","dob")

Next, calculate the count of each distinct value, per column. This produces one DataFrame per input column:

scala> val distCount = df.columns.map(c => df.groupBy(c).count)

Create a range to iterate over distCount

scala> val range = Range(0,distCount.size)
range: scala.collection.immutable.Range = Range(0, 1, 2)

Aggregate your data

scala> val aggVal = range.toList.map(i => distCount(i).collect().mkString).toSeq
aggVal: scala.collection.immutable.Seq[String] = List([Jonas  ,2][Smith  ,1][Scarlet,1][scarlet,1][mark   ,3][Blaze  ,2], [CAD,4][USA,4][IND,1][UK ,1], [19950313,6][19950312,4])
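A side note: the explicit Range and the index lookup can be avoided by mapping over distCount directly. This sketch is equivalent to the two steps above (it assumes the distCount array built earlier and a live Spark session):

```scala
// Equivalent to the Range-based version: map over the per-column
// DataFrames directly and flatten each one's collected rows into a string
val aggVal = distCount.map(_.collect().mkString).toSeq
```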

Create the result dataframe:

scala> Seq((aggVal(0),aggVal(1),aggVal(2))).toDF("name", "country","dob").show()

+--------------------+--------------------+--------------------+
|                name|             country|                 dob|
+--------------------+--------------------+--------------------+
|[Jonas  ,2][Smith...|[CAD,4][USA,4][IN...|[19950313,6][1995...|
+--------------------+--------------------+--------------------+

I hope this helps you in some way.
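As a follow-up, the (k,v) pairs can also be kept as separate rows instead of being concatenated into one long string per cell. Below is a minimal sketch of that idea, not a tested solution: it assumes the df and spark session from above, and since it collects every per-column count to the driver it only suits data whose distinct values fit in driver memory. It pads the shorter columns with empty strings and transposes the arrays into the layout the question shows:

```scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// One array of "(value,count)" strings per column
val pairs: Array[Array[String]] = df.columns.map { c =>
  df.groupBy(c).count().collect().map(r => s"(${r.get(0)},${r.getLong(1)})")
}

// Pad every column to the height of the tallest one
val height = pairs.map(_.length).max
val padded = pairs.map(col => col ++ Array.fill(height - col.length)(""))

// Transpose the column-wise arrays into rows and rebuild a DataFrame
val rows   = padded.transpose.map(cells => Row(cells: _*))
val schema = StructType(df.columns.map(c => StructField(c, StringType)))
val result = spark.createDataFrame(spark.sparkContext.parallelize(rows.toSeq), schema)
result.show(false)
```

The padding step is what lets columns with different numbers of distinct values share one rectangular table, matching the blanks in the desired output.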
