
Calculate TF-IDF grouped by column

How can I calculate TF-IDF grouped by a column rather than over the whole DataFrame?

Suppose a DataFrame like the one below:

private val sample = Seq(
    (1, "A B C D E"),
    (1, "B C D"),
    (1, "B C D E"),
    (2, "B C D F"),
    (2, "A B C"),
    (2, "B C E F G")
  ).toDF("id","sentences")

In the above sample, IDF should be calculated for the sentences with id = 1 by considering only the first three rows, and likewise for the sentences with id = 2 by considering only the last three rows. Is this possible with Spark ML's TF-IDF implementation?

Just a lame attempt: you could filter your sequence by id, convert each filtered result to a DataFrame and save them in a list, then use a loop to apply your TF-IDF to each DataFrame in the list.

var filters = List[org.apache.spark.sql.DataFrame]()
val mySeq = Seq((1, "A B C D E"), (1, "B C D"), (1, "B C D E"), (2, "B C D F"), (2, "A B C"), (2, "B C E F G"))
// Build one DataFrame per id by filtering the sequence on its first element
for (i <- List(1, 2)) { filters = filters :+ mySeq.filter { case x => x._1 == i }.toDF("id", "sentences") }

So for example you have

scala> filters(0).show()
+---+---------+
| id|sentences|
+---+---------+
|  1|A B C D E|
|  1|    B C D|
|  1|  B C D E|
+---+---------+

scala> filters(1).show()
+---+---------+
| id|sentences|
+---+---------+
|  2|  B C D F|
|  2|    A B C|
|  2|B C E F G|
+---+---------+

and you can run your TF-IDF calculation on each DataFrame using a loop or a map, as sketched below.
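For illustration, here is a minimal sketch of that loop, assuming the filters list built above and reusing Spark ML's Tokenizer, HashingTF and IDF (column names such as rawFeatures and features are arbitrary choices, not required names):

import org.apache.spark.ml.feature.{HashingTF, IDF, Tokenizer}

val tokenizer = new Tokenizer().setInputCol("sentences").setOutputCol("words")
val hashingTF = new HashingTF().setInputCol("words").setOutputCol("rawFeatures").setNumFeatures(20)

// Fit a separate IDF model per DataFrame, so document frequencies
// are computed only within each id group
val rescaledPerId = filters.map { df =>
  val featurized = hashingTF.transform(tokenizer.transform(df))
  val idfModel = new IDF().setInputCol("rawFeatures").setOutputCol("features").fit(featurized)
  idfModel.transform(featurized)
}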

You could also use some sort of groupBy, but that operation requires a shuffle, which could hurt performance on a cluster.

You can group the DataFrame by id and flatten the corresponding tokenized words prior to the TF-IDF computation. Below is a snippet adapted from the sample code in the Spark TF-IDF docs:

val sample = Seq(
  (1, "A B C D E"),
  (1, "B C D"),
  (1, "B C D E"),
  (2, "B C D F"),
  (2, "A B C"),
  (2, "B C E F G")
).toDF("id","sentences")

import org.apache.spark.sql.functions._
import org.apache.spark.ml.feature.{HashingTF, IDF, Tokenizer}

val tokenizer = new Tokenizer().setInputCol("sentences").setOutputCol("words")
val wordsDF = tokenizer.transform(sample)

// UDF that flattens the collected per-row word arrays into a single array per id
def flattenWords = udf( (s: Seq[Seq[String]]) => s.flatMap(identity) )

val groupedDF = wordsDF.groupBy("id").
  agg(flattenWords(collect_list("words")).as("grouped_words"))

val hashingTF = new HashingTF().
  setInputCol("grouped_words").setOutputCol("rawFeatures").setNumFeatures(20)
val featurizedData = hashingTF.transform(groupedDF)
val idf = new IDF().setInputCol("rawFeatures").setOutputCol("features")
val idfModel = idf.fit(featurizedData)
val rescaledData = idfModel.transform(featurizedData)

rescaledData.show
// +---+--------------------+--------------------+--------------------+
// | id|       grouped_words|         rawFeatures|            features|
// +---+--------------------+--------------------+--------------------+
// |  1|[a, b, c, d, e, b...|(20,[1,2,10,14,18...|(20,[1,2,10,14,18...|
// |  2|[b, c, d, f, a, b...|(20,[1,2,8,10,14,...|(20,[1,2,8,10,14,...|
// +---+--------------------+--------------------+--------------------+
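
As a side note, if you are on Spark 2.4 or later, the built-in flatten function (available from the org.apache.spark.sql.functions._ import above) should be able to replace the flattenWords UDF:

// flatten collapses the array of word arrays produced by collect_list
val groupedDF = wordsDF.groupBy("id").
  agg(flatten(collect_list("words")).as("grouped_words"))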
