
How to correctly make TF-IDF vectors of sentences in Apache Spark with Java?

I have this code:

import java.util.Arrays;
import java.util.List;

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.feature.HashingTF;
import org.apache.spark.mllib.feature.IDF;
import org.apache.spark.mllib.feature.IDFModel;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.sql.SparkSession;

public class TfIdfExample {
    public static void main(String[] args) {
        // SparkSingleton is my own helper that holds the JavaSparkContext
        JavaSparkContext sc = SparkSingleton.getContext();
        SparkSession spark = SparkSession.builder()
                .config("spark.sql.warehouse.dir", "spark-warehouse")
                .getOrCreate();
        JavaRDD<List<String>> documents = sc.parallelize(Arrays.asList(
                Arrays.asList("this is a sentence".split(" ")),
                Arrays.asList("this is another sentence".split(" ")),
                Arrays.asList("this is still a sentence".split(" "))), 2);

        HashingTF hashingTF = new HashingTF();
        documents.cache();
        JavaRDD<Vector> featurizedData = hashingTF.transform(documents);
        // alternatively, CountVectorizer can also be used to get term frequency vectors

        IDF idf = new IDF();
        IDFModel idfModel = idf.fit(featurizedData);

        featurizedData.cache();

        JavaRDD<Vector> tfidfs = idfModel.transform(featurizedData);
        System.out.println(tfidfs.collect());
        // KMeansProcessor is my own wrapper around the k-means clustering step
        KMeansProcessor kMeansProcessor = new KMeansProcessor();
        JavaPairRDD<Vector, Integer> result = kMeansProcessor.Process(tfidfs);
        result.collect().forEach(System.out::println);
    }
}

I need the vectors for k-means, but I am getting odd-looking vectors:

[(1048576,[489554,540177,736740,894973],[0.28768207245178085,0.0,0.0,0.0]),
 (1048576,[455491,540177,736740,894973],[0.6931471805599453,0.0,0.0,0.0]),
 (1048576,[489554,540177,560488,736740,894973],[0.28768207245178085,0.0,0.6931471805599453,0.0,0.0])]

After running k-means I get:

((1048576,[489554,540177,736740,894973],[0.28768207245178085,0.0,0.0,0.0]),1)
((1048576,[489554,540177,736740,894973],[0.28768207245178085,0.0,0.0,0.0]),0)
((1048576,[489554,540177,736740,894973],[0.28768207245178085,0.0,0.0,0.0]),1)
((1048576,[455491,540177,736740,894973],[0.6931471805599453,0.0,0.0,0.0]),1)
((1048576,[489554,540177,560488,736740,894973],[0.28768207245178085,0.0,0.6931471805599453,0.0,0.0]),1)
((1048576,[455491,540177,736740,894973],[0.6931471805599453,0.0,0.0,0.0]),0)
((1048576,[455491,540177,736740,894973],[0.6931471805599453,0.0,0.0,0.0]),1)
((1048576,[489554,540177,560488,736740,894973],[0.28768207245178085,0.0,0.6931471805599453,0.0,0.0]),0)
((1048576,[489554,540177,560488,736740,894973],[0.28768207245178085,0.0,0.6931471805599453,0.0,0.0]),1)

But I think it is not working correctly, because the tf-idf values should look different. I thought mllib had ready-made methods for this, but I tested the documentation examples and did not get what I need, and I have not found a custom solution for Spark. Has anybody worked with this and can tell me what I am doing wrong? Maybe I am not using the mllib functionality correctly?

What you get after TF-IDF is a SparseVector.

To understand the values better, let me start with the TF vectors:

(1048576,[489554,540177,736740,894973],[1.0,1.0,1.0,1.0])
(1048576,[455491,540177,736740,894973],[1.0,1.0,1.0,1.0])
(1048576,[489554,540177,560488,736740,894973],[1.0,1.0,1.0,1.0,1.0])

For example, the TF vector corresponding to the first sentence is a (1048576 = 2^20)-component vector with 4 non-zero values corresponding to the indices 489554, 540177, 736740 and 894973; all other values are zeros and are therefore not stored in the sparse vector representation.
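The sparse notation (size, [indices], [values]) can be unpacked mechanically. A minimal sketch of that expansion in plain Java (no Spark dependency; the numbers are taken from the first TF vector above):

```java
import java.util.Arrays;

public class SparseToDense {
    // Expand a sparse (size, indices, values) triple into a dense array,
    // mirroring what the textual representation above means.
    static double[] toDense(int size, int[] indices, double[] values) {
        double[] dense = new double[size]; // every component defaults to 0.0
        for (int i = 0; i < indices.length; i++) {
            dense[indices[i]] = values[i];
        }
        return dense;
    }

    public static void main(String[] args) {
        // First TF vector: (1048576,[489554,540177,736740,894973],[1.0,1.0,1.0,1.0])
        double[] dense = toDense(1048576,
                new int[]{489554, 540177, 736740, 894973},
                new double[]{1.0, 1.0, 1.0, 1.0});
        System.out.println(dense[489554]); // prints 1.0
        System.out.println(dense[0]);      // prints 0.0 (not stored explicitly)
    }
}
```

Only the four stored entries are non-zero; the other 1 048 572 components exist logically but occupy no storage in the sparse form.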

The dimensionality of the feature vectors is equal to the number of buckets you hash into: 1048576 = 2^20 buckets.

For a corpus of this size, you should consider reducing the number of buckets:

HashingTF hashingTF = new HashingTF(32);

Powers of two are recommended to minimize the number of hash collisions.
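The bucketing idea itself is just "non-negative hash of the term, modulo the number of buckets". A sketch of it in plain Java, using String.hashCode as a stand-in hash function (it is not the hash Spark actually uses — recent versions default to MurmurHash3 — so the resulting indices will differ from the output above):

```java
public class HashingSketch {
    // Mimic HashingTF's bucketing: non-negative hash modulo numFeatures.
    // NOTE: String.hashCode is a stand-in; Spark's own hash function
    // produces different indices, but the bucketing scheme is the same.
    static int termIndex(String term, int numFeatures) {
        int raw = term.hashCode() % numFeatures;
        return raw < 0 ? raw + numFeatures : raw;
    }

    public static void main(String[] args) {
        int numFeatures = 32; // fewer buckets, as suggested above
        for (String term : "this is a sentence".split(" ")) {
            System.out.println(term + " -> bucket " + termIndex(term, numFeatures));
        }
        // With only 32 buckets, distinct terms are more likely to share a
        // bucket (collide) -- that is the trade-off of a smaller feature space.
    }
}
```

Every bucket index lands in [0, numFeatures), which is why the vector dimensionality equals the bucket count.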

Next, the IDF weights are applied:

(1048576,[489554,540177,736740,894973],[0.28768207245178085,0.0,0.0,0.0])
(1048576,[455491,540177,736740,894973],[0.6931471805599453,0.0,0.0,0.0])
(1048576,[489554,540177,560488,736740,894973],[0.28768207245178085,0.0,0.6931471805599453,0.0,0.0])

If we look at the first sentence again, we get 3 zeros — which is expected, because the terms "this", "is" and "sentence" appear in every document of the corpus, so by the definition of IDF their weights are equal to zero.
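These numbers can be reproduced by hand: Spark's mllib IDF uses the smoothed formula idf(t) = log((m + 1) / (df(t) + 1)), where m is the number of documents and df(t) is the number of documents containing term t. A quick check for this 3-document corpus:

```java
public class IdfCheck {
    // Spark mllib's smoothed IDF: log((m + 1) / (df + 1))
    static double idf(long numDocs, long docFreq) {
        return Math.log((double) (numDocs + 1) / (docFreq + 1));
    }

    public static void main(String[] args) {
        long m = 3; // three documents in the corpus
        // "this", "is", "sentence" appear in all 3 documents -> log(4/4) = 0.0
        System.out.println(idf(m, 3));
        // "a" appears in 2 documents -> log(4/3) = 0.28768207245178085
        System.out.println(idf(m, 2));
        // "another" / "still" each appear in 1 document -> log(4/2) = 0.6931471805599453
        System.out.println(idf(m, 1));
    }
}
```

These three values are exactly the ones appearing in the TF-IDF output above, which confirms the vectors are correct rather than "odd".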

Why are the zero values still kept in the (sparse) vector? Because in the current implementation, the size of the vector is kept and only the values are multiplied by the IDF.

