
Java: load a sequence file from HDFS as JavaRDD<Vector>

I have the following method to write a file to HDFS:

import java.io.IOException;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.SequenceFile.Writer;
import org.apache.mahout.math.Vector;
import org.apache.mahout.math.VectorWritable;

public void writePointsToFile(Path path, FileSystem fs, Configuration conf,
        List<Vector> points) throws IOException {

    // Mahout's Vector does not implement Writable, so each point is
    // wrapped in a VectorWritable before being appended.
    SequenceFile.Writer writer = SequenceFile.createWriter(conf,
            Writer.file(path), Writer.keyClass(LongWritable.class),
            Writer.valueClass(VectorWritable.class));

    long recNum = 0;
    for (Vector point : points) {
        writer.append(new LongWritable(recNum++), new VectorWritable(point));
    }
    writer.close();
}

I need to know how to read this file back as a JavaRDD<Vector> so it can be used with Spark's k-means clustering.

The typical pattern with Spark is to transform immutable objects into new ones, so transforming a DRM (Mahout distributed row matrix) or a collection of Mahout Vectors is the way you should do this. So I'm not sure what you are asking.
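
For what it's worth, here is a minimal sketch of one way to read the file back, assuming it was written with LongWritable keys and VectorWritable values as in the writer above, and that the target is Spark MLlib's KMeans, which expects org.apache.spark.mllib.linalg.Vector rather than Mahout's Vector. The HDFS path, app name, k, and iteration count below are all placeholders:

import org.apache.hadoop.io.LongWritable;
import org.apache.mahout.math.VectorWritable;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.clustering.KMeans;
import org.apache.spark.mllib.clustering.KMeansModel;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;

public class ReadPointsExample {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(
                new SparkConf().setAppName("ReadPoints"));

        // Read the sequence file back as (LongWritable, VectorWritable) pairs.
        JavaPairRDD<LongWritable, VectorWritable> pairs = sc.sequenceFile(
                "hdfs:///path/to/points.seq",          // placeholder path
                LongWritable.class, VectorWritable.class);

        // Hadoop reuses Writable instances while reading, so copy each
        // record's values into a fresh MLlib dense vector right away.
        JavaRDD<Vector> points = pairs.map(pair -> {
            org.apache.mahout.math.Vector mahoutVector = pair._2().get();
            double[] values = new double[mahoutVector.size()];
            for (int i = 0; i < mahoutVector.size(); i++) {
                values[i] = mahoutVector.get(i);
            }
            return Vectors.dense(values);
        });
        points.cache();

        // Cluster with MLlib k-means; k = 2 and 20 iterations are examples.
        KMeansModel model = KMeans.train(points.rdd(), 2, 20);
        System.out.println("Found " + model.clusterCenters().length + " centers");
        sc.stop();
    }
}

If you only need a JavaRDD of Mahout vectors rather than MLlib ones, the same map can return the unwrapped vector instead, as long as each one is cloned out of the reused VectorWritable.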
