
How to work with Java Apache Spark MLlib when DataFrame has columns?

So I'm new to Apache Spark and I have a file that looks like this:

Name     Size    Records 
File1    1,000   104,370 
File2    950     91,780 
File3    1,500   109,123 
File4    2,170   113,888
File5    2,000   111,974
File6    1,820   110,666
File7    1,200   106,771 
File8    1,500   108,991 
File9    1,000   104,007
File10   1,300   107,037
File11   1,900   111,109
File12   1,430   108,051
File13   1,780   110,006
File14   2,010   114,449
File15   2,017   114,889

This is my sample/test data. I'm working on an anomaly detection program: I have to test other files with the same format but different values, and detect which ones have anomalies in their size and records values (if the size/records in another file differ a lot from the baseline, or if size and records are not proportional to each other). I decided to start trying different ML algorithms, and I wanted to begin with the k-Means approach. I tried putting this file on the following line:

KMeansModel model = kmeans.fit(file);

Here file has already been parsed into a Dataset variable. However, I get an error, and I'm pretty sure it has to do with the structure/schema of the file. Is there a way to work with structured/labeled/organized data when fitting it to a model?

I get the following error: Exception in thread "main" java.lang.IllegalArgumentException: Field "features" does not exist.

And this is the code:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.ml.clustering.KMeans;
import org.apache.spark.ml.clustering.KMeansModel;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class practice {

    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("Anomaly Detection").setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(conf);

        SparkSession spark = SparkSession
                .builder()
                .appName("Anomaly Detection")
                .getOrCreate();

        String day1 = "C:\\Users\\ZK0GJXO\\Documents\\day1.txt";

        Dataset<Row> df = spark.read()
                .option("header", "true")
                .option("delimiter", "\t")
                .csv(day1);
        df.show();

        KMeans kmeans = new KMeans().setK(2).setSeed(1L);
        KMeansModel model = kmeans.fit(df); // throws: Field "features" does not exist
    }
}

Thanks

By default all Spark ML models train on a column called "features". One can specify a different input column name via the setFeaturesCol method: http://spark.apache.org/docs/latest/api/java/org/apache/spark/ml/clustering/KMeans.html#setFeaturesCol(java.lang.String)
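
For instance, if the vector column carried a different name ("featureVec" below is a hypothetical name, purely for illustration), a minimal sketch of pointing KMeans at it would look like this; the column must already hold org.apache.spark.ml.linalg.Vector values:

import org.apache.spark.ml.clustering.KMeans;

// "featureVec" is a hypothetical column name; it must contain assembled feature vectors.
KMeans kmeans = new KMeans()
        .setK(2)
        .setSeed(1L)
        .setFeaturesCol("featureVec");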

Update:

One can combine multiple columns into a single feature vector using VectorAssembler:

import org.apache.spark.ml.feature.VectorAssembler;

VectorAssembler assembler = new VectorAssembler()
        .setInputCols(new String[]{"size", "records"})
        .setOutputCol("features");

Dataset<Row> vectorized_df = assembler.transform(df);

KMeans kmeans = new KMeans().setK(2).setSeed(1L);
KMeansModel model = kmeans.fit(vectorized_df);
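
One caveat for this particular file: read with header "true" and no schema inference, Size and Records come back as string columns, and the sample values contain thousands separators ("1,000"), while VectorAssembler accepts only numeric inputs. A minimal sketch, assuming the file's header names Size and Records, of stripping the commas and casting first (regexp_replace and col come from org.apache.spark.sql.functions); the assembler above would then run on numeric_df instead of df:

import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.regexp_replace;

// Strip the thousands separators and cast to double so VectorAssembler accepts the columns.
Dataset<Row> numeric_df = df
        .withColumn("size", regexp_replace(col("Size"), ",", "").cast("double"))
        .withColumn("records", regexp_replace(col("Records"), ",", "").cast("double"));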

One can further streamline and chain these feature transformations with the Pipeline API: https://spark.apache.org/docs/latest/ml-pipeline.html#example-pipeline
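
A minimal sketch of such a pipeline, assuming the cleaned numeric_df from the previous snippet:

import org.apache.spark.ml.Pipeline;
import org.apache.spark.ml.PipelineModel;
import org.apache.spark.ml.PipelineStage;
import org.apache.spark.ml.clustering.KMeans;
import org.apache.spark.ml.feature.VectorAssembler;

VectorAssembler assembler = new VectorAssembler()
        .setInputCols(new String[]{"size", "records"})
        .setOutputCol("features");

KMeans kmeans = new KMeans().setK(2).setSeed(1L);

// fit() runs the stages in order: assemble the feature vector, then cluster.
Pipeline pipeline = new Pipeline()
        .setStages(new PipelineStage[]{assembler, kmeans});

PipelineModel model = pipeline.fit(numeric_df);
Dataset<Row> clustered = model.transform(numeric_df); // adds a "prediction" column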
