
The input column features must be either string or numeric type, but got org.apache.spark.ml.linalg.VectorUDT

I am very new to Spark Machine Learning (only a three-day-old novice) and I'm basically trying to predict some data using the Logistic Regression algorithm in Spark via Java. I referred to a few sites and the documentation and came up with the code below, but I am facing an issue when I try to execute it. I have pre-processed the data and used a VectorAssembler to club all the relevant columns into one, and the issue appears when I try to fit the model.

import java.util.Arrays;

import org.apache.spark.ml.classification.LogisticRegression;
import org.apache.spark.ml.classification.LogisticRegressionModel;
import org.apache.spark.ml.feature.StringIndexer;
import org.apache.spark.ml.feature.VectorAssembler;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class Sparkdemo {

static SparkSession session = SparkSession.builder().appName("spark_demo")
        .master("local[*]").getOrCreate();

@SuppressWarnings("empty-statement")
public static void getData() {
    Dataset<Row> inputFile = session.read()
            .option("header", true)
            .format("csv")
            .option("inferschema", true)
            .csv("C:\\Users\\WildJasmine\\Downloads\\NKI_cleaned.csv");
    inputFile.show();
    String[] columns = inputFile.columns();
    int beg = 16, end = columns.length - 1;
    String[] featuresToDrop = new String[end - beg + 1];
    System.arraycopy(columns, beg, featuresToDrop, 0, featuresToDrop.length);
    System.out.println("rows are\n " + Arrays.toString(featuresToDrop));
    Dataset<Row> dataSubset = inputFile.drop(featuresToDrop);
    String[] arr = {"Patient", "ID", "eventdeath"};
    Dataset<Row> X = dataSubset.drop(arr);
    X.show();
    Dataset<Row> y = dataSubset.select("eventdeath");
    y.show();

    //Vector Assembler concept for merging all the cols into a single col
    VectorAssembler assembler = new VectorAssembler()
            .setInputCols(X.columns())
            .setOutputCol("features");

    Dataset<Row> dataset = assembler.transform(X);
    dataset.show();

    StringIndexer labelSplit = new StringIndexer().setInputCol("features").setOutputCol("label");
    Dataset<Row> data = labelSplit.fit(dataset)
            .transform(dataset);
    data.show();

    Dataset<Row>[] splitsX = data.randomSplit(new double[]{0.8, 0.2}, 42);
    Dataset<Row> trainingX = splitsX[0];
    Dataset<Row> testX = splitsX[1];

    LogisticRegression lr = new LogisticRegression()
            .setMaxIter(10)
            .setRegParam(0.3)
            .setElasticNetParam(0.8);

    LogisticRegressionModel lrModel = lr.fit(trainingX);
    Dataset<Row> prediction = lrModel.transform(testX);
    prediction.show();

}

public static void main(String[] args) {
    getData();

}}

Below is an image of my dataset:

[dataset preview image]

Error message:

Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: The input column features must be either string type or numeric type, but got org.apache.spark.ml.linalg.VectorUDT@3bfc3ba7.
at scala.Predef$.require(Predef.scala:224)
at org.apache.spark.ml.feature.StringIndexerBase$class.validateAndTransformSchema(StringIndexer.scala:86)
at org.apache.spark.ml.feature.StringIndexer.validateAndTransformSchema(StringIndexer.scala:109)
at org.apache.spark.ml.feature.StringIndexer.transformSchema(StringIndexer.scala:152)
at org.apache.spark.ml.PipelineStage.transformSchema(Pipeline.scala:74)
at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:135)

The end result I need is a predicted value based on the features column.

Thanks in advance.

That error occurs when the input column of your dataframe to which you want to apply the StringIndexer transformation is a Vector. As the Spark documentation (https://spark.apache.org/docs/latest/ml-features#stringindexer) shows, the input column of a StringIndexer must be a string (or numeric) column. The transformer collects the distinct values of that column and creates a new column of integer indices, one index per distinct value. It does not work on vectors, so it cannot be applied to the assembled "features" column.
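For reference, here is a minimal sketch of how the flow from your question could be rearranged so that the StringIndexer is applied to the label column (eventdeath) instead of the feature vector. The variable dataSubset and the column names are taken from your post, so treat this as an outline rather than a drop-in fix:

    // Keep the label column alongside the feature columns (eventdeath stays)
    Dataset<Row> withLabel = dataSubset.drop("Patient", "ID");

    // Assemble only the feature columns into a single vector column
    String[] featureCols = Arrays.stream(withLabel.columns())
            .filter(c -> !c.equals("eventdeath"))
            .toArray(String[]::new);
    VectorAssembler assembler = new VectorAssembler()
            .setInputCols(featureCols)
            .setOutputCol("features");
    Dataset<Row> assembled = assembler.transform(withLabel);

    // Index the label column, not the vector column
    StringIndexer labelIndexer = new StringIndexer()
            .setInputCol("eventdeath")
            .setOutputCol("label");
    Dataset<Row> data = labelIndexer.fit(assembled).transform(assembled);

    // LogisticRegression reads the "features" and "label" columns by default
    LogisticRegression lr = new LogisticRegression()
            .setMaxIter(10)
            .setRegParam(0.3)
            .setElasticNetParam(0.8);

    Dataset<Row>[] splits = data.randomSplit(new double[]{0.8, 0.2}, 42);
    LogisticRegressionModel model = lr.fit(splits[0]);
    model.transform(splits[1]).show();

If eventdeath is already a 0/1 numeric column, you could also skip the StringIndexer entirely and call setLabelCol("eventdeath") on the LogisticRegression instead.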
