
Error when using pyspark load to read data

I am trying to load a file using Pyspark as below:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('mylogreg').getOrCreate()

from pyspark.ml.classification import LogisticRegression

my_data = spark.read.format('libsvm').load('cars.csv')

but it keeps giving me the following error:

Py4JJavaError: An error occurred while calling o231.load.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 6.0 failed 1 times, most recent failure: Lost task 0.0 in stage 6.0 (TID 6, localhost, executor driver): java.lang.NumberFormatException: For input string: "YEAR,Make,Model,Size,(kW),Unnamed:"
    at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:2043)
    at sun.misc.FloatingDecimal.parseDouble(FloatingDecimal.java:110)
    at java.lang.Double.parseDouble(Double.java:538)
    at scala.collection.immutable.StringLike$class.toDouble(StringLike.scala:284)
    at scala.collection.immutable.StringOps.toDouble(StringOps.scala:29)
    at org.apache.spark.mllib.util.MLUtils$.parseLibSVMRecord(MLUtils.scala:128)
    at org.apache.spark.mllib.util.MLUtils$$anonfun$parseLibSVMFile$4.apply(MLUtils.scala:123)
    at org.apache.spark.mllib.util.MLUtils$$anonfun$parseLibSVMFile$4.apply(MLUtils.scala:123)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.reduceLeft(TraversableOnce.scala:185)
    at scala.collection.AbstractIterator.reduceLeft(Iterator.scala:1336)
    at org.apache.spark.rdd.RDD$$anonfun$reduce$1$$anonfun$14.apply(RDD.scala:1015)
    at org.apache.spark.rdd.RDD$$anonfun$reduce$1$$anonfun$14.apply(RDD.scala:1013)
    at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:2123)
    at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:2123)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1599)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.ap

I could use a normal RDD instead of using SQLContext, but then I won't be able to view the data nicely in a table.

The libsvm source expects every line to look like label index1:value1 index2:value2 ..., so it throws a NumberFormatException as soon as it hits your CSV header line (YEAR,Make,Model,...). I think you should load the file in .csv format instead:

my_data = spark.read.option("delimiter", ",").option("header", "false").csv('cars.csv')
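
Since the error message shows that the first line of the file is a header row (YEAR,Make,Model,Size,(kW),...), you can also let Spark use that header for column names and infer the column types. A minimal sketch, assuming cars.csv is in the working directory and its first line is that header:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('mylogreg').getOrCreate()

# Read the CSV using the header row as column names and let Spark
# infer numeric column types instead of leaving everything as strings.
my_data = (spark.read
           .option("header", "true")       # first line holds column names
           .option("inferSchema", "true")  # cast numeric columns automatically
           .csv('cars.csv'))

my_data.printSchema()  # check the inferred column types
my_data.show(5)        # view the data nicely in a table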
