
Spark SQL count() returns wrong number

I'm new to Apache Spark and Scala (and a beginner with Hadoop in general). I completed the Spark SQL tutorial (https://spark.apache.org/docs/latest/sql-programming-guide.html) and tried to run a simple query on a standard CSV file to benchmark its performance on my current cluster.

I used data from https://s3.amazonaws.com/hw-sandbox/tutorial1/NYSE-2000-2001.tsv.gz , converted it to CSV, and copy/pasted the data to make it ten times as big.

I loaded it into Spark using Scala:

// sc is an existing SparkContext.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)

// createSchemaRDD is used to implicitly convert an RDD to a SchemaRDD.
import sqlContext.createSchemaRDD

Define classes:

case class datum(
  exchange: String,
  stock_symbol: String,
  date: String,
  stock_price_open: Double,
  stock_price_high: Double,
  stock_price_low: Double,
  stock_price_close: Double,
  stock_volume: String,
  stock_price_adj_close: Double)

Read in data:

val data = sc.textFile("input.csv")
  .map(_.split(";"))
  .filter(line => "exchange" != "exchange")
  .map(p => datum(
    p(0).trim.toString, p(1).trim.toString, p(2).trim.toString,
    p(3).trim.toDouble, p(4).trim.toDouble, p(5).trim.toDouble,
    p(6).trim.toDouble, p(7).trim.toString, p(8).trim.toDouble))

Convert to table:

data.registerAsTable("data")

Define query (list all rows with 'IBM' as stock symbol):

val IBMs = sqlContext.sql("SELECT * FROM data WHERE stock_symbol ='IBM'")

Perform count so query actually runs:

IBMs.count()

The query runs fine, but returns res: 0 instead of 5000 (which is what it returns using Hive with MapReduce).

The culprit is this filter:

filter(line => "exchange" != "exchange")

Since the literal "exchange" always equals "exchange", the predicate is always false, so the filter returns an empty collection of size 0. With no data left, any query over it returns 0 rows. You need to rewrite the logic so the filter compares a field of each row against "exchange" instead.
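As a minimal sketch of the fix (using a plain Scala Seq in place of the RDD, with made-up sample rows — not the asker's actual data), the header filter should test the first field of each already-split row rather than comparing the literal to itself:

```scala
// Hypothetical sample rows, already split on ';' as in the question's pipeline.
val rows = Seq(
  Array("exchange", "stock_symbol", "date"), // header row to drop
  Array("NYSE", "IBM", "2000-01-03"),
  Array("NYSE", "AAPL", "2000-01-03")
)

// The original predicate "exchange" != "exchange" is always false and
// drops every row. Comparing the row's first field keeps the data rows:
val cleaned = rows.filter(line => line(0) != "exchange")

println(cleaned.length) // 2: only the data rows survive
```

The same `filter(line => line(0) != "exchange")` drops the header when applied to the RDD after `map(_.split(";"))`, since each element is then an `Array[String]`.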
