
where clause not working in Spark SQL DataFrame

I've created a dataframe which contains 3 columns: zip, lat, lng

I want to select the lat and lng values where zip = 00650

So, I tried using:

sqlContext.sql("select lat,lng from census where zip=00650").show()

But it returns an ArrayOutOfBound exception because the result does not have any values in it. If I remove the where clause, it runs fine.

Can someone please explain what I am doing wrong?

Update:

dataframe schema:

root 
|-- zip: string (nullable = true) 
|-- lat: string (nullable = true) 
|-- lng: string (nullable = true)

The first rows are:

+-----+---------+-----------+
|  zip|      lat|        lng|
+-----+---------+-----------+
|00601|18.180555| -66.749961|
|00602|18.361945| -67.175597|
|00603|18.455183| -67.119887|
|00606|18.158345| -66.932911|
|00610|18.295366| -67.125135|
|00612|18.402253| -66.711397|
|00616|18.420412| -66.671979|
|00617|18.445147| -66.559696|
|00622|17.991245| -67.153993|
|00623|18.083361| -67.153897|
|00624|18.064919| -66.716683|
|00627|18.412600| -66.863926|
|00631|18.190607| -66.832041|
|00637|18.076713| -66.947389|
|00638|18.295913| -66.515588|
|00641|18.263085| -66.712985|
|00646|18.433150| -66.285875| 
|00647|17.963613| -66.947127|
|00650|18.349416| -66.578079|
+-----+---------+-----------+

As you can see in your schema, zip is of type String, so your query should be something like this:

sqlContext.sql("select lat, lng from census where zip = '00650'").show()
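The need for the quotes can be seen without Spark at all: an unquoted 00650 in SQL is parsed as a numeric literal, so the leading zeros are gone before any comparison happens. A plain-Scala sketch of the difference:

```scala
// An unquoted 00650 in SQL is a numeric literal, so leading zeros are dropped.
val asNumber = "00650".toInt   // numeric value: 650
val asString = "00650"         // string value: keeps the leading zeros

assert(asNumber == 650)        // the number the SQL parser sees
assert(asString != "650")      // but the zip column stores the string "00650"
```

Quoting the literal as '00650' compares string against string, so the row is found.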

Update:

If you are using Spark 2, you can do this:

import sparkSession.implicits._

val dataFrame = Seq(
  ("10.023", "75.0125", "00650"),
  ("12.0246", "76.4586", "00650"),
  ("10.023", "75.0125", "00651")
).toDF("lat", "lng", "zip")

dataFrame.printSchema()

// DataFrame API: note the quoted string "00650"
dataFrame.select("*").where(dataFrame("zip") === "00650").show()

// SQL API: createOrReplaceTempView supersedes the deprecated registerTempTable
dataFrame.createOrReplaceTempView("census")
sparkSession.sql("SELECT lat, lng FROM census WHERE zip = '00650'").show()

output:

root
 |-- lat: string (nullable = true)
 |-- lng: string (nullable = true)
 |-- zip: string (nullable = true)

+-------+-------+-----+
|    lat|    lng|  zip|
+-------+-------+-----+
| 10.023|75.0125|00650|
|12.0246|76.4586|00650|
+-------+-------+-----+

+-------+-------+
|    lat|    lng|
+-------+-------+
| 10.023|75.0125|
|12.0246|76.4586|
+-------+-------+

I resolved my issue using an RDD rather than a DataFrame. It gave me the desired results:

// Read the CSV as an RDD, split each line into fields, and keep rows containing the zip
val data = sc.textFile("/home/ishan/Desktop/c").map(_.split(","))
val arr = data.filter(_.contains("00650")).take(1)
arr.foreach(a => a.foreach(println))
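One caveat with the RDD approach (my observation, not from the original answer): `_.contains("00650")` keeps any row in which *some* field equals "00650", not specifically the zip field. Indexing the split array pins the match to the zip column; the index 0 here assumes zip is the first field, as in the sample data:

```scala
// Rows as they look after .split(","): Array(zip, lat, lng)
val rows = Seq(
  Array("00650", "18.349416", "-66.578079"),   // the zip we want
  Array("00652", "00650", "-66.578079")        // contrived: lat happens to be "00650"
)

val loose  = rows.filter(_.contains("00650"))  // matches both rows
val strict = rows.filter(_(0) == "00650")      // matches only the real zip (field 0 assumed)

assert(loose.length == 2)
assert(strict.length == 1)
```

The same `filter(_(0) == "00650")` works on the RDD from `sc.textFile(...).map(_.split(","))`.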
