
Unable to execute nested SQL queries in Spark SQL

I am trying to execute this query but it doesn't work:

SELECT COLUMN
FROM TABLE A           
WHERE  A.COLUM_1 = '9999-12-31' AND NOT EXISTS (SELECT 1 FROM TABLE2 ET WHERE ET.COl1 = A.COL2 LIMIT 1)

It results in an error which says the following:

"mismatched input FROM expecting"

I went through this post, which states that this is supported by Spark 2.0+.

I'm not sure whether Spark SQL supports TOP, but it is not needed here anyway. Does this work?

SELECT t.COLUMN
FROM TABLE t           
WHERE t.COLUM_1 = '9999-12-31' AND
      NOT EXISTS (SELECT 1 FROM TABLE2 ET WHERE ET.COl1 = t.COL2);

This version also cleans up a few other issues with the query (for example, it renames the alias A to t).

The LIMIT in the subquery is also not needed: NOT EXISTS stops at the first match anyway.
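To illustrate why no LIMIT is needed, here is a minimal sketch of the same anti-join pattern using SQLite from Python's standard library (table names and sample values are illustrative assumptions, not from the original question; the same NOT EXISTS semantics apply in Spark SQL):

```python
import sqlite3

# In-memory database with two small tables mimicking the question's shape.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE TABLE_A (COLUMN_X TEXT, COLUM_1 TEXT, COL2 TEXT);
    CREATE TABLE TABLE2  (COl1 TEXT);
    INSERT INTO TABLE_A VALUES ('keep', '9999-12-31', 'x1'),
                               ('drop', '9999-12-31', 'x2');
    INSERT INTO TABLE2  VALUES ('x2');  -- matches COL2 of the 'drop' row
""")

# Correlated NOT EXISTS with no LIMIT in the subquery: a row survives
# only if the subquery finds no matching row in TABLE2.
rows = conn.execute("""
    SELECT t.COLUMN_X
    FROM TABLE_A t
    WHERE t.COLUM_1 = '9999-12-31'
      AND NOT EXISTS (SELECT 1 FROM TABLE2 ET WHERE ET.COl1 = t.COL2)
""").fetchall()

print(rows)  # [('keep',)]
```

The engine short-circuits the subquery as soon as one match is found, so adding LIMIT 1 inside it buys nothing and, in Spark SQL, can trigger exactly the kind of parse error shown above.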
