
How to use IN clause with Spark BigQuery Connector (Scala)

I'm unable to use an IN clause with the Spark BigQuery connector (Scala). I'm getting the exception below:

{
  "code" : 400,
  "errors" : [ {
    "domain" : "global",
    "location" : "q",
    "locationType" : "parameter",
    "message" : "No matching signature for operator IN UNNEST for argument types: NUMERIC, ARRAY<FLOAT64> at [1:100]",
    "reason" : "invalidQuery"
  } ],
  "message" : "No matching signature for operator IN UNNEST for argument types: NUMERIC, ARRAY<FLOAT64> at [1:100]",
  "status" : "INVALID_ARGUMENT"
}

Below is my code snippet:

session.read
    .format("com.google.cloud.spark.bigquery")
    .load("my_dataset.department_table")
    .select("DEPT_ORG_ID", "DEPT_NAME")
    .where("DEPT_ORG_ID In (511324, 511322)")

DEPT_ORG_ID is of type NUMERIC in BigQuery.

Spark version: 2.3.1

I figured out the solution. The IN clause should be passed via option() as follows:

session.read
    .option("filter","DEPT_ORG_ID In (511324, 511322)")
    .format("com.google.cloud.spark.bigquery")
    .load("my_dataset.department_table")
    .select("DEPT_ORG_ID", "DEPT_NAME")
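
If the list of IDs is only known at runtime, the same approach works by building the filter string before handing it to option(). The error above suggests that the where() pushdown rewrote the IN list as IN UNNEST over FLOAT64 literals, which BigQuery cannot compare with a NUMERIC column, while the string given to the "filter" option is passed along as written. Below is a minimal sketch along those lines; deptIds and loadDepartments are hypothetical names, and the table and column names are the ones from the snippets above.

import org.apache.spark.sql.{DataFrame, SparkSession}

def loadDepartments(session: SparkSession, deptIds: Seq[Long]): DataFrame = {
  // Build the IN predicate as a plain SQL string,
  // e.g. "DEPT_ORG_ID IN (511324, 511322)"
  val inClause = s"DEPT_ORG_ID IN (${deptIds.mkString(", ")})"

  session.read
    .option("filter", inClause)   // same "filter" option as in the solution above
    .format("com.google.cloud.spark.bigquery")
    .load("my_dataset.department_table")
    .select("DEPT_ORG_ID", "DEPT_NAME")
}

Usage (hypothetical): loadDepartments(session, Seq(511324L, 511322L)) returns the same two columns as above, with the restriction applied when the table is read.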
