
SQL Database using JDBC + parameterized SQL Query + Databricks

In Databricks I am reading a SQL table as:

val TransformationRules = spark.read.jdbc(jdbcUrl, "ADF.TransformationRules", connectionProperties)
.select("RuleCode","SourceSystem","PrimaryTable", "PrimaryColumn", "SecondaryColumn", "NewColumnName","CurrentFlag")
.where("SourceSystem = 'QWDS' AND RuleCode = 'STD00003' ")

How can I parameterize SourceSystem and RuleCode in the where clause?

I was referring to: https://docs.microsoft.com/en-us/azure/databricks/data/data-sources/sql-databases

If you import the Spark implicits, you can create references to columns with the dollar ($) interpolator. You can also use the Column API to build the filter logic, like this:

val sourceSystem = "QWDS"
val ruleCode = "STD00003"

import spark.implicits._
val TransformationRules = spark.read.jdbc(jdbcUrl, "ADF.TransformationRules", connectionProperties)
.select("RuleCode","SourceSystem","PrimaryTable", "PrimaryColumn", "SecondaryColumn", "NewColumnName","CurrentFlag")
.where($"SourceSystem" === sourceSystem && $"RuleCode" === ruleCode)

import org.apache.spark.sql.Column
val ssColumn: Column = $"SourceSystem"

As you can see, the dollar interpolator gives you a Column object, which supports operations like comparison, casting, renaming, etc. In combination with the functions in org.apache.spark.sql.functions, this lets you implement almost everything you need.
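For illustration, here is a sketch of combining the $ interpolator with functions from org.apache.spark.sql.functions (it assumes a running SparkSession and the TransformationRules DataFrame loaded above; the derived column names are hypothetical):

```scala
import org.apache.spark.sql.functions.{trim, upper}

// Normalize a column before comparing, cast part of another column,
// and rename a column -- all via the Column API, no SQL strings needed.
val enriched = TransformationRules
  .where(upper(trim($"SourceSystem")) === sourceSystem.toUpperCase) // robust comparison
  .withColumn("RuleNumber", $"RuleCode".substr(4, 5).cast("int"))   // e.g. "STD00003" -> 3
  .withColumnRenamed("NewColumnName", "TargetColumnName")           // rename example
```

Because these are plain Column expressions, Spark can still push the equality filters down to the JDBC source where possible.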

If I understand your question correctly, you want to insert values into the where-clause string? Then the solution below could work for you:

val TransformationRules = spark.read.jdbc(jdbcUrl, "ADF.TransformationRules", connectionProperties)
.select("RuleCode","SourceSystem","PrimaryTable", "PrimaryColumn", "SecondaryColumn", "NewColumnName","CurrentFlag")
.where(s"SourceSystem = '$sourceSystem' AND RuleCode = '$ruleCode'")
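Note that string interpolation splices the raw values into the filter string. If a value can itself contain a single quote, the resulting condition breaks, so you would want to escape quotes first. A minimal sketch, using a hypothetical helper name:

```scala
// Hypothetical helper: wrap a value in single quotes, doubling any
// embedded single quotes (standard SQL escaping).
def sqlLit(v: String): String = "'" + v.replace("'", "''") + "'"

val sourceSystem = "QWDS"
val ruleCode     = "STD00003"

val condition =
  s"SourceSystem = ${sqlLit(sourceSystem)} AND RuleCode = ${sqlLit(ruleCode)}"
// condition == "SourceSystem = 'QWDS' AND RuleCode = 'STD00003'"
```

The resulting condition string can then be passed to .where(...) as in the snippet above.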
