
How to pass variables to spark.sql query in pyspark?

How do I pass variables to a spark.sql query in pyspark? When I query the table, it fails with an AnalysisException. Why?

>>> spark.sql("select * from student").show()

+-------+--------+
|roll_no|    name|
+-------+--------+
|      1|ravindra|
+-------+--------+

>>> spark.sql("select * from student where roll_no={0} and name={1}".format(id,name)).show()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/spark-2.3.0-bin-hadoop2.6/python/pyspark/sql/session.py", line 767, in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
  File "/usr/local/spark-2.3.0-bin-hadoop2.6/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
  File "/usr/local/spark-2.3.0-bin-hadoop2.6/python/pyspark/sql/utils.py", line 69, in deco
    raise AnalysisException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.AnalysisException: u"cannot resolve '`ravindra`' given input columns: [default.student.id, default.student.roll_no, default.student.name]; line 1 pos 47;\n'Project [*]\n+- 'Filter ((roll_no#21 = 0) && (name#22 = 'ravindra))\n   +- SubqueryAlias `default`.`student`\n      +- HiveTableRelation `default`.`student`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [id#20, roll_no#21, name#22]\n"

I usually use the %s string formatter inside the SQL string:

sqlc.sql('select * from students where roll_no=%s and name="%s"' % ('1', 'ravindra')).show()
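To see why this works, it helps to look at the string this formatting actually produces. This is a plain-Python sketch (no Spark session needed); note the double quotes placed around the %s for the name value:

```python
# Build the SQL string exactly as the answer above does.
# roll_no is numeric so it is interpolated bare; the name value
# is wrapped in double quotes so SQL reads it as a string literal.
query = 'select * from students where roll_no=%s and name="%s"' % ('1', 'ravindra')
print(query)
# select * from students where roll_no=1 and name="ravindra"
```

Because the quotes are part of the template, the resulting SQL contains a proper string literal instead of a bare identifier.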

Looking at your SQL traceback: when ravindra is passed into the SQL string, the quotes around the name= value are missing, so the SQL engine treats it as a column/identifier reference instead of a string literal.

Your SQL query then becomes:

select * from students where roll_no=1 and name=ravindra  -- no quotes

You can adjust your SQL string to:

spark.sql("select * from student where roll_no={0} and name='{1}'".format(id,name)).show()

Quoting {1} gives the desired result.
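The fix can be wrapped in a tiny helper so the quoting is done in one place. This is a hypothetical sketch (build_student_query is not part of PySpark), shown in plain Python so the generated SQL can be inspected without a Spark session:

```python
# Hypothetical helper: builds the query string with the string value quoted,
# so the SQL engine reads it as a literal rather than a column name.
def build_student_query(roll_no, name):
    # numeric value interpolated bare; string value wrapped in single quotes
    return "select * from student where roll_no={0} and name='{1}'".format(roll_no, name)

print(build_student_query(1, "ravindra"))
# select * from student where roll_no=1 and name='ravindra'
```

The result would then be passed to spark.sql(...). As a side note, newer Spark releases (3.4+) also accept parameterized queries via spark.sql(query, args), which avoids manual quoting entirely.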

