How to handle white spaces in dataframe column names in spark
I registered a temp table from a df, and the table has white spaces in a header column. How do I extract that column when querying it with SQL via sqlContext? I tried using backticks, but it did not work:
df1 = sqlContext.sql("""select Company, Sector, Industry, `Altman Z-score as Z_Score` from tmp1 """)
You only need to put the column name inside backticks, not its alias:
Without an alias:
df1 = sqlContext.sql("""select Company, Sector, Industry, `Altman Z-score` as Z_Score from tmp1""")
With an alias:
df1 = sqlContext.sql("""select t1.Company, t1.Sector, t1.Industry, t1.`Altman Z-score` as Z_Score from tmp1 t1""")
There is a problem in your query; the corrected query is below (wrap only the column name in backticks, then alias it as Z_Score):
df1 = sqlContext.sql("""select Company, Sector, Industry, `Altman Z-score` as Z_Score from tmp1 """)
An alternative:
import pyspark.sql.functions as F
df1 = sqlContext.sql("""select * from tmp1 """)
df1.select(F.col("Altman Z-score").alias("Z_Score")).show()
https://www.tutorialspoint.com/how-to-select-a-column-name-with-spaces-in-mysql
Refer to the link above: use the backtick ` character (the key shared with tilde ~) to quote columns whose names contain spaces. I tried the code below and it works.
data = spark.read.options(header='True', inferSchema='True', delimiter=',').csv(r'C:\Users\user\OneDrive\Desktop\diabetes.csv')
data.createOrReplaceTempView("DIABETICDATA")
spark.sql("""SELECT `Number of times pregnant` FROM DIABETICDATA WHERE `Number of times pregnant` > 10 """).show()