How to create a table as select in pyspark.sql

Is it possible to create a table on Spark using a select statement?

I do the following:

import findspark
findspark.init()
import pyspark
from pyspark.sql import SQLContext

sc = pyspark.SparkContext()
sqlCtx = SQLContext(sc)

spark_df = sqlCtx.read.format('com.databricks.spark.csv').options(header='true', inferschema='true').load("./data/documents_topics.csv")
spark_df.registerTempTable("my_table")

sqlCtx.sql("CREATE TABLE my_table_2 AS SELECT * from my_table")

but I get the error:

/Users/user/anaconda/bin/python /Users/user/workspace/Outbrain-Click-Prediction/test.py
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel).
17/01/21 17:19:43 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Traceback (most recent call last):
  File "/Users/user/spark-2.0.2-bin-hadoop2.7/python/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/Users/user/spark-2.0.2-bin-hadoop2.7/python/lib/py4j-0.10.3-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o19.sql.
: org.apache.spark.sql.AnalysisException: unresolved operator 'CreateHiveTableAsSelectLogicalPlan CatalogTable(
    Table: my_table_2
    Created: Sat Jan 21 17:19:53 EST 2017
    Last Access: Wed Dec 31 18:59:59 EST 1969
    Type: MANAGED
    Storage(InputFormat: org.apache.hadoop.mapred.TextInputFormat, OutputFormat: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat)), false;;
'CreateHiveTableAsSelectLogicalPlan CatalogTable(
    Table: my_table_2
    Created: Sat Jan 21 17:19:53 EST 2017
    Last Access: Wed Dec 31 18:59:59 EST 1969
    Type: MANAGED
    Storage(InputFormat: org.apache.hadoop.mapred.TextInputFormat, OutputFormat: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat)), false
: +- Project [document_id#0, topic_id#1, confidence_level#2]
: +- SubqueryAlias my_table
: +- Relation[document_id#0,topic_id#1,confidence_level#2] csv

at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.failAnalysis(CheckAnalysis.scala:40)
at org.apache.spark.sql.catalyst.analysis.Analyzer.failAnalysis(Analyzer.scala:58)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:374)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:67)
at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:126)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:67)
at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:58)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:49)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:582)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:745)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/user/workspace/Outbrain-Click-Prediction/test.py", line 16, in <module>
    sqlCtx.sql("CREATE TABLE my_table_2 AS SELECT * from my_table")
  File "/Users/user/spark-2.0.2-bin-hadoop2.7/python/pyspark/sql/context.py", line 360, in sql
    return self.sparkSession.sql(sqlQuery)
  File "/Users/user/spark-2.0.2-bin-hadoop2.7/python/pyspark/sql/session.py", line 543, in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
  File "/Users/user/spark-2.0.2-bin-hadoop2.7/python/lib/py4j-0.10.3-src.zip/py4j/java_gateway.py", line 1133, in __call__
  File "/Users/user/spark-2.0.2-bin-hadoop2.7/python/pyspark/sql/utils.py", line 69, in deco
    raise AnalysisException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.AnalysisException: "unresolved operator 'CreateHiveTableAsSelectLogicalPlan CatalogTable(\n\tTable: my_table_2\n\tCreated: Sat Jan 21 17:19:53 EST 2017\n\tLast Access: Wed Dec 31 18:59:59 EST 1969\n\tType: MANAGED\n\tStorage(InputFormat: org.apache.hadoop.mapred.TextInputFormat, OutputFormat: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat)), false;;\n'CreateHiveTableAsSelectLogicalPlan CatalogTable(\n\tTable: my_table_2\n\tCreated: Sat Jan 21 17:19:53 EST 2017\n\tLast Access: Wed Dec 31 18:59:59 EST 1969\n\tType: MANAGED\n\tStorage(InputFormat: org.apache.hadoop.mapred.TextInputFormat, OutputFormat: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat)), false\n: +- Project [document_id#0, topic_id#1, confidence_level#2]\n: +- SubqueryAlias my_table\n: +- Relation[document_id#0,topic_id#1,confidence_level#2] csv\n"

I've corrected this issue by using HiveContext instead of SQLContext as below:

import findspark
findspark.init()
import pyspark
from pyspark.sql import HiveContext

sc = pyspark.SparkContext()
sqlCtx = HiveContext(sc)

spark_df = sqlCtx.read.format('com.databricks.spark.csv').options(header='true', inferschema='true').load("./data/documents_topics.csv")
spark_df.registerTempTable("my_table")

sqlCtx.sql("CREATE TABLE my_table_2 AS SELECT * from my_table")

You should first run the select and assign the result to a DataFrame variable, and then register that DataFrame with registerTempTable, just like the DataFrame created from the CSV file.
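A minimal sketch of that suggestion, reusing sqlCtx and my_table from the answer above (the intermediate variable name selected_df is just an illustrative choice):

# run the SELECT first and keep its result in a DataFrame variable
selected_df = sqlCtx.sql("SELECT * FROM my_table")

# then register that DataFrame as another temporary table, just like the CSV-backed DataFrame
selected_df.registerTempTable("my_table_2")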
