
R DBI sparklyr dbWriteTable running with no result

I am coming from an MS-SQL environment into a HIVE environment with Spark access. I am trying to use RStudio and R (and sometimes Python via rPython) to replace some things I used to do with T-SQL, as well as many things I have never done before.

To make this work, I need to be able to read from and write to the HIVE DB.

I have connected using Spark and the R package sparklyr, and I can use the R package DBI with the Spark connection to connect to our HIVE cluster and pull data into an R data frame:

sc <- spark_connect(master = "yarn-client", spark_home="/usr/hdp/current/spark-client", config = config)
result3 <- dbGetQuery(sc, "select * from sampledb.sampletable limit 100")

The code above always works. I can also use dbGetQuery to create tables in the database via a quoted SQL statement, so it is not a write permissions issue.
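For example, a DDL statement of the kind described above can be run over the same connection (a minimal sketch; the table name rsparktest_ddl and its columns are hypothetical, only to illustrate the pattern):

# hypothetical DDL run through the same sparklyr/DBI connection
DBI::dbGetQuery(sc, "create table sampledb.rsparktest_ddl (id int, val string)")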

However, when I try to write data from an R data frame back to the HIVE cluster like this:

dbWriteTable(conn = sc, name = "sampledb.rsparktest3", value = result3)

It runs without errors, but the table never shows up and I cannot query it.

If I try to write the table again, I get this error:

> dbWriteTable(conn = sc, name = "sampledb.rsparktest3", value = result3)
Error in .local(conn, name, value, ...) : 
Table sampledb.rsparktest3 already exists

Any idea what is happening? Is there a better way to do this than DBI?

Thanks in advance for any help!

Below is the entire RStudio console log from running these statements:

> result3 <- dbGetQuery(sc, "select * from sampledb.sampletable limit 100")
> dbWriteTable(conn = sc, name = "sampledb.rsparktest3", value = result3)
> result3y <- dbGetQuery(sc, "select * from sampledb.rsparktest3 limit 2")
Error: org.apache.spark.sql.AnalysisException: Table not found: sampledb.rsparktest3; line 1 pos 35
at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:54)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:50)
at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:121)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:120)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:120)
at scala.collection.immutable.List.foreach(List.scala:318)
at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:120)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:120)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:120)
at scala.collection.immutable.List.foreach(List.scala:318)
at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:120)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:50)
at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:44)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:34)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:133)
at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at sparklyr.Invoke$.invoke(invoke.scala:102)
at sparklyr.StreamHandler$.handleMethodCall(stream.scala:97)
at sparklyr.StreamHandler$.read(stream.scala:62)
at sparklyr.BackendHandler.channelRead0(handler.scala:52)
at sparklyr.BackendHandler.channelRead0(handler.scala:14)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
> dbWriteTable(conn = sc, name = "sampledb.rsparktest3", value = result3)
Error in .local(conn, name, value, ...) : 
Table sampledb.rsparktest3 already exists

With a sparklyr connection, use spark_write_table instead of dbWriteTable to write back to Hive, as sketched below.
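A minimal sketch of that approach, assuming the sc connection and result3 data frame from the question (the temporary name result3_tmp is hypothetical, and whether a database-qualified table name is accepted may depend on the sparklyr/Spark version):

result3_tbl <- copy_to(sc, result3, name = "result3_tmp", overwrite = TRUE)  # copy the local R frame into Spark
spark_write_table(result3_tbl, "sampledb.rsparktest3")                       # persist the Spark DataFrame as a Hive table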

Writing a Spark table to Hive using sparklyr:

Load a local data frame into Spark:

iris_spark_table <- copy_to(sc, iris, overwrite = TRUE)  # copy the local iris data frame into the Spark session
sdf_copy_to(sc, iris_spark_table)

Create the table in Hive (add the database name if necessary):

DBI::dbGetQuery(sc, "create table iris_hive as SELECT * FROM iris_spark_table")
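A quick check, assuming the statements above succeeded, is to query the new table over the same connection (a minimal sketch; the limit value is arbitrary):

DBI::dbGetQuery(sc, "SELECT * FROM iris_hive limit 5")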

