
Apache Spark fails to connect to MonetDB Cluster using JDBC Driver

We are having a problem connecting to a MonetDB cluster from Apache Spark over JDBC. Connections to a non-clustered database work. However, when we try to connect to the clustered MonetDB database through Apache Spark, it fails with an "unhandled result type" error. The query and the resulting stack trace are below.

We have already tried connecting to the MonetDB cluster with plain JDBC, and that works. Only Spark fails.

val v1 = hiveContext.load("jdbc",Map("url" -> "jdbc:monetdb://1.1.1.1/tpch1?user=monetdb&password=monetdb","dbtable" -> "(select count(*) from customer)v1"))
**java.sql.SQLException: node */tpch/1/monet returned unhandled result type**

java.sql.SQLException: node */tpch/2/monet returned unhandled result type
        at nl.cwi.monetdb.jdbc.MonetConnection$ResponseList.executeQuery(MonetConnection.java:2536)
        at nl.cwi.monetdb.jdbc.MonetConnection$ResponseList.processQuery(MonetConnection.java:2284)
        at nl.cwi.monetdb.jdbc.MonetStatement.internalExecute(MonetStatement.java:508)
        at nl.cwi.monetdb.jdbc.MonetStatement.execute(MonetStatement.java:349)
        at nl.cwi.monetdb.jdbc.MonetPreparedStatement.<init>(MonetPreparedStatement.java:118)
        at nl.cwi.monetdb.jdbc.MonetConnection.prepareStatement(MonetConnection.java:901)
        at nl.cwi.monetdb.jdbc.MonetConnection.prepareStatement(MonetConnection.java:825)
        at org.apache.spark.sql.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:96)
        at org.apache.spark.sql.jdbc.JDBCRelation.<init>(JDBCRelation.scala:125)
        at org.apache.spark.sql.jdbc.DefaultSource.createRelation(JDBCRelation.scala:114)
        at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:290)
        at org.apache.spark.sql.SQLContext.load(SQLContext.scala:679)
        at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:23)
        at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:28)
        at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:30)
        at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:32)
        at $iwC$$iwC$$iwC$$iwC.<init>(<console>:34)
        at $iwC$$iwC$$iwC.<init>(<console>:36)
        at $iwC$$iwC.<init>(<console>:38)
        at $iwC.<init>(<console>:40)
        at <init>(<console>:42)
        at .<init>(<console>:46)
        at .<clinit>(<console>)
        at .<init>(<console>:7)
        at .<clinit>(<console>)
        at $print(<console>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:483)
        at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
        at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
        at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
        at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:856)
        at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:901)
        at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:813)
        at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:656)
        at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:664)
        at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:669)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:996)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
        at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
        at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:944)
        at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1058)
        at org.apache.spark.repl.Main$.main(Main.scala:31)
        at org.apache.spark.repl.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:483)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

MAPI protocol trace from the driver (the hash digest appears corrupted in the capture):

RD 1429856293844: read final block: 67 bytes
RX 1429856293845: bwnthqobupI3CPY:merovingian:9:RIPEMD160,SHA256,SHA1,MD5:LIT:SHA512:
RD 1429856293845: inserting prompt
TD 1429856293846: write final block: 99 bytes
TX 1429856293846: BIG:merovingian:{SHA256}7070477fd396595f9293a54a5295b9e54b7929a9a9a9a54e5e1e5e1e3e0e0e0e0e0e0e0e0e0e0e0b0e:
read final block: 0 bytes
RX 1429856293847:
RD 1429856293847: inserting prompt
TD 1429856293847: write final block: 49 bytes
TX 1429856293847: sSET TIME ZONE INTERVAL '+05:30' HOUR TO MINUTE;
RD 1429856293848: read final block: 3 bytes
RX 1429856293848: &3

RD 1429856293848: inserting prompt
TD 1429856293855: write final block: 15 bytes
TX 1429856293855:
read final block: 0 bytes
RX 1429856293855:
RD 1429856293855: inserting prompt
TD 1429856293855: write final block: 68 bytes
TX 1429856293855: sPREPARE SELECT * FROM (select count(*) from customer)v1 WHERE 1=0;
RD 1429856293856: read final block: 52 bytes
RX 1429856293856: !node */tpch/2/monet returned unhandled result type

RD 1429856293856: inserting prompt
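The statement failing in the trace is Spark's schema probe: `JDBCRDD.resolveTable` (visible in the stack trace) wraps the `dbtable` option in a zero-row query so the database returns only column metadata, and MonetDB's JDBC driver then submits it as a `PREPARE`. A minimal sketch of the shape of that probe query (the class and method names here are hypothetical, not Spark source):

```java
public class SchemaProbe {
    // Spark wraps the user-supplied "dbtable" value in SELECT * FROM ... WHERE 1=0
    // so the query returns zero rows but still yields the result-set metadata.
    static String probeQuery(String dbtable) {
        return "SELECT * FROM " + dbtable + " WHERE 1=0";
    }

    public static void main(String[] args) {
        // Using the dbtable value from the question:
        System.out.println(probeQuery("(select count(*) from customer)v1"));
    }
}
```

This matches the `sPREPARE SELECT * FROM (select count(*) from customer)v1 WHERE 1=0;` line in the trace above, which is the statement the funnel rejects.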

The problem is not in JDBC, but in the merovingian funnel. It appears the funnel does not like the statement PREPARE SELECT * FROM (select count(*) from customer)v1 WHERE 1=0 ; See if you can prevent the application from using prepared statements. Please feel free to file a bug report about this at http://bugs.monetdb.org.
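Following that suggestion, a hedged sketch of the workaround: running the query through a plain `java.sql.Statement` rather than a `PreparedStatement`, so the driver sends the query text directly without a `PREPARE` step. The URL and credentials are the ones from the question; this assumes the MonetDB JDBC driver is on the classpath and is untested against a live cluster.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DirectQuery {
    public static void main(String[] args) throws Exception {
        // MonetDB JDBC driver class, as seen in the stack trace package names.
        Class.forName("nl.cwi.monetdb.jdbc.MonetDriver");
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:monetdb://1.1.1.1/tpch1", "monetdb", "monetdb");
             // createStatement() sends the SQL text as-is: no PREPARE is issued,
             // which is what the funnel was rejecting.
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("select count(*) from customer")) {
            while (rs.next()) {
                System.out.println(rs.getLong(1));
            }
        }
    }
}
```

Note that this only confirms the funnel accepts unprepared statements; Spark's JDBC data source still calls `prepareStatement` internally during schema resolution, so it does not by itself make the Spark load succeed.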

