
Does df.write.jdbc handle JDBC connection pooling?

Do you know whether the following line handles JDBC connection pooling:

df.write
  .mode("append")
  .jdbc(url, table, prop)
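For context, the fragment above assumes `url`, `table` and `prop` are already defined; a minimal self-contained version (the endpoint, table name and credentials are illustrative, not from the question) might look like:

```scala
import java.util.Properties

// Illustrative values only -- substitute your own JDBC endpoint and table.
val url   = "jdbc:postgresql://localhost:5432/mydb"
val table = "my_table"

val prop = new Properties()
prop.setProperty("user", "spark")
prop.setProperty("password", "secret")
prop.setProperty("driver", "org.postgresql.Driver")

df.write
  .mode("append")
  .jdbc(url, table, prop)
```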

Do you have any idea? Thanks.

I don't think so.

spark.read.jdbc requests Spark SQL to create a JDBCRelation. Eventually buildScan is executed, which in turn calls JDBCRDD.scanTable, leading to JdbcUtils.createConnectionFactory(options) for the JDBCRDD.

With that, you see driver.connect(options.url, options.asConnectionProperties), and unless the driver itself deals with connection pooling, Spark SQL does not do it.
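In other words, the connection factory Spark builds amounts to roughly the following sketch (a simplification for illustration, not the actual Spark source):

```scala
import java.sql.{Connection, Driver, DriverManager}
import java.util.Properties

// Simplified sketch: each call to the returned factory asks the registered
// JDBC driver for a fresh physical connection -- there is no pool in between.
def createConnectionFactory(url: String, props: Properties): () => Connection = {
  val driver: Driver = DriverManager.getDriver(url)
  () => driver.connect(url, props)
}
```

So each task that writes a partition opens its own plain connection via the driver; any pooling would have to come from the driver (or DataSource) you register, not from Spark.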

(just noticed that you asked another question)

df.write.jdbc is similar. It again leads to JDBCRelation, which uses the same RDD machinery.
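If you do need pooled connections on the write path, a common workaround is to bypass df.write.jdbc and write per partition against a pooled DataSource held in a singleton per executor JVM. A hedged sketch, assuming HikariCP as the pool (the table, column count and config values are illustrative):

```scala
import com.zaxxer.hikari.{HikariConfig, HikariDataSource}

// One pool per executor JVM, created lazily on first use.
object ConnectionPool {
  lazy val dataSource: HikariDataSource = {
    val cfg = new HikariConfig()
    cfg.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb") // illustrative
    cfg.setUsername("spark")
    cfg.setPassword("secret")
    cfg.setMaximumPoolSize(4)
    new HikariDataSource(cfg)
  }
}

df.foreachPartition { rows =>
  val conn = ConnectionPool.dataSource.getConnection
  try {
    val stmt = conn.prepareStatement("INSERT INTO my_table VALUES (?, ?)") // illustrative
    rows.foreach { row =>
      stmt.setObject(1, row.get(0))
      stmt.setObject(2, row.get(1))
      stmt.addBatch()
    }
    stmt.executeBatch()
  } finally {
    conn.close() // returns the connection to the pool, does not close it
  }
}
```

This trades Spark's built-in write path for explicit control over connection reuse; you lose the conveniences of .mode("append") and have to write the INSERT yourself.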
