I am new to Databricks and I am trying to use R to read a temporary table created with Scala. Before importing the table with Scala, I established a connection to my company's database. Here is the code that I used unsuccessfully; I have replaced the connection credentials with "xxxx". I know that I must use the sparklyr package to do this, so assume that I have already loaded that package.
%scala
val jdbcUsername = "xxxx"
val jdbcPassword = "xxxx"
val jdbcHostname = "xxxx"
val jdbcPort = 9999
val jdbcDatabase ="xxxx"
import java.util.Properties
val jdbc_url = s"xxxx"
val connectionProperties = new Properties()
connectionProperties.put("user", s"${jdbcUsername}")
connectionProperties.put("password", s"${jdbcPassword}")
val task = spark.read.jdbc(jdbc_url, "dbo.task", connectionProperties)
task.createOrReplaceTempView("task_temp")
%r
teste = spark_load_table("task_temp")
The output error:
Error in UseMethod("hive_context") :
  no applicable method for 'hive_context' applied to an object of class "character"
I can read the table using Python, like this:
%python
task_teste = spark.table("task_temp")
But as I mentioned, I need to read it using R.
I had success today doing this:
sdf_sql("SELECT * FROM task_temp")
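For context, a minimal sketch of the sparklyr workflow I am using, assuming a Databricks notebook where the cluster's Spark session is reachable via `spark_connect(method = "databricks")` (the connection object `sc` is my own name; `task_temp` is the view registered by the Scala cell above). This is not standalone-runnable code, since it needs a live Databricks cluster:

```r
%r
library(sparklyr)
library(dplyr)

# Attach to the cluster's existing Spark session (Databricks-specific method)
sc <- spark_connect(method = "databricks")

# Option 1: run SQL against the temp view registered by the Scala cell
teste <- sdf_sql(sc, "SELECT * FROM task_temp")

# Option 2: reference the view as a dplyr-compatible table
teste_tbl <- tbl(sc, "task_temp")
```

Both options return a Spark DataFrame reference that can be used with dplyr verbs or collected into R with `collect()`.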