I've got data in a Spark DataFrame, and I want to convert it back to SQL to do some analysis. Does anyone have an idea how I can do it? Something like df.to_sql(...)?
Thanks!
You can use the explain operator; please refer to this link.
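Note that explain does not convert the DataFrame to SQL; it prints the query plan. A minimal sketch, assuming an existing SparkSession named spark and a DataFrame df:

```scala
// explain() prints the physical plan for the DataFrame's query.
df.explain()

// explain(true) additionally prints the parsed, analyzed,
// and optimized logical plans.
df.explain(true)
```

This is useful for inspecting how Spark will execute a query, not for running SQL against the data.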
Try this:
df.write.option("header", "true").saveAsTable("my_sql_table")
You can then query my_sql_table using SQL.
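For example, a sketch of the follow-up query, assuming the table was saved as above and the same Spark session is still available:

```scala
// Query the saved table with SQL. The table name `my_sql_table`
// is the one passed to saveAsTable above.
val result = spark.sql("SELECT * FROM my_sql_table")
result.show()
```

Unlike a temporary view, a table created with saveAsTable is persisted to the metastore and survives the session.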
You can process a DataFrame with SQL using Spark SQL.
val df = Seq(
  ("Edward", 1, 1000,  "me1@example.com"),
  ("Michal", 2, 15000, "me1@example.com"),
  ("Steve",  3, 25000, "you@example.com"),
  ("Jordan", 4, 40000, "me1@example.com")
).toDF("Name", "ID", "Salary", "MailId")
OR
val df = spark.read.json("examples/src/main/resources/employee.json")
// Displays the content of the DataFrame to stdout
df.show()
+------+---+------+---------------+
| Name| ID|Salary| MailId|
+------+---+------+---------------+
|Edward| 1| 1000|me1@example.com|
|Michal| 2| 15000|me1@example.com|
| Steve| 3| 25000|you@example.com|
|Jordan| 4| 40000|me1@example.com|
+------+---+------+---------------+
// This import is needed to use the $-notation
import spark.implicits._
// Print the schema in a tree format
df.printSchema()
// Select only the "Name" column
df.select("Name").show()
// Select employees whose salary > 15000
df.filter($"Salary" > 15000).show()
The sql function on a SparkSession lets applications run SQL queries programmatically and returns the result as a DataFrame.
// Register the DataFrame as a SQL temporary view
df.createOrReplaceTempView("employee")
val sqlDF = spark.sql("SELECT * FROM employee")
sqlDF.show()
+------+---+------+---------------+
| Name| ID|Salary| MailId|
+------+---+------+---------------+
|Edward| 1| 1000|me1@example.com|
|Michal| 2| 15000|me1@example.com|
| Steve| 3| 25000|you@example.com|
|Jordan| 4| 40000|me1@example.com|
+------+---+------+---------------+
Temporary views in Spark SQL are session-scoped and disappear when the session that created them terminates. If you want a temporary view that is shared among all sessions and kept alive until the Spark application terminates, you can create a global temporary view.
// Register the DataFrame as a global temporary view
df.createGlobalTempView("employee")
// Global temporary view is tied to a system preserved database `global_temp`
spark.sql("SELECT * FROM global_temp.employee").show()
+------+---+------+---------------+
| Name| ID|Salary| MailId|
+------+---+------+---------------+
|Edward| 1| 1000|me1@example.com|
|Michal| 2| 15000|me1@example.com|
| Steve| 3| 25000|you@example.com|
|Jordan| 4| 40000|me1@example.com|
+------+---+------+---------------+
For more details, please refer to the Spark documentation:
https://spark.apache.org/docs/2.3.0/sql-programming-guide.html
Hope it helps!