I want to create a Hive table using my Spark dataframe's schema. How can I do that?
For fixed columns, I can use:
val CreateTable_query = "Create Table my_table(a string, b string, c double)"
sparksession.sql(CreateTable_query)
But I have many columns in my dataframe, so is there a way to generate such a query automatically?
Assuming you are using Spark 2.1.0 or later and my_DF is your dataframe:
import java.util.Arrays;
import java.util.stream.Collectors;
import org.apache.spark.sql.types.StructType;

//get the schema as a string of comma-separated "field datatype" pairs
StructType my_schema = my_DF.schema();
String columns = Arrays.stream(my_schema.fields())
        .map(field -> field.name() + " " + field.dataType().typeName())
        .collect(Collectors.joining(","));
//drop the table if already created
spark.sql("drop table if exists my_table");
//create the table using the dataframe schema
spark.sql("create table my_table(" + columns + ")
row format delimited fields terminated by '|' location '/my/hdfs/location'");
//write the dataframe data to the hdfs location of the created Hive table
my_DF.write()
     .format("csv")   // the built-in csv source, available since Spark 2.0
     .option("delimiter", "|")
     .mode("overwrite")
     .save("/my/hdfs/location");
The other method is to use a temporary view:
my_DF.createOrReplaceTempView("my_temp_table");
spark.sql("drop table if exists my_table");
spark.sql("create table my_table as select * from my_temp_table");
As per your question, it looks like you want to create a table in Hive using your dataframe's schema. But since you say the dataframe has many columns, there are two options:
Consider this code:
package hive.example

import org.apache.spark.sql.Row
import org.apache.spark.sql.SparkSession

object checkDFSchema extends App {
  val sparkSession = SparkSession.builder().enableHiveSupport().getOrCreate()
  //First option for creating a hive table through the dataframe
  val DF = sparkSession.sql("select * from salary")
  DF.createOrReplaceTempView("tempTable")
  sparkSession.sql("Create table yourtable as select * from tempTable")
  //Second option for creating a hive table from the schema
  val oldDFF = sparkSession.sql("select * from salary")
  //Generate the schema out of the dataframe
  val schema = oldDFF.schema
  //Generate an RDD of your data
  val rowRDD = sparkSession.sparkContext.parallelize(Seq(Row(100, "a", 123)))
  //Create a new DF from the data and the schema
  val newDFwithSchema = sparkSession.createDataFrame(rowRDD, schema)
  newDFwithSchema.createOrReplaceTempView("tempTable")
  sparkSession.sql("create table FinalTable AS select * from tempTable")
}
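To check that the new table picked up the schema, a quick sanity query (assuming the code above ran against your metastore):
// Column names and types should match oldDFF.schema
sparkSession.sql("describe FinalTable").show()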
Another way is to use the methods available on StructType: sql, simpleString, treeString, etc. You can create DDL from a dataframe's schema, and create a dataframe's schema from your DDL.
Here is one example (up to Spark 2.3):
// Setup Sample Test Table to create Dataframe from
spark.sql(""" drop database hive_test cascade""")
spark.sql(""" create database hive_test""")
spark.sql("use hive_test")
spark.sql("""CREATE TABLE hive_test.department(
department_id int ,
department_name string
)
""")
spark.sql("""
INSERT INTO hive_test.department values (101, "Oncology")
""")
spark.sql("SELECT * FROM hive_test.department").show()
/***************************************************************/
Now I have a dataframe to play with. In real cases you'd use dataframe readers to create the dataframe from files/databases. Let's use its schema to create DDL:
// Create DDL from Spark Dataframe Schema using simpleString function
// Regex to remove unwanted characters
val sqlrgx = """(struct<)|(>)|(:)""".r
// Create DDL sql string and remove unwanted characters
val sqlString = sqlrgx.replaceAllIn(spark.table("hive_test.department").schema.simpleString, " ")
// Create Table with sqlString
spark.sql(s"create table hive_test.department2( $sqlString )")
From Spark 2.4 onwards you can use the fromDDL and toDDL methods on StructType:
val fddl = """
department_id int ,
department_name string,
business_unit string
"""
// Easily create StructType from DDL String using fromDDL
val schema3: StructType = org.apache.spark.sql.types.StructType.fromDDL(fddl)
// Create DDL String from StructType using toDDL
val tddl = schema3.toDDL
spark.sql(s"drop table if exists hive_test.department2 purge")
// Create Table using string tddl
spark.sql(s"""create table hive_test.department2 ( $tddl )""")
// Test by inserting sample rows and selecting
spark.sql("""
INSERT INTO hive_test.department2 values (101, "Oncology", "MDACC Texas")
""")
spark.table("hive_test.department2").show()
spark.sql(s"drop table hive_test.department2")
From Spark 2.4 onwards you can use the toDDL function to get the column names and types (even for nested structs):
val df = spark.read....
df.schema.toDDL
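For instance, a hypothetical nested schema (names invented purely for illustration) shows what toDDL produces:
import org.apache.spark.sql.types._

// Hypothetical nested schema, just for illustration
val nested = StructType(Seq(
  StructField("id", IntegerType),
  StructField("address", StructType(Seq(
    StructField("city", StringType),
    StructField("zip", StringType)
  )))
))

// Prints something like:
// `id` INT,`address` STRUCT<`city`: STRING, `zip`: STRING>
println(nested.toDDL)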
Here is a PySpark version to create a Hive table from a parquet file. You may have generated parquet files using an inferred schema and now want to push the definition to the Hive metastore. You can also push the definition to a system like AWS Glue or AWS Athena, not just to the Hive metastore. Here I am using spark.sql to push/create the permanent table.
# Location where my parquet files are present.
df = spark.read.parquet("s3://my-location/data/")

# dtypes gives (column_name, data_type) pairs
keyanddatatypes = df.dtypes
print("number of columns:", len(keyanddatatypes))

buf = []
buf.append('CREATE EXTERNAL TABLE test123 (')
# "name type" pairs, comma-separated except after the last column
buf.append(',\n'.join('{} {}'.format(name, dtype) for name, dtype in keyanddatatypes))
buf.append(')')
buf.append('STORED AS parquet')
buf.append("LOCATION 's3://my-location/data/'")
## partition by pt
tabledef = '\n'.join(buf)

print("---------print definition ---------")
print(tabledef)

## create the table using spark.sql. Assuming you are using spark 2.1+
spark.sql(tabledef)
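As a side note, when the parquet files already exist, the catalog API can register the external table and infer the schema in one call. A sketch in Scala (assuming Spark 2.2+; the same method exists in PySpark):
// Registers an external table over the existing parquet files;
// the schema is inferred from the files themselves.
spark.catalog.createTable("test123", "s3://my-location/data/", "parquet")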