
How can I load CSV data into Hive using Spark DataFrames?

I am trying to load data from a CSV file into Hive. I am using the Java API of Spark for this, and I want to know how I can load the data into Hive using Spark DataFrames.
Here is what I have so far, reading JSON:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class first {
    public static void main(String[] args) {
        String inputFileName = "samples/big.txt";
        String outputDirName = "output";

        // Local Spark context for testing
        SparkConf conf = new SparkConf().setAppName("org.sparkexample.WordCount").setMaster("local");
        JavaSparkContext context = new JavaSparkContext(conf);

        @SuppressWarnings("deprecation")
        SQLContext sc = new SQLContext(context);

        // Read the input file as JSON and print the inferred schema
        DataFrame input = sc.jsonFile(inputFileName);
        input.printSchema();
    }
}

But I don't know how to do the same with CSV. I have some idea about the spark-csv package provided by Databricks; a sketch of what I think it looks like is below.
Kindly let me know how I can do it.
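
For reference, this is roughly what I understand the spark-csv approach to be (a sketch only, assuming the com.databricks:spark-csv package is added as a dependency; I have not tested it):

// Sketch: read the CSV through the Databricks spark-csv data source,
// reusing the SQLContext "sc" from the code above.
DataFrame csvInput = sc.read()
        .format("com.databricks.spark.csv")
        .option("header", "true")        // first line contains column names
        .option("inferSchema", "true")   // guess column types instead of treating all as strings
        .load(inputFileName);
csvInput.printSchema();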

On Spark 2.x, CSV support is built in (no need for an external package). Try reading it like this:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession
        .builder()
        .appName("org.sparkexample.WordCount")
        .master("local[*]")
        .enableHiveSupport()
        .getOrCreate();
Dataset<Row> input = spark.read().csv(inputFileName);

You can also add options, for example:

Dataset<Row> input = spark.read().option("header", "true").csv(inputFileName);

This will treat the first line as the header and name the columns accordingly.
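
Since `.enableHiveSupport()` is already set on the SparkSession above, you can then save the DataFrame into Hive. A minimal sketch (the table name "csv_table" is just a placeholder):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;

// Read the CSV with a header row, then persist it as a Hive-managed table.
Dataset<Row> input = spark.read().option("header", "true").csv(inputFileName);

input.write()
     .mode(SaveMode.Overwrite)     // replace the table if it already exists
     .saveAsTable("csv_table");    // "csv_table" is a placeholder table name

After that you can query it with spark.sql("SELECT * FROM csv_table").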
