
Best way to read a TSV file using Apache Spark in Java

I have a TSV file, where the first line is the header. I want to create a JavaPairRDD from this file. Currently, I'm doing so with the following code:

TsvParser tsvParser = new TsvParser(new TsvParserSettings());
List<String[]> allRows;
List<String> headerRow;
try (BufferedReader reader = new BufferedReader(new FileReader(myFile))) {
    allRows = tsvParser.parseAll(reader);
    // Removes the header row
    headerRow = Arrays.asList(allRows.remove(0));
}
JavaPairRDD<String, MyObject> myObjectRDD = javaSparkContext
        .parallelize(allRows)
        .mapToPair(row -> new Tuple2<>(row[0], myObjectFromArray(row)));

I was wondering if there was a way to have the javaSparkContext read and process the file directly instead of splitting the operation into two parts.

EDIT: This is not a duplicate of How do I convert csv file to rdd, because I'm looking for an answer in Java, not Scala.

Use https://github.com/databricks/spark-csv:

import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

SQLContext sqlContext = new SQLContext(sc);
DataFrame df = sqlContext.read()
    .format("com.databricks.spark.csv")
    .option("inferSchema", "true")
    .option("header", "true")
    .option("delimiter","\t")
    .load("cars.csv");

df.select("year", "model").write()
    .format("com.databricks.spark.csv")
    .option("header", "true")
    .save("newcars.csv");
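
If you need a JavaPairRDD rather than a DataFrame (as in the question), a minimal sketch of the conversion could look like the following. It assumes the question's myObjectFromArray(String[]) helper and the usual scala.Tuple2 import:

// Sketch: convert the DataFrame loaded above into a JavaPairRDD keyed by the first column.
// Because inferSchema is on, column values may not be strings, so they are converted explicitly.
JavaPairRDD<String, MyObject> myObjectRDD = df.javaRDD()
        .mapToPair(row -> {
            String[] fields = new String[row.size()];
            for (int i = 0; i < row.size(); i++) {
                fields[i] = row.isNullAt(i) ? null : row.get(i).toString();
            }
            return new Tuple2<>(fields[0], myObjectFromArray(fields));
        });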

Try the code below to read the CSV file and create a JavaPairRDD.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.PairFunction;

import scala.Tuple2;

public class SparkCSVReader {

    public static void main(String[] args) {

        SparkConf conf = new SparkConf().setAppName("CSV Reader");
        JavaSparkContext sc = new JavaSparkContext(conf);
        JavaRDD<String> allRows = sc.textFile("c:\\temp\\test.csv"); // read csv file
        String header = allRows.first(); // take out header
        JavaRDD<String> filteredRows = allRows.filter(row -> !row.equals(header)); // filter out the header row
        JavaPairRDD<String, MyCSVFile> filteredRowsPairRDD = filteredRows.mapToPair(parseCSVFile); // create pairs
        filteredRowsPairRDD.foreach(data -> {
            System.out.println(data._1() + " ### " + data._2().toString()); // print key and object
        });
        sc.stop();
        sc.close();
    }

    private static PairFunction<String, String, MyCSVFile> parseCSVFile = (row) -> {
        String[] fields = row.split(",");
        return new Tuple2<String, MyCSVFile>(row, new MyCSVFile(fields[0], fields[1], fields[2]));
    };

}
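
Since the question is actually about a TSV file, the same textFile/filter/mapToPair approach works with a tab split. A minimal sketch of the adapted pair function, where MyCSVFile is the same hypothetical value class used above:

// Sketch: same approach as parseCSVFile, but splitting on tabs for TSV input
// and keying by the first field instead of the whole line.
private static PairFunction<String, String, MyCSVFile> parseTSVLine = (row) -> {
    String[] fields = row.split("\t", -1); // -1 keeps trailing empty fields
    return new Tuple2<String, MyCSVFile>(fields[0], new MyCSVFile(fields[0], fields[1], fields[2]));
};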

You can also use Databricks spark-csv (https://github.com/databricks/spark-csv); equivalent CSV support is built into Spark as of 2.0.0.

I'm the author of uniVocity-parsers and can't help you much with Spark, but I believe something like this can work for you:

parserSettings.setHeaderExtractionEnabled(true); // captures the header row

parserSettings.setProcessor(new AbstractRowProcessor() {
    @Override
    public void rowProcessed(String[] row, ParsingContext context) {
        String[] headers = context.headers(); // not sure if you need them
        // build the pair for this row, e.g.:
        Tuple2<String, MyObject> pair = new Tuple2<>(row[0], myObjectFromArray(row));
        // process your stuff (collect the pairs and hand them to Spark afterwards).
    }
});

If you want to parallelize the processing of each row, you can wrap it in a ConcurrentRowProcessor:

parserSettings.setProcessor(new ConcurrentRowProcessor(new AbstractRowProcessor() {
    @Override
    public void rowProcessed(String[] row, ParsingContext context) {
        String[] headers = context.headers(); // not sure if you need them
        Tuple2<String, MyObject> pair = new Tuple2<>(row[0], myObjectFromArray(row));
        // process your stuff.
    }
}, 1000)); // 1000 rows loaded in memory.

Then just call parse:

new TsvParser(parserSettings).parse(myFile);
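
Putting the pieces together for the question's use case, here's a minimal sketch, assuming the question's MyObject and myObjectFromArray(String[]) helper and the usual scala.Tuple2/JavaPairRDD imports: the processor collects one pair per data row, and once parse() returns, the list is handed to Spark with parallelizePairs.

// Sketch: collect (key, MyObject) pairs while uniVocity parses the file,
// then hand them to Spark in one go.
List<Tuple2<String, MyObject>> pairs = new ArrayList<>();

TsvParserSettings parserSettings = new TsvParserSettings();
parserSettings.setHeaderExtractionEnabled(true); // drop the header row while parsing
parserSettings.setProcessor(new AbstractRowProcessor() {
    @Override
    public void rowProcessed(String[] row, ParsingContext context) {
        pairs.add(new Tuple2<>(row[0], myObjectFromArray(row))); // one pair per data row
    }
});

new TsvParser(parserSettings).parse(myFile);

// parallelizePairs turns the collected list into a JavaPairRDD
JavaPairRDD<String, MyObject> myObjectRDD = javaSparkContext.parallelizePairs(pairs);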

Hope this helps!

Apache Spark 2.x has a built-in CSV reader, so you don't have to use https://github.com/databricks/spark-csv.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

/**
 *
 * @author cpu11453local
 */
public class Main {
    public static void main(String[] args) {


        SparkSession spark = SparkSession.builder()
                .master("local")
                .appName("meowingful")
                .getOrCreate();

        Dataset<Row> df = spark.read()
                    .option("header", "true")
                    .option("delimiter","\t")
                    .csv("hdfs://127.0.0.1:9000/data/meow_data.csv");

        df.show();
    }
}
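
If a JavaPairRDD is still needed (as in the original question), the Dataset can be converted directly. A minimal sketch keyed by the first column, assuming the usual JavaPairRDD and Tuple2 imports:

// Sketch: key each row by its first column and keep the Row as the value.
// Without inferSchema, the built-in csv reader reads every column as a string.
JavaPairRDD<String, Row> pairs = df.toJavaRDD()
        .mapToPair(row -> new Tuple2<>(row.getString(0), row));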

And the Maven pom.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.meow.meowingful</groupId>
    <artifactId>meowingful</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
    </properties>

    <dependencies>
        <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core_2.11 -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>2.2.0</version>
        </dependency>


        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>2.2.0</version>
        </dependency>
    </dependencies>

</project>
