How can I read multiple Parquet files in Spark Scala?
Note: there could be 100 date folders, and I need to pick only specific ones (say the 25th, 26th and 28th).
Is there any better way than the following?
import org.apache.spark._
import org.apache.spark.SparkContext._
import org.apache.spark.sql._
val spark = SparkSession.builder.appName("ScalaCodeTest").master("yarn").getOrCreate()
val parquetFiles = List("id=200393/date=2019-03-25", "id=200393/date=2019-03-26", "id=200393/date=2019-03-28")
spark.read.format("parquet").load(parquetFiles: _*)
The above code works, but I want to do something like the following:
// Desired style (note: this does not compile as written; a Scala List is
// immutable, so its elements cannot be assigned in place)
val parquetFiles = List()
parquetFiles(0) = "id=200393/date=2019-03-25"
parquetFiles(1) = "id=200393/date=2019-03-26"
parquetFiles(2) = "id=200393/date=2019-03-28"
spark.read.format("parquet").load(parquetFiles: _*)
You can read all folders in the directory id=200393 this way:
val df = spark.read.parquet("id=200393/*")
If you want to select only some dates, for example only September 2019:
val df = spark.read.parquet("id=200393/date=2019-09-*")
If you have some specific days, you can put them in a list:
val days = List("2019-09-02", "2019-09-03")
val paths = days.map(day => s"id=200393/date=$day")
val df = spark.read.parquet(paths:_*)
If you also want the partition columns back in the DataFrame, set basePath to the directory where partition discovery should start. With basePath = "id=200393/" the date column is retained (to keep the id column as well, basePath has to point at the parent of the id=... folders):
val df = spark
  .read
  .option("basePath", "id=200393/")
  .parquet("id=200393/date=*")