Does Dataproc support Delta Lake format?
Is the Databricks Delta format available with Google's GCP Dataproc? For AWS and Azure it is clear that this is the case. However, after researching on the internet, I am unsure whether the same holds for GCP. The Databricks docs are not clear on this either.

I am assuming Google feels its own offerings are sufficient, e.g. Google Cloud Storage (and is that mutable?). This page https://docs.gcp.databricks.com/getting-started/overview.html provides too little context.
Delta Lake format is supported by Dataproc. You can simply use it like any other data format, such as Parquet or ORC. The following is a code example from this article.
# Copyright 2022 Google LLC.
# SPDX-License-Identifier: Apache-2.0
import sys

from pyspark.sql import SparkSession
from delta import *


def main():
    input = sys.argv[1]
    print("Starting job: GCS Bucket: ", input)
    # Enable the Delta Lake SQL extension and catalog on the Spark session.
    spark = SparkSession\
        .builder\
        .appName("DeltaTest")\
        .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")\
        .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")\
        .getOrCreate()
    # Write a small range of numbers as a Delta table, then read it back.
    data = spark.range(0, 500)
    data.write.format("delta").mode("append").save(input)
    df = spark.read \
        .format("delta") \
        .load(input)
    df.show()
    spark.stop()


if __name__ == "__main__":
    main()
You also need to add the dependency when submitting the job, with --properties="spark.jars.packages=io.delta:delta-core_2.12:1.1.0".
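For example, a submission might look like the following sketch; the cluster name, region, script name, and GCS path are hypothetical placeholders, not values from the original answer:

```shell
# Submit the PySpark script to a Dataproc cluster, pulling in the Delta Lake
# package via spark.jars.packages. CLUSTER_NAME, REGION, main.py, and the
# gs:// path are placeholders -- substitute your own values.
gcloud dataproc jobs submit pyspark main.py \
    --cluster=CLUSTER_NAME \
    --region=REGION \
    --properties="spark.jars.packages=io.delta:delta-core_2.12:1.1.0" \
    -- gs://YOUR_BUCKET/delta-test
```

Arguments after the `--` separator are passed through to the script itself, so `gs://YOUR_BUCKET/delta-test` becomes `sys.argv[1]` in the example above. Note that the delta-core version must be compatible with the Spark version on your Dataproc image.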