
Pyspark writing data into Hive

Below is my code for writing data into Hive:

from pyspark import since,SparkContext as sc
from pyspark.sql import SparkSession
from pyspark.sql.functions import _functions , isnan
from pyspark.sql import SQLContext
from pyspark.sql.types import *
from pyspark import HiveContext as hc


spark = SparkSession.builder.appName("example-spark").config("spark.sql.crossJoin.enabled","true").config('spark.sql.warehouse.dir',"file:///C:/spark-2.0.0-bin-hadoop2.7/bin/metastore_db/spark-warehouse").config('spark.rpc.message.maxSize','1536').getOrCreate()
Name = spark.read.csv("file:///D:/valid.csv", header="true", inferSchema=True, sep=',')

join_df=join_df.where("LastName != ''").show()  
join_df.registerTempTable("test")
hc.sql("CREATE TABLE dev_party_tgt_repl STORED AS PARQUETFILE AS SELECT * from dev_party_tgt")

After executing the above code, I get the following error:

Traceback (most recent call last):
  File "D:\01 Delivery Support\01 easyJet\SparkEclipseWorkspace\SparkTestPrograms\src\NameValidation.py", line 22, in <module>
    join_df.registerTempTable("test")
AttributeError: 'NoneType' object has no attribute 'test'

My system environment details:

  • OS: Windows
  • IDE: Eclipse Neon
  • Spark version: 2.0.0

Try this instead. DataFrame.show() only prints the rows and returns None, so assigning its result back to join_df leaves join_df as None, which is what triggers the AttributeError:

join_df.where("LastName != ''").write.saveAsTable("dev_party_tgt_repl")
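
For reference, here is a minimal end-to-end sketch of the corrected flow (assumptions: the CSV path and target table name are taken from the question, and the SparkSession is built with Hive support enabled so that saveAsTable writes to the Hive metastore):

from pyspark.sql import SparkSession

# Build a session with Hive support so saveAsTable targets the Hive metastore.
spark = (SparkSession.builder
         .appName("example-spark")
         .config("spark.sql.crossJoin.enabled", "true")
         .enableHiveSupport()
         .getOrCreate())

# Read the CSV (path taken from the question).
join_df = spark.read.csv("file:///D:/valid.csv", header=True, inferSchema=True, sep=",")

# Keep the filtered DataFrame; do not assign the result of show(), which returns None.
filtered = join_df.where("LastName != ''")
filtered.show()

# Write the result as a Hive table; managed tables are stored as Parquet by default.
filtered.write.mode("overwrite").saveAsTable("dev_party_tgt_repl")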



 