
moveToHDFS file not found in PySpark script

I have a strange issue.

When I run this command in the terminal, it works and copies the file to the desired location:

hdfs dfs -copyFromLocal concsessions.csv /user/username/spark_exports/

However, when I run it as part of my script (below), it doesn't, and I get the error shown here. Can anyone help me?

I'm not sure what I'm doing wrong, but it must be something!

OSError: [Errno 2] No such file or directory

Code:

from pyspark.sql import SparkSession
from datetime import datetime

#Set the date for the filename
now = datetime.now()
yday  = long(now.strftime('%s')) - 24*60*60

spark = SparkSession\
.builder\
.appName('wap')\
.master('yarn')\
.enableHiveSupport()\
.getOrCreate()

import datetime
import pyspark.sql.functions as F
from pyspark.sql.functions import col

#The below prints your results to your chosen destination (Hive, Stdout, CSV)

print('data load starting...')

cmd = '''select * from db.conc_sessions'''
df1 = spark.sql(cmd)
df1.printSchema()
print('data ingested successfully')

print('setting variables...')

timestart= '2019-10-14 00:00:00'
timeend= '2019-10-14 23:59:59'
time_to_check = datetime.datetime.strptime(timestart, '%Y-%m-%d %H:%M:%S')

iters = 0
session = 0
add = []

print('begin iteration...')
while iters < 96:

    time_to_add = iters * 900
    time_to_checkx = time_to_check + datetime.timedelta(seconds=time_to_add)
    stringtime = time_to_checkx.strftime("%m/%d/%Y, %H:%M:%S")

    iters = iters + 1

    spark_date_format = "YYYY-MM-dd hh:mm:ss"
    df1 = df1.withColumn('start_timestamp', F.to_timestamp(df1.start_time, spark_date_format))
    df1 = df1.withColumn('end_timestamp', F.to_timestamp(df1.end_time, spark_date_format))

    filterx = df1.filter( (df1.start_time < time_to_checkx) & (df1.end_time > time_to_checkx ))

    session = filterx.count()
    newrow = [stringtime, session]
    add.append(newrow)

import pandas as pd
output = pd.DataFrame.from_records(add)
output.columns = ['time','count']
output = output.groupby(['time'])[['count']].agg('sum').reset_index()
output.to_csv('concsessions.csv', sep=',')

#copy the CSV from the local server to HDFS
import subprocess
subprocess.call("hdfs dfs -copyFromLocal concsessions.csv /user/username/spark_exports/")

You should use shell=True in subprocess.call. Without it, the whole string is treated as the name of a single executable, which is why you get "No such file or directory".

From https://docs.python.org/2/library/subprocess.html#frequently-used-arguments:

The shell argument (which defaults to False) specifies whether to use the shell as the program to execute. If shell is True, it is recommended to pass args as a string rather than as a sequence.

On Unix with shell=True, the shell defaults to /bin/sh.

Edit your call as below and see if it works:

subprocess.call("hdfs dfs -copyFromLocal concsessions.csv /user/username/spark_exports/", shell=True)
subprocess.call(['hdfs', 'dfs', '-copyFromLocal', 'concsessions.csv', '/user/username/spark_exports/'])
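
As a side note, a minimal sketch of a slightly more defensive variant: subprocess.check_call with an argument list raises an error instead of silently returning a non-zero exit code, so a failed copy won't go unnoticed. This assumes concsessions.csv is in the script's working directory and that /user/username/spark_exports/ already exists in HDFS (both taken from the question); it is not the only way to do it.

import subprocess

# Sketch: run the HDFS copy as an argument list (no shell needed) and
# raise if the command exits with a non-zero status.
# Assumes concsessions.csv is in the current working directory and the
# HDFS directory /user/username/spark_exports/ already exists.
try:
    subprocess.check_call(
        ['hdfs', 'dfs', '-copyFromLocal',
         'concsessions.csv', '/user/username/spark_exports/']
    )
    print('copy to HDFS succeeded')
except subprocess.CalledProcessError as e:
    print('copy to HDFS failed with exit code %d' % e.returncode)

Passing the arguments as a list also avoids shell quoting issues, since each argument is handed to the hdfs binary exactly as written.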

