
Collect logs for a function in python using shell script

My pyspark script runs fine. The script pulls data from MySQL and creates Hive tables on HDFS.

The pyspark script is as follows.

#!/usr/bin/env python
import sys
from pyspark import SparkContext, SparkConf
from pyspark.sql import HiveContext
conf = SparkConf()
sc = SparkContext(conf=conf)
sqlContext = HiveContext(sc)

#Condition to specify exact number of arguments in the spark-submit command line
if len(sys.argv) != 8:
    print "Invalid number of args......"
    print "Usage: spark-submit import.py Arguments"
    exit()
table = sys.argv[1]
hivedb = sys.argv[2]
domain = sys.argv[3]
port=sys.argv[4]
mysqldb=sys.argv[5]
username=sys.argv[6]
password=sys.argv[7]

df = sqlContext.read.format("jdbc").option("url", "{}:{}/{}".format(domain,port,mysqldb)).option("driver", "com.mysql.jdbc.Driver").option("dbtable","{}".format(table)).option("user", "{}".format(username)).option("password", "{}".format(password)).load()

#Register dataframe as table
df.registerTempTable("mytempTable")

# create hive table from temp table:
sqlContext.sql("create table {}.{} as select * from mytempTable".format(hivedb,table))

sc.stop()

Now, this pyspark script is invoked from a shell script. The shell script is given the table names as arguments through a file.

The shell script is as follows.

#!/bin/bash

source /home/$USER/spark/source.sh
[ $# -ne 1 ] && { echo "Usage : $0 table ";exit 1; }

args_file=$1

TIMESTAMP=`date "+%Y-%m-%d"`
touch /home/$USER/logs/${TIMESTAMP}.success_log
touch /home/$USER/logs/${TIMESTAMP}.fail_log
success_logs=/home/$USER/logs/${TIMESTAMP}.success_log
failed_logs=/home/$USER/logs/${TIMESTAMP}.fail_log

#Function to get the status of the job creation
function log_status
{
    status=$1
    message=$2
    if [ "$status" -ne 0 ]; then
        echo "`date +\"%Y-%m-%d %H:%M:%S\"` [ERROR] $message [Status] $status : failed" | tee -a "${failed_logs}"
        #echo "Please find the attached log file for more details"
        exit 1
    else
        echo "`date +\"%Y-%m-%d %H:%M:%S\"` [INFO] $message [Status] $status : success" | tee -a "${success_logs}"
    fi
}
while read -r table ;do 
  spark-submit --name "${table}" --master "yarn-client" --num-executors 2 --executor-memory 6g  --executor-cores 1 --conf "spark.yarn.executor.memoryOverhead=609" /home/$USER/spark/sql_spark.py ${table} ${hivedb} ${domain} ${port} ${mysqldb} ${username} ${password} > /tmp/logging/${table}.log 2>&1
  g_STATUS=$?
  log_status $g_STATUS "Spark job ${table} Execution"
done < "${args_file}"

echo "************************************************************************************************************************************************************************"

With the above shell script I can collect a separate log for every table listed in args_file.

Now I have more than 200 tables in MySQL, so I have modified the pyspark script as shown below: I created a function that loops through args_file and runs the import for each table.

New spark script

#!/usr/bin/env python
import sys
from pyspark import SparkContext, SparkConf
from pyspark.sql import HiveContext
conf = SparkConf()
sc = SparkContext(conf=conf)
sqlContext = HiveContext(sc)

#Condition to specify exact number of arguments in the spark-submit command line
if len(sys.argv) != 8:
    print "Invalid number of args......"
    print "Usage: spark-submit import.py Arguments"
    exit()
args_file = sys.argv[1]
hivedb = sys.argv[2]
domain = sys.argv[3]
port=sys.argv[4]
mysqldb=sys.argv[5]
username=sys.argv[6]
password=sys.argv[7]

def testing(table, hivedb, domain, port, mysqldb, username, password):

    print "*********************************************************table = {} ***************************".format(table)
    df = sqlContext.read.format("jdbc").option("url", "{}:{}/{}".format(domain,port,mysqldb)).option("driver", "com.mysql.jdbc.Driver").option("dbtable","{}".format(table)).option("user", "{}".format(username)).option("password", "{}".format(password)).load()

    #Register dataframe as table
    df.registerTempTable("mytempTable")

    # create hive table from temp table:
    sqlContext.sql("create table {}.{} stored as parquet as select * from mytempTable".format(hivedb,table))

input = sc.textFile('/user/XXXXXXX/spark_args/%s' %args_file).collect()

for table in input:
    testing(table, hivedb, domain, port, mysqldb, username, password)

sc.stop()

Now I want to collect a log for each individual table in args_file, but I only get a single log file that contains the logs of all the tables.

How can I achieve this? Or is my whole approach wrong?

New shell script:

spark-submit --name "${args_file}" --master "yarn-client" --num-executors 2 --executor-memory 6g  --executor-cores 1 --conf "spark.yarn.executor.memoryOverhead=609" /home/$USER/spark/sql_spark.py ${table} ${hivedb} ${domain} ${port} ${mysqldb} ${username} ${password} > /tmp/logging/${args_file}.log 2>&1

What you can do is write a python script that takes the single log file and slices it into a new log file at each line that prints the table name.

For example, one log file would start at

*************************************table=table1***************

and then the next log file would start from

*************************************table=table2****************

and so on. You can also use the table name as the file name.
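As a rough illustration, a post-processing sketch along these lines could do the slicing. It assumes the combined log is the one written by spark-submit, that every table's section starts with the "table = ..." marker line printed by the new pyspark script, and that the script name, output directory, and regex are made-up for the example rather than part of the original setup:

#!/usr/bin/env python
# Hypothetical helper (not part of the original setup): split the single
# combined log into one log file per table, cutting at the marker lines
# that the new pyspark script prints, e.g.
# "*********...***table = table1 ***************************"
import re
import sys

MARKER = re.compile(r"table\s*=\s*(\S+)")

def split_log(combined_log, out_dir):
    current = None                      # file handle for the table being written
    with open(combined_log) as log:
        for line in log:
            match = MARKER.search(line)
            if match and "*" in line:   # a new "table = <name>" marker line
                if current:
                    current.close()
                # use the table name as the per-table log file name
                current = open("{}/{}.log".format(out_dir, match.group(1)), "w")
            if current:
                current.write(line)
    if current:
        current.close()

if __name__ == "__main__":
    # e.g. python split_logs.py /tmp/logging/tables.log /tmp/logging   (paths assumed)
    split_log(sys.argv[1], sys.argv[2])

Naming each output file after the table implements the "table name as the file name" idea above; output before the first marker (Spark startup noise) is simply dropped in this sketch.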

