
How to get logs for each individual argument passed to a shell script from a file

I have a shell script. In this script I am reading table names from a file and executing a command for each one.

The script is working fine. I am able to execute the command for all the tables in the file.

Shell script:

#!/bin/bash

[ $# -ne 1 ] && { echo "Usage : $0 input file "; exit 1; }
args_file=$1

TIMESTAMP=`date "+%Y-%m-%d"`
touch /home/$USER/logs/${TIMESTAMP}.success_log
touch /home/$USER/logs/${TIMESTAMP}.fail_log 
success_logs=/home/$USER/logs/${TIMESTAMP}.success_log
failed_logs=/home/$USER/logs/${TIMESTAMP}.fail_log

#Function to get the status of the job creation
function log_status
{
       status=$1
       message=$2
       if [ "$status" -ne 0 ]; then
                echo "`date +\"%Y-%m-%d %H:%M:%S\"` [ERROR] $message [Status] $status : failed" | tee -a "${failed_logs}"
                #echo "Please find the attached log file for more details"
                #exit 1
                else
                    echo "`date +\"%Y-%m-%d %H:%M:%S\"` [INFO] $message [Status] $status : success" | tee -a "${success_logs}"
                fi
}

while read table ;do 
  spark-submit hive.py $table 
done < ${args_file}

g_STATUS=$?
log_status $g_STATUS "Spark ${table}"

In this script I want to collect status logs and stdout logs. I want to collect the logs for each table in the file individually.

I want to know whether the spark-submit execution has succeeded or failed for each table in the file (the status logs).

How can I collect the stdout for each table individually and store it at a location in Linux?

What changes do I need to make to achieve this?

Make sure to redirect the logs (stdout) generated for each table instance in your script to a folder under /var/log/, say one called myScriptLogs:

mkdir -p /var/log/myScriptLogs || { echo "mkdir failed"; exit; }

while read -r table ;do 
  spark-submit hive.py "$table" > /var/log/myScriptLogs/"${table}_dump.log" 2>&1 
done < "${args_file}" 

The script will exit if the new directory cannot be created with mkdir for some reason. This creates a log for each table being processed under /var/log/myScriptLogs as <table_name>_dump.log, which you can change to whatever naming you want.
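To also capture the status logs per table that the question asks for, one option is to call the question's log_status function inside the loop, right after each spark-submit, so that $? refers to that run only. A minimal sketch, assuming the log_status function and the log-file variables from the original script are kept as-is:

while read -r table; do
  # redirect stdout and stderr of each run to its own per-table log file
  spark-submit hive.py "$table" > /var/log/myScriptLogs/"${table}_dump.log" 2>&1
  # $? is still the exit status of spark-submit here, so record it per table
  log_status $? "Spark ${table}"
done < "${args_file}"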

A couple of best practices: use the -r flag with read, and double-quote shell variables.
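For example, without -r a backslash in a line read from the file would be silently dropped (hypothetical input, shown only to illustrate the behaviour):

printf 'db\\table1\n' | while read table;    do echo "$table"; done    # prints: dbtable1
printf 'db\\table1\n' | while read -r table; do echo "$table"; done    # prints: db\table1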


Answer updated to also redirect stderr to the log file.
