
Bash redirect stdout and stderr to separate files with timestamps

Want to log all stdout and stderr to separate files, and add a timestamp to each line.

Tried the following, which works but is missing timestamps.

#!/bin/bash

debug_file=./stdout.log
error_file=./stderr.log

exec > >(tee -a "$debug_file") 2> >(tee -a "$error_file")

echo "hello"
echo "hello world"

this-will-fail
and-so-will-this

Adding timestamps. (Only want timestamps prefixed to log output.)

#!/bin/bash

debug_file=./stdout.log
error_file=./stderr.log

log () {
  file=$1; shift 
  while read -r line; do
    printf '%(%s)T %s\n' -1 "$line"
  done >> "$file"
}

exec > >(tee >(log "$debug_file")) 2> >(tee >(log "$error_file"))

echo "hello"
echo "hello world"

this-will-fail
and-so-will-this
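An aside on the `printf '%(%s)T' -1` line in `log`: the `%(fmt)T` conversion is a bash builtin feature (bash 4.2+), and `-1` selects the current time. `%s` prints epoch seconds, but any strftime(3) pattern works, so a human-readable variant of the same function (names and paths here are illustrative, not part of the original script) could look like:

```shell
#!/usr/bin/env bash
# Variant of log() with a human-readable timestamp prefix (bash >= 4.2).
log_readable () {
  file=$1; shift
  while read -r line; do
    # %(...)T accepts any strftime(3) format; -1 means "the current time"
    printf '%(%Y-%m-%dT%H:%M:%S)T %s\n' -1 "$line"
  done >> "$file"
}

demo_log=$(mktemp)
printf 'first\nsecond\n' | log_readable "$demo_log"
cat "$demo_log"   # each line is now prefixed with a timestamp
```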

The latter adds timestamps to the logs, but it also has a chance of messing up my terminal window. Reproducing this behavior was not straightforward; it only happened every now and then. I suspect it has to do with the subroutine/buffer still having output flowing through it.

Examples of the script messing up my terminal.

# expected/desired behavior
user@system:~ ./log_test
hello
hello world
./log_test: line x: this-will-fail: command not found
./log_test: line x: and-so-will-this: command not found
user@system:~ # <-- cursor blinks here

# erroneous behavior
user@system:~ ./log_test
hello
hello world
user@system:~ ./log_test: line x: this-will-fail: command not found
./log_test: line x: and-so-will-this: command not found
# <-- cursor blinks here

# erroneous behavior
user@system:~ ./log_test
hello
hello world
./log_test: line x: this-will-fail: command not found
user@system:~
./log_test: line x: and-so-will-this: command not found
# <-- cursor blinks here

# erroneous behavior
user@system:~ ./log_test
hello
hello world
user@system:~
./log_test: line x: this-will-fail: command not found
./log_test: line x: and-so-will-this: command not found
# <-- cursor blinks here

For fun I put a sleep 2 at the end of the script to see what would happen, and the problem never occurred again.

Hopefully someone knows the answer or can point me in the right direction.

Thanks

Edit

Judging from another question answered by Charles Duffy, what I'm trying to achieve is not really possible in bash: Separately redirecting and recombining stderr/stdout without losing ordering

The trick is to make sure that tee, and the process substitution running your log function, exit before the script as a whole does -- so that when the shell that started the script prints its prompt, there isn't any backgrounded process that might write more output after it's done.
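A minimal sketch of that idea, for stdout only. This assumes bash >= 4.4, where `$!` after a process substitution is a PID that `wait` accepts; on older releases `wait` can refuse it, which is why polling with `kill -0` is the more portable route. The log path is a throwaway temp file, purely illustrative:

```shell
#!/usr/bin/env bash
log=$(mktemp)
exec {stdout_orig_fd}>&1             # back up the real stdout first
exec > >(tee -a "$log"); tee_pid=$!  # $! is the process substitution's PID
echo "hello"                         # goes to both the terminal and "$log"
exec 1>&"$stdout_orig_fd" {stdout_orig_fd}>&-  # restore stdout; tee's stdin hits EOF
wait "$tee_pid"                      # don't finish until tee has flushed and exited
```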

As a working example (albeit one focused more on being explicit than on terseness):

#!/usr/bin/env bash
stdout_log=stdout.log; stderr_log=stderr.log

log () {
  file=$1; shift
  while read -r line; do
    printf '%(%s)T %s\n' -1 "$line"
  done >> "$file"
}

# first, make backups of your original stdout and stderr
exec {stdout_orig_fd}>&1 {stderr_orig_fd}>&2
# for stdout: start your process substitution, record its PID, start tee, record *its* PID
exec {stdout_log_fd}> >(log "$stdout_log"); stdout_log_pid=$!
exec {stdout_tee_fd}> >(tee "/dev/fd/$stdout_log_fd"); stdout_tee_pid=$!
exec {stdout_log_fd}>&- # close stdout_log_fd so the log process can exit when tee does
# for stderr: likewise
exec {stderr_log_fd}> >(log "$stderr_log"); stderr_log_pid=$!
exec {stderr_tee_fd}> >(tee "/dev/fd/$stderr_log_fd" >&2); stderr_tee_pid=$!
exec {stderr_log_fd}>&- # close stderr_log_fd so the log process can exit when tee does
# now actually swap out stdout and stderr for the processes we started
exec 1>&$stdout_tee_fd 2>&$stderr_tee_fd {stdout_tee_fd}>&- {stderr_tee_fd}>&-

# ...do the things you want to log here...
echo "this goes to stdout"; echo "this goes to stderr" >&2

# now, replace the FDs going to tee with the backups...
exec >&"$stdout_orig_fd" 2>&"$stderr_orig_fd"

# ...and wait for the associated processes to exit.
while :; do
  ready_to_exit=1
  for pid_var in stderr_tee_pid stderr_log_pid stdout_tee_pid stdout_log_pid; do
    # kill -0 just checks whether a PID exists; it doesn't actually send a signal
    kill -0 "${!pid_var}" &>/dev/null && ready_to_exit=0
  done
  (( ready_to_exit )) && break
  sleep 0.1 # avoid a busy-loop eating unnecessary CPU by sleeping before next poll
done

So What's With The File Descriptor Manipulation?

A few key concepts to be clear on:

  • All subshells have their own copies of the file descriptor table, as created when they were fork()ed off from their parent. From that point forward, each file descriptor table is effectively independent.
  • A process reading from (the read end of) a FIFO (or pipe) won't see an EOF until all programs writing to (the write end of) that FIFO have closed their copies of the descriptor.

...so, if you create a FIFO pair, fork() off a child process, and let the child process write to the write end of the FIFO, whatever's reading from the read end will never see an EOF until not just the child, but also the parent, closes their copies.
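That rule can be demonstrated in a few lines (the FIFO path is a throwaway temp name, purely illustrative). The reader only unblocks once both the child writer and the parent have closed the write end:

```shell
#!/usr/bin/env bash
fifo=$(mktemp -u); mkfifo "$fifo"
cat "$fifo" > "$fifo.out" &     # reader: blocks until it sees EOF
reader_pid=$!
exec {wfd}>"$fifo"              # parent opens -- and keeps -- the write end
( echo "from child" >&"$wfd" )  # a child writes and exits...
# ...but cat is still waiting: the parent's copy of $wfd is open
exec {wfd}>&-                   # close the parent's copy: NOW cat sees EOF
wait "$reader_pid"
rm -f "$fifo"
```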

Thus, the gymnastics you see here:

  • When we run exec {stdout_log_fd}>&-, we're closing the parent shell's copy of the FIFO writing to the log function for stdout, so the only remaining copy is the one used by the tee child process -- so that when tee exits, the subshell running log exits too.
  • When we run exec 1>&$stdout_tee_fd {stdout_tee_fd}>&-, we're doing two things: First, we make FD 1 a copy of the file descriptor whose number is stored in the variable stdout_tee_fd; second, we delete the stdout_tee_fd entry from the file descriptor table, so only the copy on FD 1 remains. This ensures that later, when we run exec >&"$stdout_orig_fd", we're deleting the last remaining write handle to the stdout tee function, causing tee to get an EOF on stdin (so it exits, thus closing the handle it holds on the log function's subshell and letting that subshell exit as well).
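The same move in isolation, outside of the tee pipeline (the file name is illustrative): point FD 1 somewhere new, drop the spare table entry, then put things back:

```shell
#!/usr/bin/env bash
logfile=$(mktemp)
exec {orig_fd}>&1               # keep a copy of the original stdout
exec {tmp_fd}>"$logfile"        # open the file on a shell-chosen FD number
exec 1>&"$tmp_fd" {tmp_fd}>&-   # FD 1 now points at the file; drop the extra entry
echo "logged line"              # travels through FD 1 into the file
exec 1>&"$orig_fd" {orig_fd}>&- # restore stdout and close the backup copy
```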

Some Final Notes On Process Management

Unfortunately, how bash handles subshells created for process substitutions has changed substantially between still-actively-deployed releases; so while in theory it's possible to use wait "$pid" to let a process substitution exit and collect its exit status, this isn't always reliable -- hence the use of kill -0.

However, if wait "$pid" worked, it would be strongly preferable, because the wait() syscall is what removes a previously-exited process's entry from the process table: it is guaranteed that a PID will not be reused (and a zombie process-table entry is left as a placeholder) if no wait() or waitpid() invocation has taken place.

Modern operating systems try fairly hard to avoid short-term PID reuse, so wraparound is not an active concern in most scenarios. However, if you're worried about this, consider using the flock-based mechanism discussed in https://stackoverflow.com/a/31552333/14122 for waiting for your process substitutions to exit, instead of kill -0.
