How to run multiple shell scripts one by one in a single go using another shell script
Dear experts, I have a small problem. I am trying to run, in one go, multiple shell scripts (all with the same .sh extension) that live inside a directory. So far I have written a common runner script, shown below. The problem is that it never finishes; it just keeps running, and I cannot find where the problem is. I hope an expert can look into it. My small script is below. If I run the scripts manually, e.g.
bash scriptone.sh
bash scriptkk.sh
it works fine, but I don't want to do it by hand. Thanks.
#!/bin/sh
for f in *.sh; do
bash "$f" -H
done
You are probably calling yourself recursively: the runner script also matches *.sh, so it launches another copy of itself, which launches another, and so on. Skip it explicitly:
#!/bin/sh
for f in *.sh; do
    if [ "$f" = "$0" ]; then   # POSIX sh uses =, not ==, inside [ ]
        continue               # skip the runner script itself
    else
        echo "running: $f"
        bash "$f" -H
    fi
done
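One caveat with comparing `$f` against `$0`: the glob yields a bare filename, while `$0` can carry a path prefix (e.g. `./run_all.sh` when the runner is executed directly), so the comparison can silently fail. A sketch that compares basenames instead; the name `run_all.sh` here is only a hypothetical example:

```shell
#!/bin/sh
# Compare basenames so the self-check works no matter how the runner
# was invoked (./run_all.sh, bash run_all.sh, or via an absolute path).
self=$(basename "$0")
for f in *.sh; do
    [ "$f" = "$self" ] && continue   # skip the runner itself
    echo "running: $f"
    bash "$f" -H
done
```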
You are running them sequentially. Maybe one of the other scripts is still going? Try starting them all in the background. Simple version:
for f in *.sh; do bash "$f" -H & done
If there's any output, this will be a mess, though. Likewise, if you log out, they will crash. Here's an elaborated version to handle such things:
for f in *.sh; do
nohup bash "$f" -H <&- > "$f.log" 2>&1 &
done
The & at the end puts each one into the background, so the loop can start the next one without waiting for the current $f to finish. nohup catches SIGHUP, so if it takes a long time you can disconnect and come back later. <&- closes stdin. > "$f.log" gives each script a log of its own, so you can check them individually without their output getting all intermixed. 2>&1 makes sure any error output goes into the same log as stdout. Be aware that stderr is unbuffered while stdout IS buffered, so if an error seems to land in a weird place (too early) in the log, switch the redirections around:
nohup bash "$f" -H <&- 2>"$f.log" 1>&2 &
which ought to unbuffer them both and keep them collated.
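One more practical point: because everything runs in the background, the loop itself returns immediately. If you want the runner to block until the whole batch is done, a `wait` after the loop does it; a minimal sketch of that variant:

```shell
#!/bin/sh
# Same background loop as above, but the runner waits for every
# background job before exiting, so you know when the batch is done.
for f in *.sh; do
    nohup bash "$f" -H <&- > "$f.log" 2>&1 &
done
wait   # blocks until every background job has finished
echo "all scripts finished"
```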
Why do you give them all the same -H argument?
Since you mention below that you have 5k scripts to run, that maybe explains why it's taking so long... You might not want to pound the server with all of those at once. Let's elaborate on that just a little more...
Minimally, I'd do something like this:
for f in *.sh; do
nohup nice "$f" -H <&- > "$f.log" 2>&1 &
sleep 0.1 # fractional seconds ok, longer pause means fewer per sec
done
This will start nine or ten per second until all of them have been launched, and nohup nice will run each $f at a lower priority, so normal system requests will be able to get ahead of it.
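A fixed sleep only spaces out the launches; it doesn't bound how many run at once. A sketch that instead caps concurrency by running the scripts in batches, using only POSIX sh features; MAX=4 is an assumed value you would tune to the server:

```shell
#!/bin/sh
# Throttled variant: launch at most MAX scripts at a time, letting each
# batch drain before starting the next. MAX=4 is an assumption.
MAX=4
count=0
for f in *.sh; do
    nohup nice bash "$f" -H <&- > "$f.log" 2>&1 &
    count=$((count + 1))
    if [ "$count" -ge "$MAX" ]; then
        wait        # let the current batch finish before starting more
        count=0
    fi
done
wait   # catch the final, possibly partial batch
```

The trade-off is that a batch only finishes when its slowest script does; tools like parallel keep all slots busy instead.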
A better solution might be par or parallel.
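For instance, assuming GNU parallel is installed (it is not part of a base system), a one-liner along the lines of the loop above might look like this; -j 4 caps concurrency at four jobs, and {} expands to each filename:

```shell
# GNU parallel manages job slots and load for you; each script still
# gets its own log file, as in the hand-written loop above.
parallel -j 4 'bash {} -H > {}.log 2>&1' ::: *.sh
```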