
Limit step execution in a loop

I have a bash script like this:

#!/bin/bash
while true; do
    sudo tcpdump -i eth0 -w dump.pcap -c 1500 &
    chromium-browser --app http://domain.com &
    sleep 60
    killall chromium-browser
    sudo killall tcpdump

    # do some stuff with pcap file
    # it basically converts the pcap to plain text using tshark
    # then a PHP script parses the plain text

    sleep 240
done

It works fine. But sometimes, for whatever reason, nothing is killed and the script gets stuck at that step of the loop. It sits there doing nothing, consuming no resources, apparently waiting for something to be killed.

I've tried limiting the packets captured by tcpdump, but it didn't work. tcpdump finishes its job normally, without having to be killed, but the script doesn't kill chromium and it doesn't proceed with the rest of the code.

Is there any way to detect if a step is taking too long and simply kill everything and move to the next step?

Update

The "do some stuff" part is long code

It converts the pcap file to plain text using tshark, and a PHP script then parses the plain text. The problem is not in this part, because neither tshark nor the PHP script is ever called. Everything stops before that.

Increasing the sleep

It does not reproduce the problem.

If you don't mind processes exiting non-gracefully, you could use SIGKILL instead of the default SIGTERM:

killall -9 chromium-browser
sudo killall tcpdump

However, in this case it is preferable to capture the PIDs of the processes you have started and kill only those (instead of killing every chromium-browser and tcpdump instance).

You can access the PID of the most recently backgrounded process with $!.

sudo tcpdump -i eth0 -w dump.pcap -c 1500 &
tcpdump_pid=$!
chromium-browser --app http://domain.com &
chromium_pid=$!
sleep 60
sudo kill -9 "$tcpdump_pid"
kill "$chromium_pid"

To answer your question of "Is there any way to detect if a step is taking too long and simply kill everything and move to the next step?" I would suggest using the timeout utility in coreutils.

timeout 5 sudo kill -9 "$chromium_pid"
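As a quick sanity check of how timeout behaves, here is a minimal sketch in which sleep 10 stands in for a command that hangs; timeout kills it after one second and exits with the distinctive status 124:

```shell
#!/bin/bash
# "sleep 10" is a stand-in for a command that hangs; timeout sends
# SIGTERM after 1 second (add "-k 5" to follow up with SIGKILL).
timeout 1 sleep 10
status=$?

# timeout exits with 124 when it had to kill the command itself
if [ "$status" -eq 124 ]; then
    msg="command timed out"
    echo "$msg"
fi
```

Checking for status 124 lets the script distinguish "the command was killed by timeout" from the command's own exit codes.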

Although it would be advisable to determine why this is hanging rather than work around it, else you will have a resource leak. In fact, it will be neater to rewrite your loop in terms of timeout:

sudo timeout 60 tcpdump -i eth0 -w dump.pcap -c 1500 &
timeout 60 chromium-browser --app http://domain.com &
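Putting that together, the whole loop can be restructured around timeout plus wait, so a hung chromium-browser can never block the script for more than 60 seconds. This is only a sketch: the two sleep commands are stand-ins for the real tcpdump and chromium-browser invocations so it can run anywhere, and the processing step is left as a comment:

```shell
#!/bin/bash
# Sketch of the restructured loop. The two "sleep 2" commands are
# stand-ins for the real long-running commands:
#   sudo tcpdump -i eth0 -w dump.pcap -c 1500
#   chromium-browser --app http://domain.com
while true; do
    timeout 60 sleep 2 &    # capture stand-in, killed after 60s at most
    timeout 60 sleep 2 &    # browser stand-in, killed after 60s at most
    wait                    # returns once BOTH bounded jobs have exited

    # ...convert the pcap with tshark and parse it with PHP here...
    # sleep 240

    loop_done=yes
    break                   # single pass for the sketch; drop this in the real loop
done
echo "$loop_done"
```

Because both children are bounded by timeout, the wait is guaranteed to return, and no killall or PID bookkeeping is needed at all.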
sudo tcpdump -Z root -w dump.pcap -n -i eth0 -G 300 -W 1

-G — rotation period in seconds: tcpdump starts a new dump file every 300 seconds, so combined with -W 1 the command exits automatically after that period

-Z — after opening the capture device, drop privileges and run as the specified user (here root, so no effective drop)

-W — limit on the number of rotated files to save; with -W 1, tcpdump exits once the first file is complete
