
How to handle out of memory gracefully in shell scripts

Is there a way to gracefully handle out-of-memory conditions in a shell script?

$ cat test.sh
#!/bin/sh
i=asdf
while true; do
  i="$i $i"
done
$ bash test.sh
test.sh: xrealloc: cannot allocate 18446744072098939008 bytes

Many programming languages allow handling out-of-memory exceptions using simple try-catch constructs. Is it possible to gracefully handle out-of-memory conditions in shell scripts / Bash as well? How?

Would it be possible to either free temporary buffers and attempt to continue execution, or do some custom error handling (save state) and exit with error?

Not that I'm aware of. Instead, when you hit a problem like this, the normal approach is to raise the limits via ulimit.

ulimit -v N # virtual address space, in kB -- the limit failed allocations actually hit
ulimit -m N # max resident set size, in kB (largely ignored by modern Linux kernels)
ulimit -s N # stack size, in kB
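
A related trick, if you want the "save state and exit with error" behaviour from the question: instead of raising the limit, lower it for just the memory-hungry part, so a failed allocation turns into a nonzero exit status you can react to. A minimal sketch, assuming the shell aborts with a nonzero status when an allocation fails (the 200000 kB cap is an arbitrary example value):

#!/bin/sh
# Sketch: run the memory-hungry work in a subshell with a capped virtual
# address space, so allocation failure shows up as a nonzero exit status
# instead of an uncontrolled OOM. The 200000 kB cap is just an example.
(
  ulimit -v 200000          # ~200 MB address space, for this subshell only
  i=asdf
  while true; do
    i="$i $i"               # doubles until the allocation fails
  done
)
if [ $? -ne 0 ]; then
  # custom error handling: save state, clean up, then exit with an error
  echo "memory-hungry step failed, exiting gracefully" 1>&2
  exit 1
fi

Using a subshell keeps the lowered limit from affecting the rest of the script.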

However, to detect it programmatically, you'd have to do something similar to what strace does and watch for ENOMEM .
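
You don't have to reimplement strace for that; running the script under strace itself and grepping the log for ENOMEM is a rough approximation. A sketch (the log filename is just an example, and depending on the kernel's overcommit settings the failure may surface as the OOM killer rather than an ENOMEM return):

# Trace memory-related syscalls (brk/mmap/...) and flag any ENOMEM results.
strace -f -e trace=memory -o alloc.log bash test.sh
if grep -q ENOMEM alloc.log; then
  echo "test.sh hit an out-of-memory condition" 1>&2
fi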

There's nothing as elegant as a try/catch for out-of-memory exceptions in Bash.

But you do have options for monitoring your memory consumption yourself.

http://man7.org/linux/man-pages/man5/proc.5.html

The simplest thing is to monitor the kernel's OOM (out of memory) score for your process, like this:

cat /proc/$$/oom_score

See more on the origin of the OOM score here: https://serverfault.com/a/571326/177301

The OOM score is roughly a percentage times ten, so it ranges from about 0 to 1000.

Using your example, that would work like this:

#!/bin/sh

i=asdf
while true; do
  i="$i $i"

  # Bail out before the kernel's OOM killer steps in: abort once this
  # process's OOM score climbs above 200 (roughly 20% of memory).
  if [ "$(cat /proc/$$/oom_score)" -gt 200 ]
  then
    echo "Abort: OOM score over 200" 1>&2
    exit 1
  fi

done

I experimented with this a little. On my Ubuntu VM the kernel would kill the script right after the OOM score hit 241. Your mileage may vary.
