Extracting HTTP status code from “find -exec curl”

I'm trying to extract the HTTP status code from "curl", in the context of a "find -exec", into a variable. I need this to test for failure so that I can issue reports and prevent the script from running again. I can currently extract the code and print it to stdout using --write-out, but I need it stored within the script for later use.

I currently have something similar to:

find . -cmin -140 -type f -iname '*.gz' -exec curl -T {} --write-out "%{http_code}\n" www.example.com/{} \; 

Sample output:

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                             Dload  Upload   Total   Spent    Left  Speed
  101  7778    0     0  101  7778      0  17000 --:--:-- --:--:-- --:--:-- 17000
  000
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                             Dload  Upload   Total   Spent    Left  Speed
  101  7795    0     0  101  7795      0  17433 --:--:-- --:--:-- --:--:-- 17433
  000

The '000' is the HTTP status code printed to the console. I would like to keep the curl output in the console window for when this script is manually tested, but I do need to extract the status code for later use in the script.

The -exec argument to find runs each command in a subprocess, which then exits; there is no simple way to smuggle the status out of that subprocess back to the calling script. A better approach is to read find's output in a loop:

find . -cmin -140 -type f -iname '*.gz' -print0 |
while IFS= read -r -d '' file; do
    # The progress meter goes to stderr, so it still shows on the console;
    # -o /dev/null keeps the response body out of the captured value,
    # leaving only the %{http_code} from --write-out in $status.
    status=$(curl -T "$file" -o /dev/null --write-out '%{http_code}' "www.example.com/$file")
    case $status in
      200) ;;
        *) echo "$0: $file failed with status $status" >&2; break;;
    esac
done
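
One caveat, assuming bash: each side of a pipeline runs in its own subshell, so $status set inside the loop above is gone once the loop finishes. Since you need the code "stored within the script for later use", feed the loop with process substitution instead, which keeps the loop in the current shell. A minimal sketch under that assumption (the failed_status variable is just illustrative):

failed_status=0
while IFS= read -r -d '' file; do
    status=$(curl -T "$file" -o /dev/null --write-out '%{http_code}' "www.example.com/$file")
    if [ "$status" != 200 ]; then
        # Remember the first non-200 code and stop processing files.
        failed_status=$status
        echo "$0: $file failed with status $status" >&2
        break
    fi
done < <(find . -cmin -140 -type f -iname '*.gz' -print0)

After the loop, $failed_status holds the first non-200 code (or 0 if every upload succeeded), so the rest of the script can test it to decide whether to issue reports and prevent the next run.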
