
Monitoring EC2 disk metrics using aws mon scripts

I am monitoring EC2 instances with the AWS CloudWatch monitoring scripts (aws-scripts-mon), using the following script:

#!/bin/bash
a="$(df | grep /dev/ | awk '{print $1}')"
IFS=' ' read -r -a array <<< $a
#echo "${array[0]}"
for element in "${array[@]}"
do
/opt/aws-scripts-mon/mon-put-instance-data.pl --mem-util --disk-space-util  --swap-util --disk-path="$element" --aws-credential-file=/opt/aws-scripts-mon/awscreds.template
done

The issue is that for a few machines CloudWatch shows udev instead of the disk xvda1. Also, when I run this shell script in debug mode, the path resolves to xvda1 but is passed to CloudWatch as udev.

If you read the docs, they state that you need to give the mount point:

--disk-path=PATH Selects the disk on which to report.

PATH can specify a mount point or any file located on a mount point for the filesystem that needs to be reported

Your script is passing the filesystem (device) column, whereas, as the df output shows, you need to pass the mount point column:
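To make the difference concrete, here is a small runnable demo using a hypothetical df listing (the device names and sizes below are made up for illustration):

```shell
# Hypothetical df output, two device-backed filesystems.
df_sample='Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/xvda1       8123812 2631512   5056676  35% /
/dev/xvdf       10190100  243936   9405492   3% /data'

# Column 1 is the device name -- this is what the original script extracted.
echo "$df_sample" | grep /dev/ | awk '{print $1}'
# prints: /dev/xvda1 and /dev/xvdf

# Column 6 is the mount point -- this is what --disk-path expects.
echo "$df_sample" | grep /dev/ | awk '{print $6}'
# prints: / and /data
```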

a="$(df | grep /dev/ | awk '{print $6}')"

This should solve your problem. For xvda1 it will look like this: --disk-path=/
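Putting it together, a corrected sketch of the whole loop could look like the following. It iterates over mount points (df column 6) rather than device names; the script and credential-file paths are taken from the question and may differ on your host, so the call is guarded by an existence check:

```shell
#!/bin/bash
# Read mount points line by line into an array (mapfile handles
# mount points safely one per line).
mapfile -t mounts < <(df | grep /dev/ | awk '{print $6}')

MON=/opt/aws-scripts-mon/mon-put-instance-data.pl  # path from the question

for mount in "${mounts[@]}"; do
  # Report memory, swap, and disk-space utilization for each mount point.
  # Skipped silently if the AWS monitoring script is not installed here.
  [ -x "$MON" ] && "$MON" --mem-util --swap-util --disk-space-util \
    --disk-path="$mount" \
    --aws-credential-file=/opt/aws-scripts-mon/awscreds.template
done
```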
