Pipe your metrics to Elasticsearch from your terminal.
Example use cases:
Load average of a Linux machine:
cat /proc/loadavg | awk '{print $1}' | ./pipes -i load-avg-load-index -h $host -v
will produce:
{
  "value": "1.2",
  "@timestamp": "1497399556",
  "hostname": "my-host"
}
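The awk step above just selects the first field of /proc/loadavg, which is the 1-minute load average. A quick way to see this in isolation, using a sample loadavg line in place of the real file:

```shell
# /proc/loadavg holds five fields; $1 is the 1-minute load average
echo "1.2 0.8 0.5 2/312 4567" | awk '{print $1}'
# → 1.2
```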
Size of each user's /home directory:
du -shBK * | awk -v OFS='\t' '{print $2, $1}' | ./pipes -m key_value -i home-dir-index -h $host -v
will create 3 events, like these:
{
  "key": "joe",
  "value": "8000K",
  "@timestamp": "1497399556",
  "hostname": "my-host"
}
--
{
  "key": "ubuntu",
  "value": "16000K",
  "@timestamp": "1497399556",
  "hostname": "my-host"
}
--
{
  "key": "mark",
  "value": "32000K",
  "@timestamp": "1497399556",
  "hostname": "my-host"
}
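key_value mode reads one "key value" pair per line. The awk step in the command above swaps du's "SIZE NAME" columns into that order; here is the same transformation on fixed sample input (the sizes are made up for illustration):

```shell
# du -shBK prints "SIZE<TAB>NAME" per entry; swap the columns so each line
# becomes "NAME<TAB>SIZE", the key/value order key_value mode expects
printf '8000K\tjoe\n16000K\tubuntu\n32000K\tmark\n' | awk -v OFS='\t' '{print $2, $1}'
# prints three tab-separated lines: joe 8000K, ubuntu 16000K, mark 32000K
```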
List of all options
Usage of ./pipes:
--dry-run
Enable dry-run mode (just print the JSON, without making a request)
-h, --host string
Elasticsearch host (default "localhost")
-i, --index string
Index name
-m, --mode string
Mode (single_value|key_value) (default "single_value")
-p, --password string
Basic auth password
-P, --port string
Elasticsearch port (default "9200")
-t, --type string
Index log type (default "log")
-u, --user string
Basic auth username
-v, --verbose
Verbose output
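Putting the fields together, a document like the ones above can be assembled in plain shell, e.g. to compare against --dry-run output. This is a sketch of the document shape shown in the examples, not necessarily the exact payload ./pipes builds:

```shell
# Assemble a single_value-style document by hand (field names taken from the
# examples above; the real tool may serialize differently)
value="1.2"; ts="1497399556"; host="my-host"
printf '{"value": "%s", "@timestamp": "%s", "hostname": "%s"}\n' "$value" "$ts" "$host"
# → {"value": "1.2", "@timestamp": "1497399556", "hostname": "my-host"}
```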