Elastic Logstash
You can integrate metering data into Amberflo using the open-source tool Logstash.

How It Works

The integration uses the Logstash S3 output plugin to write metering records to your Amberflo-provided S3 bucket. You can use Logstash's powerful parsing language to extract the metering data from your logs or repositories.

Data Sources

Logstash supports a wide variety of input plugins, allowing you to extract data from files, Elasticsearch, JDBC, S3, MongoDB, and many other systems. This flexibility makes it easy to integrate metering into your existing infrastructure. For the full list, see https://www.elastic.co/guide/en/logstash/current/input-plugins.html

Example Logstash Configuration

The following is a sample Logstash configuration that reads log lines from a file and writes the relevant metering records to Amberflo's S3 bucket.

Input Setup

Logstash will tail files located at /Users/demoaccount/input/. Sample file content:

```
log line 1
log line 2
mymeter 2 mycustomerid amberflo_meter
log line 3
```

How it works:

- Logstash continuously monitors the input folder for new files.
- It filters log lines using the condition `if "amberflo_meter" in [message]`.
- Matching lines are split by space (" ").
- The split values are mapped as follows: meterApiName → first item, meterValue → second item, customerId → third item.
- The records are then transformed using mutate logic and forwarded to Amberflo's S3 bucket.

```
input {
  file {
    path => "/Users/demoaccount/input/*"
  }
}
filter {
  if "amberflo_meter" in [message] {
    ruby {
      code => "event.set('time', ((event.get('@timestamp').to_f * 1000).to_i).to_s)"
    }
    mutate {
      split => { "message" => " " }
      add_field => { "meterApiName" => "%{[message][0]}" }
      add_field => { "meterValue" => "%{[message][1]}" }
      add_field => { "customerId" => "%{[message][2]}" }
      remove_field => ["host", "message", "@version", "@timestamp"]
    }
  } else {
    drop {}
  }
}
output {
  s3 {
    access_key_id => "xxxxxx"
    secret_access_key => "yyyyyyyyyyyyyy"
    region => "us-west-2"
    bucket => "demo-amberflo"
    size_file => 2048
    time_file => 5
    codec => "json"
    canned_acl => "bucket-owner-full-control"
  }
}
```

The resulting file (meter record) will be:

```json
{
  "meterApiName": "mymeter",
  "meterValue": "2",
  "customerId": "mycustomerid",
  "time": "1621619742810",
  "path": "/Users/demoaccount/input/sample.txt"
}
```

JDBC Source

You can use the Logstash JDBC input plugin to extract metering data directly from your database or repository. This is useful when your metering data is stored in structured tables: the plugin lets you run SQL queries that retrieve only the rows relevant for metering. For more details, refer to the official documentation: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jdbc.html

```
input {
  jdbc {
    statement => "SELECT id, mycolumn1, mycolumn2 FROM my_table WHERE id > :sql_last_value"
    use_column_value => true
    tracking_column => "id"
    jdbc_driver_library => "mysql-connector-java-5.1.36-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_user => "mysql"
    schedule => "* * * * *"
    # other configuration bits
  }
}
```
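The JDBC input emits one event per row, with fields named after the selected columns, so a small filter can reshape those rows into meter records. The following is a minimal sketch, not part of the original configuration: it assumes `mycolumn1` holds the meter value and `mycolumn2` the customer ID, and hardcodes a hypothetical meter name.

```
filter {
  # Assumption: mycolumn1 = meter value, mycolumn2 = customer ID.
  # Adjust the column names and the meter name to your own schema.
  ruby {
    code => "event.set('time', ((event.get('@timestamp').to_f * 1000).to_i).to_s)"
  }
  mutate {
    add_field => { "meterApiName" => "mymeter" }
    rename => {
      "mycolumn1" => "meterValue"
      "mycolumn2" => "customerId"
    }
    remove_field => ["@version", "@timestamp"]
  }
}
```

The same s3 output block from the file example can then deliver these records to the Amberflo bucket unchanged.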
{"query string" {"query" " amberflo meter ", "default field" "@message", "fields" \[], "type" "best fields", "default operator" "or", "max determinized states" 10000, "enable position increments" true, "fuzziness" "auto", "fuzzy prefix length" 0, "fuzzy max expansions" 50, "phrase slop" 0, "escape" false, "auto generate synonyms phrase query" true, "fuzzy transpositions" true, "boost" 1 } } ], "adjust pure negative" true, "boost" 1 } }, "aggregations" {} }' size => 500 scroll => "5m" docinfo => true ssl => true user => "myuser" password => "mypwd" \#schedule => " /1 " } }