Usage Metering
Ingestion Options
AWS CloudWatch
## Meter Ingestion via Serverless Agent for AWS CloudWatch

If you want to try out Amberflo without making significant changes to your code (and without importing a new library), you can configure Amberflo's serverless agent for AWS CloudWatch to automatically extract data from CloudWatch logs into Amberflo as meters.

What you need to do is configure our serverless agent to attach to your CloudWatch system in your AWS VPC. You then log the meter records, accompanied by an Amberflo tag so the agent can identify them. The agent will automatically extract all logs with the tag and ingest them as meters.

You can find a demo project example here: https://github.com/amberflo/lambda-metering-examples. It shows a REST API that is served by a Lambda function, and how to meter this Lambda (for instance, to track the number of API calls). In addition to the CloudWatch agent, it shows other ways to meter the Lambda.

To learn more about our integration with AWS CloudWatch, please contact us.
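As a sketch of the logging step described above, a Lambda handler can emit a meter record to CloudWatch Logs with a plain `console.log`, prefixed by the tag the agent watches for. The helper name `logMeterRecord` and the customer ID below are illustrative, not part of any Amberflo library:

```javascript
'use strict';

// Tag that lets the agent identify meter records among other log lines.
const PREFIX = 'meter-record-for-stream';

// Log a meter record to stdout; in AWS Lambda, stdout is captured
// by CloudWatch Logs automatically.
function logMeterRecord(meterApiName, customerId, meterValue, dimensions) {
  const record = {
    meterApiName,
    customerId,
    meterTimeInMillis: Date.now(),
    meterValue,
    // Illustrative unique ID; use a proper UUID in practice.
    uniqueId: `${customerId}-${Date.now()}`,
    dimensions,
  };
  console.log(`${PREFIX} ${JSON.stringify(record)}`);
  return record;
}

// Example: meter one API call.
logMeterRecord('api-calls', 'customer-123', 1, { method: 'get', endpoint: 'meter-example' });
```

Because the tag appears at the start of the logged line, the subscription filter and the consumer Lambda can both match it without parsing the surrounding log text.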
## How does it work?

Here is a step-by-step description of how the CloudWatch agent works:

1. Your code runs and logs meter records to a CloudWatch log group (this is the default for an AWS Lambda).
2. A Kinesis stream collects all the relevant log entries via the subscription filter. (If you have multiple log groups, you can collect from all of them into the same Kinesis stream.)
3. A Lambda agent (code below) consumes the log entries from the Kinesis stream, extracts the meter records, and sends them to Amberflo for ingestion.

If you are familiar with AWS SAM or CloudFormation, you can adapt our demo project (https://github.com/amberflo/lambda-metering-examples) to your needs and use it to deploy the agent. We recommend the Kinesis stream method, as it is more flexible. Otherwise, please follow the [Using CloudWatch Logs subscription filters](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html) guide to set up the Kinesis stream and CloudWatch subscription filter. Note that in the example below, the filter pattern is `meter-record-for-stream`.

You can use the code below (NodeJS) for the Kinesis stream consumer Lambda:

```javascript
'use strict';

const crypto = require('crypto');
const zlib = require('zlib');
const AWS = require('aws-sdk');

const bucketName = process.env.INGEST_BUCKET_NAME;
const accessKeyId = process.env.ACCESS_KEY;
const secretAccessKey = process.env.SECRET_KEY;

const s3 = new AWS.S3({
  region: 'us-west-2',
  accessKeyId,
  secretAccessKey,
});

const prefix = 'meter-record-for-stream';

exports.handler = async (event) => {
  const records = event.Records
    .map(r => r.kinesis.data)
    .map(d =>
      // decompress and parse the CloudWatch payload
      JSON.parse(zlib.gunzipSync(Buffer.from(d, 'base64')).toString())
    )
    .filter(m => m.messageType !== 'CONTROL_MESSAGE')
    .map(p => p.logEvents
      .map(x => x.message)
      .map(m => {
        const i = m.indexOf(prefix);
        if (i < 0) return; // keep only messages containing meter records
        return JSON.parse(m.slice(i + prefix.length + 1));
      })
      .filter(x => x)
    )
    .flat();

  await ingest(records);
};

async function ingest(records) {
  const date = new Date().toISOString().slice(0, 10);
  const key = `ingest/records/${date}/${crypto.randomBytes(20).toString('hex')}.json`;
  const params = {
    Bucket: bucketName,
    Key: key,
    Body: JSON.stringify(records),
  };
  return s3.putObject(params).promise();
}
```

This code will take log entries like the one below and ingest them in batches through your AWS S3 bucket:

```
INFO meter-record-for-stream {
  "meterApiName": "api-calls",
  "customerId": "70f1dd87-6978-4d96-a934-5a83b63cdeb1",
  "meterTimeInMillis": 1663094105062,
  "meterValue": 1,
  "uniqueId": "c91b8860-3392-11ed-a17b-bfa2e899d2c9",
  "dimensions": {
    "method": "get",
    "endpoint": "meter-example"
  }
}
```

> 📘 **Ingest record format**
>
> If you log the meter records in a different format, you can modify the code above to marshal the records into the Ingest Meter Records format.
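As a sketch of that marshaling step, suppose your application logs usage in its own shape; a small helper inside the consumer Lambda could map each parsed log entry into the ingest meter record format. The input field names (`apiName`, `account`, `count`, `at`, `labels`) are hypothetical stand-ins for whatever your logs actually contain:

```javascript
'use strict';

// Map a hypothetical application log shape to an Amberflo ingest meter record.
// Input field names (apiName, account, count, at, labels) are illustrative.
function marshalToMeterRecord(entry) {
  return {
    meterApiName: entry.apiName,
    customerId: entry.account,
    meterValue: entry.count,
    meterTimeInMillis: entry.at || Date.now(),
    dimensions: entry.labels || {},
  };
}

// Example usage:
const record = marshalToMeterRecord({
  apiName: 'api-calls',
  account: 'customer-123',
  count: 1,
  at: 1663094105062,
  labels: { method: 'get' },
});
console.log(JSON.stringify(record));
```

You would call such a helper in place of the direct `JSON.parse` in the consumer Lambda, so the records written to S3 already match the expected ingest format.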