# AWS CloudWatch
## Meter Ingestion via a Serverless Agent for AWS CloudWatch

If you'd like to try Amberflo without modifying your existing code or importing new libraries, you can use our serverless agent for AWS CloudWatch to automatically extract meter data from your CloudWatch logs.

### How It Works

1. **Configure the serverless agent.** Connect Amberflo's serverless agent to your AWS CloudWatch environment (within your AWS VPC).
2. **Log meter records with tags.** Log your meter events in CloudWatch, including a special Amberflo tag. This tag tells the agent which log entries represent meter data.
3. **Automatic extraction and ingestion.** The agent scans the logs, identifies entries with the Amberflo tag, and ingests them as meter events into Amberflo automatically.

### Example Project

We provide a demo project that shows:

- a REST API powered by AWS Lambda
- how to log and meter the number of API calls using CloudWatch
- other approaches for metering Lambda functions

### How Does It Work?

Here is a step-by-step overview of how the CloudWatch agent operates:

1. **Your code logs meter records.** Your application logs meter records to a CloudWatch log group (this is the default behavior for AWS Lambda functions).
2. **A Kinesis stream collects the logs.** A Kinesis stream captures the relevant log entries via a CloudWatch subscription filter. If you have multiple log groups, you can route them all to the same Kinesis stream.
3. **A Lambda agent processes the stream.** A Lambda function (the agent) consumes log entries from the Kinesis stream, extracts the meter records, and sends them to Amberflo for ingestion.

> 📘 **Note**
>
> In the example setup, the filter pattern used is `meter record for stream`.

You can use the code below (Node.js) for the Kinesis stream consumer Lambda:

```javascript
'use strict';

const crypto = require('crypto');
const zlib = require('zlib');
const AWS = require('aws-sdk');

const bucketName = process.env.INGEST_BUCKET_NAME;
const accessKeyId = process.env.ACCESS_KEY;
const secretAccessKey = process.env.SECRET_KEY;

const s3 = new AWS.S3({
  region: 'us-west-2',
  accessKeyId,
  secretAccessKey,
});

const prefix = 'meter record for stream';

exports.handler = async (event) => {
  const records = event.Records
    .map(r => r.kinesis.data)
    .map(d =>
      // decompress and parse the CloudWatch payload
      JSON.parse(zlib.gunzipSync(Buffer.from(d, 'base64')).toString())
    )
    .filter(m => m.messageType !== 'CONTROL_MESSAGE')
    .map(p => p.logEvents
      .map(x => x.message)
      .map(m => {
        const i = m.indexOf(prefix);
        if (i < 0) return; // keep only messages containing meter records
        return JSON.parse(m.slice(i + prefix.length + 1));
      })
      .filter(x => x)
    )
    .flat();
  await ingest(records);
};

async function ingest(records) {
  const date = new Date().toISOString().slice(0, 10);
  const key = `ingest/records/${date}/${crypto.randomBytes(20).toString('hex')}.json`;
  const params = {
    Bucket: bucketName,
    Key: key,
    Body: JSON.stringify(records),
  };
  return s3.putObject(params).promise();
}
```

This code takes log entries like the one below and ingests them in batches through your AWS S3 bucket:

```
meter record for stream {
  "meterApiName": "api calls",
  "customerId": "70f1dd87-6978-4d96-a934-5a83b63cdeb1",
  "meterTimeInMillis": 1663094105062,
  "meterValue": 1,
  "uniqueId": "c91b8860-3392-11ed-a17b-bfa2e899d2c9",
  "dimensions": {
    "method": "get",
    "endpoint": "meter-example"
  }
}
```

> 📘 **Ingest Record Format**
>
> If you log the meter records in a different format, you can modify the code above to marshal the records into the Amberflo ingest format.