AWS S3
Write events directly to an Amberflo-supplied and secured AWS S3 bucket.

Amberflo provides you with an AWS S3 bucket, with access rights and controls, to which you write meter records. Amberflo automatically picks up the records for processing as they arrive in the bucket; failure reports are written back into the same bucket. There are many ways to write to S3: you can use the AWS S3 SDK, AWS Glue, Logstash, Fluentd, and other tools. Please contact us to get the S3 bucket provisioned for your account.

Format

The meter records you send to the S3 bucket should be in the same standardized format accepted by Ingest Meter Records. Here is an example:

```json
[{
  "customerId": "customer-123",
  "meterApiName": "ComputeHours",
  "meterValue": 5,
  "meterTimeInMillis": 1619445706909,
  "dimensions": {
    "region": "us-west-2",
    "az": "az1"
  }
}]
```

We also support the NDJSON format (https://github.com/ndjson/ndjson-spec), i.e. JSON objects separated by newlines:

```json
{ "customerId": "customer-123", "meterApiName": "ComputeHours", "meterValue": 5, "meterTimeInMillis": 1619445706909 }
{ "customerId": "customer-321", "meterApiName": "ComputeHours", "meterValue": 4, "meterTimeInMillis": 1619445712341 }
{ "customerId": "customer-123", "meterApiName": "ComputeHours", "meterValue": 1, "meterTimeInMillis": 1619445783456 }
```

Code example

The S3 object key should include the date, to allow easier troubleshooting (e.g. /ingest/amberdata/06-07-2022/0000a2e4-e6ad-11ec-8293-6a8da1c4f9f0):

```python
import json
from uuid import uuid1
from datetime import datetime

import boto3

records_to_send = [{
    'customerId': 'customer-123',
    'meterApiName': 'ComputeHours',
    'meterValue': 5,
    'meterTimeInMillis': 1619445706909,
    'dimensions': {
        'region': 'us-west-2',
        'az': 'az1'
    }
}]

bucket_name = '183-amberflo'
date = datetime.now().strftime('%m-%d-%Y')
object_key = 'ingest/amberdata/' + date + '/' + str(uuid1())

s3 = boto3.resource('s3')
s3.Object(bucket_name, object_key).put(
    Body=json.dumps(records_to_send),
    ACL='bucket-owner-full-control'
)
```

Troubleshooting

For troubleshooting issues with ingesting through S3, Amberflo will generate an S3 file containing the failure reason in the following location (a sketch for polling these reports follows the compression section below):

s3://<bucket-name>/failed-requests/<date>/<original-uri>-reason.txt

Compression (gzip support)

You can write compressed files in gzip format with the ".gz" extension. Amberflo will decompress the file while ingesting if the file extension is ".gz".
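As a minimal sketch of a compressed upload (assuming the same bucket name and key layout as the code example above, with the batch serialized as NDJSON; the ".gz" suffix on the object key is what marks the file as compressed):

```python
import gzip
import json
from uuid import uuid1
from datetime import datetime

import boto3

records = [
    {'customerId': 'customer-123', 'meterApiName': 'ComputeHours',
     'meterValue': 5, 'meterTimeInMillis': 1619445706909},
    {'customerId': 'customer-321', 'meterApiName': 'ComputeHours',
     'meterValue': 4, 'meterTimeInMillis': 1619445712341},
]

# NDJSON: one JSON object per line.
payload = '\n'.join(json.dumps(r) for r in records).encode('utf-8')

bucket_name = '183-amberflo'  # placeholder; use your provisioned bucket
date = datetime.now().strftime('%m-%d-%Y')
object_key = 'ingest/amberdata/' + date + '/' + str(uuid1()) + '.gz'

s3 = boto3.resource('s3')
s3.Object(bucket_name, object_key).put(
    Body=gzip.compress(payload),
    ACL='bucket-owner-full-control'
)
```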
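And here is the troubleshooting sketch referenced above. It is a hedged example, not an official snippet: the failed-requests prefix comes from the path documented in the Troubleshooting section, while the bucket name is a placeholder. It lists any failure reports written back into the bucket and prints each reason:

```python
import boto3

bucket_name = '183-amberflo'  # placeholder; use your provisioned bucket
prefix = 'failed-requests/'   # optionally narrow this to a <date> subfolder

s3 = boto3.resource('s3')
for obj in s3.Bucket(bucket_name).objects.filter(Prefix=prefix):
    # Each *-reason.txt object explains why the matching upload failed.
    reason = obj.get()['Body'].read().decode('utf-8')
    print(obj.key, '->', reason)
```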