AWS S3
Write events directly to an Amberflo-supplied and secured AWS S3 bucket.

Amberflo provisions an AWS S3 bucket for your account with the necessary access rights and permissions to write meter data. Meter files are automatically picked up for processing as soon as they arrive in the bucket. If processing fails, error reports are written back into the same bucket for review.

There are several ways to write data to S3, including the AWS S3 SDK, AWS Glue, Logstash, Fluentd, and other ingestion tools.

To get your S3 bucket provisioned, please contact us.

Format

The meter records you send to the S3 bucket should be in the same standardized format accepted by the Ingest API. Here are some examples:

```json
[{
  "customerId": "customer-123",
  "meterApiName": "ComputeHours",
  "meterValue": 5,
  "meterTimeInMillis": 1619445706909,
  "dimensions": {
    "region": "us-west-2",
    "az": "az1"
  }
}]
```

We also support NDJSON format (JSON objects separated by a newline):

```json
{ "customerId": "customer-123", "meterApiName": "ComputeHours", "meterValue": 5, "meterTimeInMillis": 1619445706909 }
{ "customerId": "customer-321", "meterApiName": "ComputeHours", "meterValue": 4, "meterTimeInMillis": 1619445712341 }
{ "customerId": "customer-123", "meterApiName": "ComputeHours", "meterValue": 1, "meterTimeInMillis": 1619445783456 }
```

Code example

The S3 object key should include the date to allow easier troubleshooting (e.g. /ingest/amberdata/06-07-2022/0000a2e4-e6ad-11ec-8293-6a8da1c4f9f0):

```python
import json
from uuid import uuid1
from datetime import datetime

import boto3

records_to_send = [{
    'customerId': 'customer-123',
    'meterApiName': 'ComputeHours',
    'meterValue': 5,
    'meterTimeInMillis': 1619445706909,
    'dimensions': {
        'region': 'us-west-2',
        'az': 'az1'
    }
}]

bucket_name = '183-amberflo'
date = datetime.now().strftime(r'%m-%d-%Y')
object_key = 'ingest/amberdata/' + date + '/' + str(uuid1())

s3 = boto3.resource('s3')
s3.Object(bucket_name, object_key).put(
    Body=json.dumps(records_to_send),
    ACL='bucket-owner-full-control'
)
```

Troubleshooting

If you encounter issues while ingesting data through S3, Amberflo will generate a failure report in the following S3 path:

s3://<bucket-name>/failed-requests/<date>/<original-uri>-reason.txt

This file contains the reason for the failure and can help you diagnose and resolve the issue (a short sketch for listing these reports appears at the end of this page).

Compression (gzip support)

Amberflo supports ingesting gzip-compressed files. Simply upload files with the .gz extension; Amberflo will automatically decompress these files during ingestion.
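As a minimal sketch, not taken from the Amberflo documentation, the snippet below shows one way gzip-compressed NDJSON meter records could be written to the bucket with boto3. The bucket name and key layout reuse the assumptions from the code example above; only the gzip compression and the .gz extension are new.

```python
import gzip
import json
from datetime import datetime
from uuid import uuid1

import boto3

# Two NDJSON records in the standard ingest format (values reused from the examples above).
records = [
    {'customerId': 'customer-123', 'meterApiName': 'ComputeHours',
     'meterValue': 5, 'meterTimeInMillis': 1619445706909},
    {'customerId': 'customer-321', 'meterApiName': 'ComputeHours',
     'meterValue': 4, 'meterTimeInMillis': 1619445712341},
]

# One JSON object per line (NDJSON), then gzip-compress the payload.
payload = '\n'.join(json.dumps(r) for r in records).encode('utf-8')
compressed = gzip.compress(payload)

bucket_name = '183-amberflo'  # hypothetical bucket name from the example above
date = datetime.now().strftime(r'%m-%d-%Y')
# The .gz extension signals that the file is gzip-compressed.
object_key = 'ingest/amberdata/' + date + '/' + str(uuid1()) + '.gz'

s3 = boto3.resource('s3')
s3.Object(bucket_name, object_key).put(
    Body=compressed,
    ACL='bucket-owner-full-control'
)
```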
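For the Troubleshooting section above, here is a small sketch, under the same assumptions, of listing and printing failure reports with boto3. The failed-requests/ prefix mirrors the path format described there, and the bucket name is the same hypothetical one used in the examples.

```python
import boto3

bucket_name = '183-amberflo'  # hypothetical bucket name from the examples above
prefix = 'failed-requests/'   # assumed prefix, mirroring the failure-report path shown above

s3 = boto3.resource('s3')
bucket = s3.Bucket(bucket_name)

# Print the key and contents of each failure report written back to the bucket.
for obj in bucket.objects.filter(Prefix=prefix):
    report = obj.get()['Body'].read().decode('utf-8')
    print(obj.key)
    print(report)
```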