Usage Metering
Ingestion Options
AWS SQS
Write events directly to an Amberflo-supplied and secured AWS SQS queue. Amberflo provides your account with a FIFO (first-in, first-out) AWS SQS queue that includes the necessary access rights and permissions to submit meter records. Meter records are automatically processed as they arrive in the queue; no additional configuration is needed on your end for ingestion. To get your SQS queue and S3 bucket provisioned, please contact us.

Format

The meter records sent to the SQS queue must follow the same standardized format used by the Amberflo Ingest API. Here are some examples:

```json
[{
  "customerId": "customer-123",
  "meterApiName": "ComputeHours",
  "meterValue": 5,
  "meterTimeInMillis": 1619445706909,
  "dimensions": {
    "region": "us-west-2",
    "az": "az1"
  }
}]
```

We also support NDJSON format (JSON objects separated by a newline):

```json
{ "customerId": "customer-123", "meterApiName": "ComputeHours", "meterValue": 5, "meterTimeInMillis": 1619445706909 }
{ "customerId": "customer-321", "meterApiName": "ComputeHours", "meterValue": 4, "meterTimeInMillis": 1619445712341 }
{ "customerId": "customer-123", "meterApiName": "ComputeHours", "meterValue": 1, "meterTimeInMillis": 1619445783456 }
```

(A minimal Python sketch that sends records in NDJSON form appears at the end of this article.)

Code Example

```python
import json
from uuid import uuid1

import boto3

# Meter records in the standard Amberflo ingest format.
records_to_send = [{
    'customerId': 'customer-123',
    'meterApiName': 'ComputeHours',
    'meterValue': 5,
    'meterTimeInMillis': 1619445706909,
    'dimensions': {
        'region': 'us-west-2',
        'az': 'az1'
    }
}]

# The Amberflo-provisioned FIFO queue for your account.
queue_url = 'https://sqs.us-west-2.amazonaws.com/624335419252/62-ingest.fifo'

sqs = boto3.client('sqs')
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps(records_to_send),
    MessageGroupId=str(uuid1()),
    MessageDeduplicationId=str(uuid1()),
)
```

Troubleshooting

Meter records sent to the SQS queue are first saved in an S3 bucket before being processed (see the AWS S3 ingestion option). If a failure occurs during processing, a failure report file is created in the S3 bucket with the reason for the error. A dead-letter queue is also set up to capture messages that could not be written to S3. This setup ensures reliable troubleshooting and traceability for all ingestion issues.
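If you need to investigate a failed ingestion, a sketch like the following can help. It is only an illustration: the dead-letter queue URL, bucket name, and failure-report prefix below are hypothetical placeholders, not values documented here; substitute the resources Amberflo provisions for your account. It uses only standard boto3 SQS and S3 calls.

```python
import boto3

# Hypothetical placeholders -- substitute the resources provisioned for your account.
DEAD_LETTER_QUEUE_URL = 'https://sqs.us-west-2.amazonaws.com/<account-id>/<your-dlq-name>.fifo'
INGEST_BUCKET = '<your-ingest-bucket>'
FAILURE_REPORT_PREFIX = '<failure-report-prefix>/'

sqs = boto3.client('sqs')
s3 = boto3.client('s3')

# 1. Peek at the dead-letter queue for messages that could not be written to S3.
response = sqs.receive_message(
    QueueUrl=DEAD_LETTER_QUEUE_URL,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=2,
)
for message in response.get('Messages', []):
    print('Dead-lettered message body:', message['Body'])

# 2. List and print any failure report files written to the S3 bucket during processing.
listing = s3.list_objects_v2(Bucket=INGEST_BUCKET, Prefix=FAILURE_REPORT_PREFIX)
for obj in listing.get('Contents', []):
    report = s3.get_object(Bucket=INGEST_BUCKET, Key=obj['Key'])
    print(obj['Key'])
    print(report['Body'].read().decode('utf-8'))
```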
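For completeness, here is a minimal variant of the code example above that sends records as an NDJSON message body, matching the newline-delimited format described under Format. It reuses the queue URL from the code example and assumes nothing beyond the standard boto3 SQS API; treat it as a sketch rather than a required pattern.

```python
import json
from uuid import uuid1

import boto3

records = [
    {'customerId': 'customer-123', 'meterApiName': 'ComputeHours',
     'meterValue': 5, 'meterTimeInMillis': 1619445706909},
    {'customerId': 'customer-321', 'meterApiName': 'ComputeHours',
     'meterValue': 4, 'meterTimeInMillis': 1619445712341},
]

# NDJSON: one JSON object per line, joined with newlines.
ndjson_body = '\n'.join(json.dumps(record) for record in records)

queue_url = 'https://sqs.us-west-2.amazonaws.com/624335419252/62-ingest.fifo'

sqs = boto3.client('sqs')
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=ndjson_body,
    MessageGroupId=str(uuid1()),
    MessageDeduplicationId=str(uuid1()),
)
```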