Write events directly to an Amberflo-supplied, secured AWS SQS queue

Amberflo provides you with an AWS SQS (FIFO) queue, with the access rights and controls needed to write meter records.

Amberflo automatically picks up meter records for processing as they arrive in the queue.

Please contact us to get the SQS queue and S3 bucket provisioned for your account.


The meter records you send to the SQS queue must use the same standardized format accepted by the ingest API. For example:

     "customerId": "customer-123",
     "meterApiName": "ComputeHours",
     "meterValue": 5,
     "meterTimeInMillis": 1619445706909,
     "dimensions": {
        "region": "us-west-2",
        "az": "az1"

We also support NDJSON (newline-delimited JSON), i.e. one JSON object per line:

{ "customerId": "customer-123", "meterApiName": "ComputeHours", "meterValue": 5, "meterTimeInMillis": 1619445706909 }
{ "customerId": "customer-321", "meterApiName": "ComputeHours", "meterValue": 4, "meterTimeInMillis": 1619445712341 }
{ "customerId": "customer-123", "meterApiName": "ComputeHours", "meterValue": 1, "meterTimeInMillis": 1619445783456 }

Code Example

import json
from uuid import uuid1
from datetime import date

import boto3

records_to_send = [{
    'customerId': 'customer-123',
    'meterApiName': 'ComputeHours',
    'meterValue': 5,
    'meterTimeInMillis': 1619445706909,
    'dimensions': {
        'region': 'us-west-2',
        'az': 'az1'
    }
}]

queue_url = 'https://sqs.us-west-2.amazonaws.com/624335419252/62-ingest.fifo'

sqs = boto3.client('sqs')

# FIFO queues require a message group ID and a deduplication ID.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps(records_to_send),
    MessageGroupId=str(date.today()),
    MessageDeduplicationId=str(uuid1()),
)



The records sent to the queue are saved to an S3 bucket before being processed (see the AWS S3 integration).

Troubleshooting therefore follows the same pattern: if ingestion fails, a file is created in S3 with the failure reason.

We also set up a dead-letter queue in case there are issues writing to the S3 bucket.
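To inspect the dead-letter queue, you can poll it with the standard boto3 `receive_message` call. The sketch below assumes you pass in your own SQS client and the DLQ URL provisioned for your account (`read_dead_letters` is a hypothetical helper name, not an Amberflo API):

```python
def read_dead_letters(sqs, dlq_url, max_messages=10):
    """Fetch up to max_messages message bodies from the dead-letter
    queue for inspection. `sqs` is a boto3 SQS client."""
    response = sqs.receive_message(
        QueueUrl=dlq_url,
        MaxNumberOfMessages=max_messages,  # SQS allows at most 10 per call
        WaitTimeSeconds=5,                 # long polling
    )
    return [message['Body'] for message in response.get('Messages', [])]

# Usage (dlq_url is a placeholder; use the URL provisioned for your account):
#   import boto3
#   bodies = read_dead_letters(boto3.client('sqs'), dlq_url='https://sqs.us-west-2.amazonaws.com/.../your-dlq.fifo')
```

Note that `receive_message` does not delete messages; after investigating a failure you can delete or redrive them as appropriate.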