AWS
Connect Manually
Here are the steps to manually set up your AWS integration with Amberflo.

**Prerequisites**

- You must be signed in to your AWS account. If you manage multiple AWS accounts, sign in using the root or billing account to ensure you are granting access to the data for your entire AWS footprint.
- You will need an S3 bucket. We recommend creating a new bucket specifically for this purpose, but you may use an existing one if preferred. This bucket is where AWS will store the report, and where Amberflo will retrieve it from. If you plan to create a new bucket, you do not need to make it ahead of time; you will have the option as part of the data export setup.

During the setup process, you will define the following values. Be sure to record them, as you will need to share them with Amberflo to complete the integration.

- **Bucket name**: the name of the S3 bucket where AWS will store your cost and usage reports.
- **Prefix name (S3 path prefix)**: this acts like a folder path within your bucket and determines where the files will be placed. For example, if you choose `amberflo-reports`, your reports will be stored at `s3://<your-bucket-name>/amberflo-reports/`. Important: this prefix does not need to be a physical folder you create; AWS will create it automatically when storing the files. Choose a clear and descriptive prefix to make it easier to identify and manage in the future. Do not include leading or trailing slashes (e.g., use `amberflo-reports`, not `/amberflo-reports/`).
- **Export name**: a label you choose to name the export in AWS. This will appear in your "Exports and dashboards" list.

**Step 1: Create the data export**

1. Log into the AWS console and navigate to "Billing and Cost Management".
2. On the left-hand menu you will see a section called "Cost and usage analysis". Select "Data Exports".
3. Choose "Create".

Create export:

- Choose "Standard data export" if it is not already selected for you.
- Choose an export name. This is the name that will appear in the "Exports and dashboards" list. Choose a descriptive name to easily identify
the report (e.g., "Monthly Report"). Save this name, as you will need to provide it to Amberflo in a later step.

Data table content settings:

- Select "FOCUS 1.0 with AWS columns".
- Leave the default column selection; it should default to all the columns being selected.

Data export delivery options:

- Compression type and file format: choose "Parquet" if it is not already selected for you.
- File versioning: select "Overwrite existing data export file".

Data export storage settings:

- Click the "Configure" button and it will launch a popup. You will have the option to create a new S3 bucket or use an existing one. ⚠️ Note: if you use an existing bucket, AWS will update its policy; make sure that this does not change permissions you don't want to lose.
- Set the S3 path prefix. This acts as a virtual folder within the bucket where reports will be stored. Be sure to save this prefix, as you will need to provide it to Amberflo in a later step.
- Click "Create".

**Step 2: Update S3 bucket permissions**

For Amberflo to access this data, you will need to update the permissions on the S3 bucket.

1. Navigate to S3 and select the bucket.
2. Choose the "Permissions" tab.
3. In the section named "Bucket policy", choose "Edit".
4. Update the following JSON to replace the `<bucket-name>` placeholder with your bucket name, then add it to the array under "Statement". Make sure that you add this statement and do not overwrite the existing ones; you will need a comma between the statements to keep the policy valid JSON.

```json
{
  "Sid": "read",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::994970626209:root"
  },
  "Action": [
    "s3:ListBucket",
    "s3:GetObject"
  ],
  "Resource": [
    "arn:aws:s3:::<bucket-name>/*",
    "arn:aws:s3:::<bucket-name>"
  ]
}
```

**Step 3: Provide bucket details to Amberflo**

To complete the process, provide Amberflo the following information:

- AWS region (us-west-2, ap-southeast-1, etc.)
- Bucket name
- Prefix name
- Export name
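The "add, do not overwrite" rule in step 2 is the part of this setup most prone to error when editing the policy JSON by hand. As a rough sketch of what that merge looks like, the snippet below appends the Amberflo read statement to an existing policy while preserving the statements already there. The existing "ExistingDeliveryPolicy" statement and the `my-report-bucket` name are hypothetical placeholders, not values from this guide; in practice you would paste the result into the console editor (or apply it with a tool of your choice).

```python
import json

def add_amberflo_statement(policy_json: str, bucket_name: str) -> str:
    """Append the Amberflo read statement from step 2 to an existing
    bucket policy, keeping all existing statements intact."""
    policy = json.loads(policy_json)
    statement = {
        "Sid": "read",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::994970626209:root"},
        "Action": ["s3:ListBucket", "s3:GetObject"],
        "Resource": [
            f"arn:aws:s3:::{bucket_name}/*",  # objects, for s3:GetObject
            f"arn:aws:s3:::{bucket_name}",    # the bucket itself, for s3:ListBucket
        ],
    }
    # setdefault covers the edge case of a policy with no Statement array yet.
    policy.setdefault("Statement", []).append(statement)
    return json.dumps(policy, indent=2)

# Hypothetical existing policy with one statement already in place.
existing = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "ExistingDeliveryPolicy",
        "Effect": "Allow",
        "Principal": {"Service": "billingreports.amazonaws.com"},
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::my-report-bucket/*",
    }],
})
merged = add_amberflo_statement(existing, "my-report-bucket")
print(merged)
```

After the merge, the policy contains both statements: the original delivery permission and the new Amberflo read permission.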

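The prefix rules from the prerequisites (no leading or trailing slashes, prefix joined under the bucket name) can also be checked programmatically before you record the values for Amberflo. A minimal sketch, using hypothetical bucket and prefix values:

```python
def report_location(bucket_name: str, prefix: str) -> str:
    """Return the s3:// location where reports will land for a given
    bucket and S3 path prefix, enforcing the no-slash rule above."""
    if prefix != prefix.strip("/"):
        raise ValueError("prefix must not include leading or trailing slashes")
    return f"s3://{bucket_name}/{prefix}/"

# Hypothetical values for illustration:
print(report_location("my-report-bucket", "amberflo-reports"))
# s3://my-report-bucket/amberflo-reports/
```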
