I need an AI Gateway
If you do not already have an AI Gateway, this guide walks you through deploying LiteLLM using Docker and connecting it to Amberflo. LiteLLM is the first supported gateway, and additional gateways will be available soon. Once deployed, the gateway will push real-time usage events to Amberflo, enabling attribution, budgets, cost guards, dashboards, and full AI governance and control.

Prerequisites

You will need:
- Docker installed and working
- An Amberflo account with access to Settings → AI Gateway
- A Postgres database (required by LiteLLM for teams, virtual keys, and routing)

Create a working directory

```shell
mkdir -p /litellm-gateway
cd /litellm-gateway
```

Pull the LiteLLM Docker image and create a config

Pull the LiteLLM proxy (AI Gateway) image:

```shell
docker pull ghcr.io/berriai/litellm:v1.79.0-stable
```

Create a basic LiteLLM config file. You will update this file later when setting provider credentials and model settings.

Download Amberflo artifacts

In your Amberflo account, go to Settings → AI Gateway and download:
- The Amberflo zip file (callback package)
- Your personalized .env file

Move the files into your working directory and unzip:

```shell
unzip <amberflo zip filename>
```

Your directory should now contain:
- config.yaml
- <your env file>.env
- amberflo/
  - __init__.py
  - (other callback files)

What the .env file contains

This file provides:
- Your Amberflo API key
- The Amberflo ingest endpoint
- The account identifier
- Callback batching / retry variables

Do not commit this file to Git.

Create and configure the Postgres database

LiteLLM requires Postgres for:
- Teams
- Virtual keys
- Storing models

Create or provision a Postgres instance, then construct your connection URL:

```
postgres://<username>:<password>@<host>:<port>/<db name>
```

Add this to your LiteLLM config (config.yaml) or supply it as an environment variable:

```
database_url: "postgres://<username>:<password>@<host>:<port>/<db name>"
```

Start the LiteLLM gateway container

```shell
docker run \
  --env-file /<your env file> \
  -v $(pwd)/config.yaml:/app/config.yaml \
  -v $(pwd)/amberflo:/app/amberflo \
  -p 4000:4000 \
  <litellm docker image>
```

This will:
- Load the Amberflo callback
- Load your environment file
- Connect LiteLLM to Postgres
- Start the gateway on port 4000

Check the logs for confirmation:

```shell
docker logs <container id>
```

You should see entries indicating that the Amberflo callback loaded successfully.

Complete provider and model configuration

LiteLLM will only emit usage once valid provider credentials and models are enabled. Follow the step-by-step instructions here to configure models and providers:

👉 https://enterprise-reference.amberflo.io/ai-gateway-setup

This includes setting:
- Model providers (OpenAI, Anthropic, Bedrock, etc.)
- Provider API keys
- Default models, versions, and routing

Test the integration

Step 1: Call the gateway

Use a virtual key assigned to a team:

```shell
curl http://<vm ip address>:4000/chat/completions \
  -H "Authorization: Bearer {virtual key}" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "{model identifier}",
    "messages": [
      {"role": "user", "content": "How are tokens calculated?"}
    ]
  }'
```

Step 2: Verify in Amberflo

Log in to Amberflo and check:
- AI Spend dashboard
- Business Units
- Cost breakdown pages

Events should appear in near real time.

Automatic business unit creation

Amberflo automatically creates a new business unit the first time a virtual key is used, mapping:
- LiteLLM team name → business unit name
- LiteLLM team ID → business unit ID

All future events for that key are attributed to that business unit. You can rename business units later if needed.
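The guide asks you to create a "basic LiteLLM config file" without showing one. Below is a minimal sketch of what config.yaml might look like, following LiteLLM's documented layout (model_list, litellm_settings, general_settings). The gpt-4o entry and the amberflo.handler callback path are placeholder assumptions; use the model names your providers support and the callback path documented in the downloaded Amberflo package.

```yaml
# Illustrative config.yaml sketch -- adjust models and keys to your setup.
model_list:
  - model_name: gpt-4o                    # alias that callers will request
    litellm_params:
      model: openai/gpt-4o                # provider/model identifier
      api_key: os.environ/OPENAI_API_KEY  # read the key from the environment

litellm_settings:
  # Placeholder: the real callback path ships inside the Amberflo zip package.
  callbacks: ["amberflo.handler"]

general_settings:
  # Alternatively, supply this as the DATABASE_URL environment variable.
  database_url: "postgres://<username>:<password>@<host>:<port>/<db name>"
```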
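Postgres connection URLs break when the password contains reserved characters such as `@` or `:`. A small Python sketch (a hypothetical helper, not part of LiteLLM) that percent-encodes credentials while assembling the connection URL described in the Postgres step above:

```python
from urllib.parse import quote_plus

def build_database_url(username, password, host, port, db_name):
    # Percent-encode the credentials so characters like '@' or ':'
    # in the password cannot be confused with URL delimiters.
    return (
        f"postgres://{quote_plus(username)}:{quote_plus(password)}"
        f"@{host}:{port}/{db_name}"
    )

url = build_database_url("litellm", "p@ss:word", "db.internal", 5432, "litellm")
print(url)  # postgres://litellm:p%40ss%3Aword@db.internal:5432/litellm
```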
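The curl call in Step 1 can also be made from application code. A Python sketch that builds the same headers and JSON body as the curl example; the virtual key and model name below are placeholders, and the gateway URL is whatever host and port you started the container on:

```python
import json

def build_chat_request(virtual_key, model, user_message):
    """Build the headers and JSON body for a /chat/completions call
    through the gateway, mirroring the curl example above."""
    headers = {
        "Authorization": f"Bearer {virtual_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    })
    return headers, body

headers, body = build_chat_request(
    "sk-virtual-123",           # placeholder virtual key
    "gpt-4o",                   # placeholder model identifier
    "How are tokens calculated?",
)
# POST headers/body to http://<vm ip address>:4000/chat/completions
# with any HTTP client to exercise the gateway.
```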
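The automatic mapping described in the business-unit section (LiteLLM team name → business unit name, team ID → business unit ID) can be sketched as a tiny transform. The dict shapes here are illustrative only, not Amberflo's actual API:

```python
def business_unit_from_team(team):
    # Mirrors the documented attribution mapping:
    #   LiteLLM team name -> business unit name
    #   LiteLLM team ID   -> business unit ID
    return {"id": team["team_id"], "name": team["team_name"]}

bu = business_unit_from_team({"team_id": "team-42", "team_name": "Search"})
print(bu)  # {'id': 'team-42', 'name': 'Search'}
```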
