# AI Gateway Integration
## I need an AI Gateway
If you do not already have an AI Gateway, this guide walks you through deploying LiteLLM using Docker and connecting it to Amberflo. LiteLLM is the first supported gateway, and additional gateways will be available soon. Once deployed, the gateway will push real-time usage events to Amberflo, enabling attribution, budgets, cost guards, dashboards, and full AI governance and control.

## Prerequisites

You will need:

- Docker installed and running
- An Amberflo account
- A Postgres database (required by LiteLLM for teams, virtual keys, and routing)

## Create a working directory

```shell
mkdir litellm-gateway
cd litellm-gateway
```

## Pull the LiteLLM Docker image and create a config

Pull the LiteLLM Proxy (AI Gateway) image:

```shell
docker pull ghcr.io/berriai/litellm:v1.79.0-stable
```

Create a basic LiteLLM config file in your folder and name it `config.yaml`:

```yaml
# LiteLLM Proxy configuration (config.yaml)

# General / proxy-wide settings
general_settings:

litellm_settings:
  callbacks: ["amberflo.litellm_callback"]
```

## Download Amberflo artifacts

You can find the Amberflo callback zip and the environment file for download in the AI Gateway setup wizard. Download:

- The Amberflo zip file (callback package)
- Your personalized `.env` file

Move the files into your working directory and unzip:

```shell
unzip <amberflo-zip-filename> -d .
```

Your directory should now contain:

```
config.yaml
<your-env-file>.env
amberflo/
  __init__.py
  (other callback files)
```

### What the .env file contains

This file provides:

- Your Amberflo API key
- The Amberflo ingest endpoint
- The account identifier
- Callback batching / retry variables

Add the following items to the `.env` file:

- `LITELLM_MASTER_KEY`: a required secret used to encrypt and decrypt sensitive fields stored in the LiteLLM database. This key must be a long, random string. Changing it will invalidate previously encrypted data.
- `LITELLM_SALT_KEY`: a cryptographic salt used for hashing and securing stored values. This must also be a long, random string. Do not reuse the same value across environments.
- `UI_USERNAME`: the username for logging into the LiteLLM admin UI. This is the credential used for the web dashboard, not for model authentication.
- `UI_PASSWORD`: the password for logging into the LiteLLM admin UI. Choose a strong, random password. If this value changes, existing sessions become invalid.
- `DATABASE_URL`: the full Postgres connection string used by LiteLLM's Prisma client. It must be unquoted and begin with `postgresql://`. It defines the database host, port, user, password, and database name.
- `STORE_MODEL_IN_DB`: allows you to add models using the admin UI instead of only via the config. This should be set to `true`.

```
LITELLM_MASTER_KEY=
LITELLM_SALT_KEY=
UI_USERNAME=
UI_PASSWORD=
DATABASE_URL=
STORE_MODEL_IN_DB=true
```

Do not commit this file to Git.

## Create and configure the Postgres database

LiteLLM requires Postgres for:

- Teams
- Virtual keys
- Storing models

Create or provision a Postgres instance, then construct your connection URL:

```
postgresql://<username>:<password>@<host>:<port>/<db-name>
```

Make sure your Postgres DB is running, then update your environment file to include the connection string:

```
DATABASE_URL=postgresql://<username>:<password>@<host>:<port>/<db-name>
```

## Start the LiteLLM Gateway container

Run:

```shell
docker run \
  --env-file .env \
  --volume "$(pwd)/amberflo:/app/amberflo:ro" \
  --volume "$(pwd)/config.yaml:/app/config.yaml:ro" \
  --publish 4000:4000 \
  ghcr.io/berriai/litellm:v1.79.0-stable \
  --config /app/config.yaml
```

This will:

- Load the Amberflo callback
- Load your environment file
- Connect LiteLLM to Postgres
- Start the gateway on port 4000

## Complete provider and model configuration

LiteLLM will only emit usage once valid provider credentials and models are enabled. Follow the step-by-step instructions to configure models and providers. This includes setting:

- Model providers (OpenAI, Anthropic, Bedrock, etc.)
- Provider API keys
- Default models, versions, and routing

You can find step-by-step instructions on the following page: docid:wnokz5lhwnlhy5ysisv5h

## Test the integration

### Step 1: Call the gateway

Use a virtual key assigned to a team:

```shell
curl http://<your-vm-ip>:4000/chat/completions \
  -H "Authorization: Bearer <your-virtual-key>" \
  -H "Content-Type: application/json" \
  -d '{"model": "<model-id>", "messages": [{"role": "user", "content": "How are tokens calculated?"}]}'
```

### Step 2: Verify in Amberflo

Log in to Amberflo and check:

- AI Spend dashboard
- Business units
- Cost breakdown pages

Events should appear in near real time.

## Automatic business unit creation

Amberflo automatically creates a new business unit the first time a virtual key is used, mapping:

- LiteLLM team name → business unit name
- LiteLLM team ID → business unit ID

All future events for that key are attributed to that business unit. You can rename business units later if needed.
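The guide above requires `LITELLM_MASTER_KEY` and `LITELLM_SALT_KEY` to be long, random strings. A minimal sketch of generating them with `openssl` (the `sk-` prefix on the master key follows LiteLLM's key convention; adjust if your setup differs):

```shell
# Generate long, random values for the two LiteLLM secrets.
# "openssl rand -hex 32" emits 64 hex characters of randomness.
LITELLM_MASTER_KEY="sk-$(openssl rand -hex 32)"
LITELLM_SALT_KEY="$(openssl rand -hex 32)"

# Paste these lines into your .env file.
echo "LITELLM_MASTER_KEY=${LITELLM_MASTER_KEY}"
echo "LITELLM_SALT_KEY=${LITELLM_SALT_KEY}"
```

Keep both values out of version control, and remember that rotating the master key invalidates previously encrypted data.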
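The `DATABASE_URL` connection string can be assembled from its parts before writing it to the `.env` file. A small sketch with placeholder values (`litellm`, `changeme`, `localhost`, etc. are illustrative examples, not real credentials):

```shell
# Example Postgres details -- replace with your own.
DB_USER="litellm"
DB_PASSWORD="changeme"
DB_HOST="localhost"
DB_PORT="5432"
DB_NAME="litellm"

# Prisma expects an unquoted postgresql:// URL in the .env file.
DATABASE_URL="postgresql://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "DATABASE_URL=${DATABASE_URL}"
```

If your password contains characters such as `@` or `:`, remember to percent-encode them so the URL parses correctly.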
