# AI Gateway - LiteLLM: Testing your integration
If your AI Gateway is correctly connected to Amberflo, every call you make through the gateway to an LLM will generate a meter event in Amberflo. A meter event is a structured log that records what was used, how much, by whom, and when. Seeing this event in Amberflo is the definitive confirmation that your integration is working.

To produce a meter event, your gateway must have the following in place:

- At least one model configured
- A provider configured, or an API key supplied directly when defining the model
- At least one team, which Amberflo automatically converts into a business unit (cost center)
- A virtual key associated with that team; this is the credential you will use when making the API call

Once these are set up, you will make a chat completion request through the AI Gateway using the virtual key. That request will trigger a meter event that should appear in Amberflo within a couple of minutes.

If you already have all of these elements configured in your existing LiteLLM deployment, skip to the section "Making your first API call". If not, follow the steps below to create a provider, model, team, and virtual key.

## Add LLM provider(s)

You will need to provide LLM provider credentials using the web interface. LLM provider credentials are the authentication details required to connect the AI Gateway to your LLM providers, such as OpenAI, AWS Bedrock, Azure OpenAI, and others. These credentials are stored securely and used by the gateway to route requests to the correct provider. Adding provider credentials lets you centralize access management and removes the need for individual users or teams to manage their own keys.

1. Open the **Models + Endpoints** tab from the left-hand navigation menu.
2. Click the **LLM Credentials** tab.
3. Click **Add Credential**.
4. Select the provider you want to configure (e.g., OpenAI).
5. Fill in the required fields:
   - **Credential Name**: a friendly name for this provider instance (e.g., `openai-default`)
   - **Provider**: select the provider you want to add an API key for from the drop-down
   - **API Key**: the access token or API key for the provider
6. When you have entered these fields, click **Add Credential**.

You can add additional providers by repeating the steps above. After setting up your providers, the next step is to add models that reference those providers and specify the configuration details for each.

## Add LLM model name(s)

After providers are set up, models determine the specific endpoints the gateway makes available. A model configuration links a model name (for example, gpt-4o) to a provider and any associated credentials or parameters. This allows multiple providers or variants of the same model to be managed consistently through the gateway.

1. Open the **Models + Endpoints** tab from the left-hand navigation menu.
2. Click the **Add Model** tab.
3. Choose the provider from the list. This populates the list of model names.
4. Click the list to the right of **Model Name(s)**. This is a long list containing every available model, so use the fuzzy search to narrow it down, find the model you are looking for, and click on it. The model appears in the search bar. This menu is multi-select, so you can add all of the models from this provider at once.
5. The models you've selected automatically show up in the **Model Mappings** table. This allows you to expose a different public name than the model identifier used under the hood.
6. Our recommendation is to create credentials to use here, rather than entering API keys as part of the model setup. Click the box to the right of **Existing Credentials** and select the appropriate credentials from the list.
7. When done, click the **Add Model** button in the bottom right of the page.

Once a model is set up, it becomes available for use through the AI Gateway's OpenAI-compatible API. Any virtual API key with access to that model can now send requests to it, and the gateway will automatically route the call to the correct provider using the associated credentials.
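Before moving on to teams and keys, you can sanity-check that the gateway is actually serving the models you added. A minimal sketch, assuming the gateway is reachable at `http://<vm ip address>:4000` (as in the curl example later in this guide) and that you have a key the gateway accepts (the gateway's admin key, or a virtual key once you create one below); the OpenAI-compatible `GET /v1/models` route referenced in the troubleshooting section should list the models visible to that key:

```bash
# List the models this key can access through the gateway's
# OpenAI-compatible API. Replace both placeholders with your values.
curl http://<vm ip address>:4000/v1/models \
  -H "Authorization: Bearer {virtual key}"
```

If the model you just added appears in the response, the provider linkage and credentials are working.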
## Set up teams (cost centers)

Teams in the AI Gateway let you group users and API keys under a shared identity for easier access control and cost management. Teams map to business units (a.k.a. cost centers) in Amberflo. You can assign budgets, rate limits, and model permissions at the team level, ensuring consistent policies across multiple users. Teams are useful for managing access by department, project, or external partner while tracking usage and spend in a centralized way.

### Add a team via the web interface

1. Open the **Teams** link from the left-hand navigation menu.
2. Click **Create New Team**.
3. Enter a name for your team. A team can be used simply as a tool to help organize users; the name is the only required field.

⚠️ It is extremely important that the name of the team match exactly the business unit ID set in the Amberflo app. This is how we allocate the costs. The allowed character set is alphanumeric characters as well as _ (underscore), - (dash), . (point), + (plus), and @ (at). The name must start and end with an alphanumeric character, with a maximum of 200 characters. ⚠️

If that is all you are looking for, you can simply hit the **Create Team** button at the bottom right of the window. Otherwise, continue to the team limits section below.

### Team limits (optional)

You can set limits for your team using the following mechanisms (for a scripted alternative, see the sketch after this list):

- **Models**: you can choose to expose only a subset of the models you have added to the gateway to this team. For example, you may have use cases that require quicker, smaller responses, so you may limit this team to accessing only smaller models like gpt-5-nano.
- **Max Budget**: you can restrict a team to a set dollar amount; its access will be blocked when it exceeds that budget.
- **Reset Budget**: this lets you specify how often the budget resets. The options are daily, weekly, and monthly.
- **Tokens per minute**: limits how many tokens the team can use per minute. This can be used as a throttling method.
- **Requests per minute**: another throttling method. This can be especially useful during development, where it is easy for costs to get out of hand due to misconfigurations.

Once you have set all the values you are interested in, click **Create Team** in the bottom right of the window.
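If you prefer to script team creation, for example to keep team names in lockstep with the business unit IDs defined in Amberflo, LiteLLM also exposes a management API. The route and field names below (`/team/new`, `team_alias`, `max_budget`, `budget_duration`, `tpm_limit`, `rpm_limit`) reflect LiteLLM's management API as we understand it and may differ across versions; treat this as a sketch and verify against your gateway's own API reference before automating it:

```bash
# Create a team whose alias matches the Amberflo business unit ID,
# with the same limits the web UI exposes. Field names follow LiteLLM's
# management API and may vary by version.
curl http://<vm ip address>:4000/team/new \
  -H "Authorization: Bearer {admin virtual key}" \
  -H "Content-Type: application/json" \
  -d '{
    "team_alias": "engineering-prod",
    "models": ["gpt-5-nano"],
    "max_budget": 100.0,
    "budget_duration": "30d",
    "tpm_limit": 100000,
    "rpm_limit": 100
  }'
```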
## Virtual keys

Virtual keys are how you provide access to the resources you make available in the AI Gateway. Virtual keys also give you a more granular view into your usage and costs. You should create a new virtual key for each use case; for example, create one for each environment an application is deployed in. This makes it easy to see the cost breakdown between development and production environments.

Virtual keys can also be created to grant admin access to the AI Gateway. For example, you can create a virtual key that provides access to the AI Gateway itself, so that existing internal tools can make changes to the gateway.

### Adding virtual keys via the web interface

1. Open the **Virtual Keys** link from the left-hand navigation menu.
2. Click **Create New Key**.
3. Select **You** as the owner.
4. If you want to assign this key to a team you previously created, select the team from the **Team** dropdown. If not, you do not need to select anything here.
5. Select the models you would like this key to have access to. If you want this key to have access to all the models, you do not need to select them individually; you can choose **All Team Models**.
6. Finally, choose the key type: is this key only for accessing LLMs, management APIs, or both? If the answer is both, choose **Default**; otherwise you can narrow its access to one of the two use cases.
7. Click **Create Key**. A popup shows you the key value and allows you to copy it. You must copy it now: the key will not be shown again, and you would otherwise need to create a new one.
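Virtual keys can also be minted programmatically, which is handy for CI pipelines or per-environment provisioning. A minimal sketch using LiteLLM's `/key/generate` management route; the route and the `team_id`, `models`, and `key_alias` fields are based on LiteLLM's management API as we understand it and may vary by version, so check your gateway's API reference before relying on it:

```bash
# Generate a virtual key tied to a team and limited to specific models.
# Route and field names follow LiteLLM's management API; verify on your version.
curl http://<vm ip address>:4000/key/generate \
  -H "Authorization: Bearer {admin virtual key}" \
  -H "Content-Type: application/json" \
  -d '{
    "team_id": "{team id}",
    "models": ["gpt-5-nano"],
    "key_alias": "engineering-dev"
  }'
```

As with keys created in the web UI, the key value is returned once in the response and should be stored immediately.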
## Making your first API call

Now we have all the pieces to make an API call to one of the models we've set up and see the data captured. Simply make a call to `/chat/completions`: you should receive a response, and you should see the activity logged with a meter event.

You can use the following curl command as a template to test it out. Replace the virtual key placeholder with the key you created in the previous step, and replace the model placeholder with a model you have set up.

```bash
curl http://<vm ip address>:4000/chat/completions \
  -H "Authorization: Bearer {virtual key}" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "{model identifier}",
    "messages": [
      {"role": "user", "content": "How are tokens calculated?"}
    ]
  }'
```

## Verifying the data in Amberflo

If your API call returned a success response, the final step is confirming that Amberflo received and processed the meter event.

1. Log in to Amberflo.
2. Navigate to the AI usage and cost dashboard.
3. Within two minutes, you should see at least one new usage entry corresponding to the call you just made through the gateway.

You should expect to see:

- The model you called
- The provider (for example, OpenAI or Azure OpenAI)
- Input and output token counts
- The team, mapped to a business unit
- Cost (if you have configured contracted or public rates)

If you see the event in Amberflo, your integration is confirmed. If you do not see the event within two minutes, use the troubleshooting section below.

## Troubleshooting API call failures

If the API call itself failed, the issue is almost always within the AI Gateway configuration, not Amberflo. These are the common LiteLLM failure modes and their likely causes.

### Authentication errors

Examples: `401 Unauthorized`, `invalid api key`, missing or invalid virtual key.

Causes:
- Wrong virtual key in the `Authorization: Bearer …` header
- Virtual key not associated with the team
- Virtual key disabled or expired
- Typo in the header or missing "Bearer"

Fix: reconfirm the virtual key via `/v1/keys`, ensure it is tied to the correct team, and retry the call.

### Provider errors

Examples: `provider not found`, `provider is not configured`, missing provider API key.

Causes:
- Provider was never created
- Provider exists but has no API key
- API key is incorrect or expired
- Model references a provider that doesn't exist

Fix: check the provider list (`GET /v1/providers`), re-enter the provider's API key, and recreate the provider if needed.

### Model errors

Examples: `model not found`, `invalid model name`, `unsupported model`.

Causes:
- Model was not created
- Model name doesn't match the provider's naming
- Model references the wrong provider
- Provider does not support the specified model

Fix: check models (`GET /v1/models`), confirm the model name exactly matches what the provider expects, and recreate the model with the correct provider linkage.

### Team or virtual key errors

Examples: `team not found`, key not associated with a team.

Causes:
- Team not created
- Virtual key created but not assigned to any team
- Virtual key assigned to the wrong team

Fix: check teams (`GET /v1/teams`), check keys (`GET /v1/keys`), and reassign or recreate the key as needed.

### Outbound network errors

Examples: the API call succeeds, but no meter events show in Amberflo within two minutes; logs show errors hitting the Amberflo callback endpoint.

Causes:
- Firewall blocking outbound HTTPS
- Incorrect Amberflo ingest URL
- Missing or incorrect LiteLLM master key or salt key
- Gateway cannot reach Amberflo due to corporate proxy rules

Fix: curl-test the Amberflo ingest endpoint from the host, reconfirm environment variables, check logs for connection failures, and ensure outbound HTTPS traffic is allowed.

If none of the above applies and you still do not see events in Amberflo, the next place to check is the gateway logs. LiteLLM logs are usually explicit about configuration issues: missing keys, misnamed models, or connection errors.
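For the outbound-network case, a quick reachability probe from the gateway host can separate firewall problems from configuration problems. The URL below is an assumption; substitute the Amberflo ingest endpoint your deployment is actually configured with. Any HTTP status in the response, even a 4xx, proves the host can reach Amberflo, while a timeout or DNS error points at firewall or proxy rules:

```bash
# Probe outbound HTTPS from the gateway host to Amberflo.
# Replace the URL with your configured ingest endpoint (this one is an example).
curl -sS -o /dev/null -w "HTTP %{http_code}\n" https://app.amberflo.io/ingest
# Any status code (even 401/405) means connectivity is fine;
# "Could not resolve host" or a timeout means traffic is being blocked.
```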
