Quick Start
5 min
Get up and running in three easy steps:

1. Set up credentials and models for your LLM providers.
2. Create a workload to track and attribute usage and cost.
3. Generate a virtual key to control access.

Once these steps are complete, you can immediately make API calls through the gateway and see costs attributed to the correct workload. You can create as many workloads and keys as needed to reflect teams, applications, customers, or environments.

## Step 1: Set up providers & models

### Add a provider

1. In the left-hand navigation, go to **Model Management**.
2. Select the **Providers** tab and click **Add Provider** in the upper right.
3. Enter a name for the provider.
4. Select a provider, such as Azure OpenAI, OpenAI, AWS Bedrock, or others.
5. Enter the required authentication details for the selected provider.
6. Click **Save Provider**.

This provider credential is stored securely and used by the gateway to access the provider on your behalf.

### Add models

1. In **Model Management**, click the **Models** tab.
2. Click **Add Model** in the upper right.
3. Select the provider you just created.
4. Choose a model available from that provider.
5. Optionally, set a model alias. This is the name you will reference in API calls.
6. Click **Save**.

You now have a provider credential and at least one model configured in the gateway.

## Step 2: Create a workload

1. In the left-hand navigation, go to **Access Management** and select the **Workloads** tab.
2. Click **Create Workload** in the upper right.
3. Enter a workload name.
4. Review or edit the automatically generated workload ID.
5. Select the models this workload is allowed to access.
6. Click **Create Workload**.

A workload represents the unit of attribution: all usage, cost, and metrics generated through keys associated with this workload roll up here.

## Step 3: Create a virtual key

1. In **Access Management**, click the **Keys** tab.
2. Click **Add Key**.
3. Select the workload you just created.
4. Enter a name for the key.
5. Click **Create Key**.

The key value is shown only once. Copy it and store it securely; if you lose it, you must generate a new key.

## Make your first API call

Use the virtual key in an API request to the gateway and specify one of the models allowed for the workload. Within a minute, usage and cost data will begin appearing in the dashboards, automatically attributed to the correct workload.

To make things easy, you can use the Model Playground, which provides a built-in chat interface for quick testing.

## Next steps

From here, you can:

- Add more workloads for additional teams, apps, or customers.
- Apply custom rates for internal chargeback or external billing.
- Monitor spend, usage trends, and optimization opportunities.
- Enforce governance through controlled access and attribution.

That is all you need to start using the Amberflo AI Gateway.
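The first API call can be sketched in Python. This is a minimal illustration, not the gateway's documented API: the endpoint URL, key value, and model alias below are placeholder assumptions, so substitute the values from your own deployment. The sketch assumes the gateway accepts an OpenAI-style chat completion request authenticated with the virtual key.

```python
import json

# All values below are placeholders, not documented Amberflo endpoints:
# substitute your own gateway URL, virtual key, and model alias.
GATEWAY_URL = "https://your-gateway.example.com/v1/chat/completions"
VIRTUAL_KEY = "your-virtual-key"  # the key value copied in Step 3
MODEL_ALIAS = "my-model-alias"    # the alias configured in Step 1

def build_chat_request(prompt: str) -> dict:
    """Assemble an OpenAI-style chat completion request for the gateway."""
    return {
        "url": GATEWAY_URL,
        "headers": {
            # The virtual key authenticates the call and attributes
            # its usage and cost to the key's workload.
            "Authorization": f"Bearer {VIRTUAL_KEY}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": MODEL_ALIAS,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

request = build_chat_request("Hello from the gateway!")
print(request["headers"]["Authorization"])
```

The same request can be sent with curl or any HTTP client, or by pointing an OpenAI-compatible SDK's base URL at the gateway.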
