Configure Unified LLM Interface
Amberflo provides a unified AI gateway that lets you securely access, manage, and govern LLM usage across multiple providers through a single interface. At its core, Amberflo standardizes how your applications interact with LLMs regardless of which provider you use. API calls made through the gateway follow an OpenAI-compatible format, which allows you to test, switch, and compare models across providers with minimal changes to your application code.

The governance model in Amberflo is built around four core concepts:

- Credentials
- Models
- Workloads
- Virtual keys

Together, these define how access is granted, what models are available, and how usage is tracked.

Unified access to LLM providers

Amberflo acts as a single entry point for multiple LLM providers such as OpenAI, Azure OpenAI, AWS Bedrock, and others. Instead of embedding provider-specific API keys and SDKs directly into your applications, all requests are routed through the Amberflo gateway. This provides several advantages:

- A consistent API interface across providers
- The ability to switch or add providers without refactoring application code
- Centralized access control and governance
- No need to share provider credentials with application users or services

All provider credentials are managed centrally within Amberflo and never exposed to downstream consumers.

Provider credentials: secure provider access

Provider credentials define how Amberflo connects to an external LLM provider. For each provider you want to use, you create a credential in Amberflo and supply the required authentication details, such as an API key or cloud credentials. These provider credentials are securely stored and used by the gateway to make requests on your behalf.

Provider credentials are not used directly by applications or users. Instead, they act as the foundation that enables controlled access through models, workloads, and keys. This separation allows you to rotate provider credentials, change providers, or adjust access without touching application code.

Models: defining what is available

Models represent the specific LLMs you want to make available through the gateway. Each model is associated with a provider credential and maps to a specific model offered by that provider. You can configure as many models as needed, including models from different providers.

Models are what workloads are granted access to. By defining models explicitly, you control exactly which LLMs can be used and how they are referenced when making API calls.

Workloads: who is using the models

Workloads represent the consumers of LLMs in your system. A workload might correspond to:

- A chatbot
- A backend service
- A data processing pipeline
- A customer-facing feature
- An internal tool or environment

When you create a workload, you explicitly select which models it is allowed to access. These models can come from multiple providers; a single workload might use one model today and a different model tomorrow without changing how access is managed.

Workloads are the primary unit of attribution and governance: all usage and activity is tracked at the workload level.

Virtual keys: controlled gateway access

Virtual keys, also referred to as access keys, are how applications authenticate with the Amberflo gateway. Each virtual key is tied to a specific workload. When an application uses a key to make requests, it automatically inherits:

- The workload's model access rules
- The associated provider credentials
- Centralized governance and tracking

Applications never receive direct provider API keys. If a key needs to be rotated or revoked, it can be done without impacting provider credentials or other workloads.

How it all fits together

Conceptually, the flow looks like this:

1. You create provider credentials to securely connect to LLM providers like OpenAI, AWS Bedrock, etc.
2. You define models that map to specific LLMs from those providers.
3. You create workloads to represent applications or consumers and assign allowed models.
4. You generate access keys for workloads and use those keys in your applications.
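With an access key from step 4, your application sends requests in the OpenAI-compatible chat-completions format. The sketch below builds such a request with only the Python standard library; the gateway URL and key are placeholders, not real Amberflo endpoints, so substitute the values from your own account:

```python
import json
import urllib.request

# Illustrative placeholders -- replace with your gateway URL and virtual key.
GATEWAY_URL = "https://<your-amberflo-gateway>/v1/chat/completions"
VIRTUAL_KEY = "<your-virtual-key>"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat-completions request.

    Only the model name changes when you switch models or providers;
    the payload shape stays the same.
    """
    body = json.dumps({
        "model": model,  # a model name configured in Amberflo
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        GATEWAY_URL,
        data=body,
        headers={
            # The virtual key authenticates with the gateway; the real
            # provider API key never leaves Amberflo.
            "Authorization": f"Bearer {VIRTUAL_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("gpt-4o", "Hello!")
# urllib.request.urlopen(req) would send the request through the gateway.
```

Because the payload is provider-agnostic, moving a workload from one configured model to another is a one-string change in the calling code.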
From that point on, all LLM requests go through the Amberflo gateway using a unified API, with access governed by workload and model configuration.

What's next

The following sections dive deeper into each of these concepts:

- Creating and managing credentials
- Configuring models
- Defining workloads
- Generating and rotating access keys

Together, these form the foundation of AI governance in Amberflo.
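As a recap of the chain those sections will build on, here is a minimal in-memory sketch of how the four concepts relate. The class and function names are hypothetical illustrations, not the real Amberflo API: a virtual key inherits its workload's model access, which resolves to a centrally held provider credential.

```python
from dataclasses import dataclass, field

@dataclass
class ProviderCredential:
    name: str     # e.g. "openai-prod"
    api_key: str  # stored centrally, never handed to applications

@dataclass
class Model:
    name: str  # the name used in API calls, e.g. "gpt-4o"
    credential: ProviderCredential

@dataclass
class Workload:
    name: str
    allowed_models: list = field(default_factory=list)

@dataclass
class VirtualKey:
    token: str
    workload: Workload

def resolve(key: VirtualKey, model_name: str) -> ProviderCredential:
    """Check that the key's workload may use the model, then return the
    provider credential the gateway would use for the upstream call."""
    for model in key.workload.allowed_models:
        if model.name == model_name:
            return model.credential
    raise PermissionError(
        f"workload {key.workload.name!r} may not use {model_name!r}"
    )

# 1. credential -> 2. model -> 3. workload -> 4. virtual key
cred = ProviderCredential("openai-prod", "sk-...")
gpt4o = Model("gpt-4o", cred)
chatbot = Workload("support-chatbot", [gpt4o])
vkey = VirtualKey("amb-virt-123", chatbot)

assert resolve(vkey, "gpt-4o") is cred  # allowed: workload grants gpt-4o
```

Rotating or revoking `vkey` touches nothing else in the chain, which is why keys can be managed per workload without disturbing provider credentials.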
