# LLM Models
Models are one of the core building blocks of AI Governance in Amberflo. They define which specific LLMs are available through the gateway and act as the first layer of control over access to provider capabilities. Rather than handing out raw provider API keys and implicitly granting access to everything a provider offers, Amberflo allows you to explicitly choose the subset of models that can be used. This gives you control, flexibility, and a clean abstraction layer between your applications and the underlying providers.

## Why models exist

Most LLM providers expose dozens of models, each with different cost, performance, and capability characteristics. Giving applications unrestricted access to all of them is rarely desirable.

Models allow you to:

- Explicitly control which provider models are available
- Standardize how models are referenced across applications
- Swap underlying models without changing application code
- Establish the first level of access restriction before workloads are applied

Models define what the gateway can access; workloads define who is allowed to access it.

## Viewing existing models

To manage models:

1. Go to Model Management in the left-hand navigation.
2. Select the Models tab.

You will see a list of all models currently configured in the gateway. For each model, the list shows:

- Provider
- Model name
- Model alias
- Credential used to access the provider
- Creation date

This view represents the complete set of models that the Amberflo gateway is capable of routing requests to.

## Deleting models

You can delete models directly from the list. Deleting a model removes access to that model entirely: any workload or access key that previously relied on the model will no longer be able to use it. If a model is actively in use, deleting it will immediately break access for those consumers. Before deleting a model, make sure it is no longer required by any workloads.

## Adding a new model

To add a model:

1. Click Add Model in the upper right.
2. Select a credential that you previously created. The credential list displays both the provider and the credential name, making it easy to select the correct one.
3. Choose a model from the list of models available for that provider. Some providers expose a large number of models; you can use the search field to quickly filter the list by typing part of the model name.
4. Select the model. Once selected, the Alias field will appear and will default to the provider's model name.

## Model aliases

A model alias lets you abstract provider-specific model names away from your application code. Instead of hardcoding a specific model name in your frontend or service, you reference the alias when making API calls through the gateway. For example:

- Alias: `support-chatbot-model`
- Underlying model: `gpt-5-pro`

Later, you can replace the underlying model with a different one while keeping the same alias. Your application code does not need to change, but the model's behavior, cost, or provider can. This makes it significantly easier to:

- Experiment with different models
- Upgrade or downgrade models
- Switch providers
- Manage cost or performance tradeoffs centrally

## Saving the model

Once you have selected the model and set an alias, click Save. The model will now appear in the models list and will be available for assignment to workloads.

## Models vs. workloads

It is important to understand the distinction:

- Models define what the gateway has access to.
- Workloads define which models a specific consumer is allowed to use.

Adding a model does not automatically grant access to it. Access is only granted when the model is explicitly assigned to a workload.

## What's next

After configuring models, the next step is to create workloads. Workloads represent the consumers of LLMs and allow you to restrict model access, track usage, and apply governance at a granular level. Continue to the Workloads section to define how models are used by your applications.
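To make the alias indirection described above concrete, here is a minimal Python sketch of the idea. The dict-based alias table, the `resolve_model` helper, and the `claude-sonnet` model name are all illustrative, not Amberflo's actual API; the point is only that application code holds a stable alias while the underlying model can change on the gateway side.

```python
# Hypothetical sketch of alias-to-model resolution.
# Gateway-side configuration: alias -> underlying provider model.
MODEL_ALIASES = {
    "support-chatbot-model": "gpt-5-pro",
}


def resolve_model(alias: str) -> str:
    """Resolve an alias to the provider model it currently points at."""
    try:
        return MODEL_ALIASES[alias]
    except KeyError:
        raise ValueError(f"no model configured for alias {alias!r}")


# Application code only ever mentions the alias.
print(resolve_model("support-chatbot-model"))  # -> gpt-5-pro

# Swapping the underlying model is a gateway-side change only;
# the application keeps calling the same alias.
MODEL_ALIASES["support-chatbot-model"] = "claude-sonnet"
print(resolve_model("support-chatbot-model"))  # -> claude-sonnet
```

Because the alias is the only name that crosses the application boundary, upgrading, downgrading, or switching providers becomes a configuration change rather than a code change.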

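The two-layer control described in the models vs. workloads section can be sketched as a simple access check. This is an illustrative model of the behavior, not Amberflo's implementation; the `can_use` helper and the workload and model names are invented for the example.

```python
# Hypothetical sketch of the two layers of control:
# models define what the gateway can reach; workloads define which
# of those models a given consumer may use.

# Models configured in the gateway (referenced by alias).
GATEWAY_MODELS = {"support-chatbot-model", "summarizer-model"}

# Models explicitly assigned to each workload.
WORKLOAD_ALLOWED = {
    "customer-support-app": {"support-chatbot-model"},
}


def can_use(workload: str, alias: str) -> bool:
    """A request passes only if the model exists in the gateway AND
    has been explicitly assigned to the requesting workload."""
    return alias in GATEWAY_MODELS and alias in WORKLOAD_ALLOWED.get(workload, set())


print(can_use("customer-support-app", "support-chatbot-model"))  # True
print(can_use("customer-support-app", "summarizer-model"))       # False: not assigned
```

Note that `summarizer-model` is denied even though it is configured in the gateway: adding a model makes it reachable, but access still requires an explicit workload assignment.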