See Where the Money Goes
AI Spend Dashboards
AI Spend Analytics
The AI Spend Analytics page is where you analyze why you are spending what you are spending. The Summary page tells you what happened; Analytics tells you how and where it happened. Everything here is still driven directly from AI Gateway usage: no estimates, no billing lag.

Time Range and Scope

All data on this page is scoped to the date range selected in the upper right. Supported ranges:

- Week to date
- Month to date
- Quarter to date
- Year to date
- Last 7, 14, or 30 days
- Custom date range

Changing the date range recalculates all charts, metrics, and breakdowns on the page.

Spend by AI Provider (Top Section)

At the top of the page you see spend by AI provider over time. This chart shows:

- Daily spend by provider, such as OpenAI, Anthropic, Amazon Bedrock, Azure AI, and others
- A stacked or overlaid view that makes changes in provider mix obvious
- Trends that reveal routing changes, adoption shifts, or provider concentration

Spend vs. Tokens Toggle

By default, the chart shows spend. You can switch to tokens to view raw token usage by provider instead. This distinction matters:

- Spend highlights financial impact; tokens highlight usage volume
- High token usage with low spend often indicates efficient models
- Low usage with high spend usually means expensive models

Why Provider Analytics Matters

Most teams intentionally use multiple frontier model providers to:

- Access best-in-class models as they are released
- Avoid vendor lock-in
- Balance performance, cost, and availability

This chart helps you validate that strategy and detect drift toward a single provider.

Daily Overview Metrics (Right Panel)

On the right side of the page you get fast, daily context.

Total AI Spend
- Daily spend trend
- Useful for spotting spikes or drops immediately

Total AI Requests
- Number of requests flowing through the gateway per day
- Separates traffic growth from pricing effects

Input and Output Tokens
- Daily input tokens and daily output tokens
- Makes prompt growth and response inflation obvious

Spend per AI Request
- Average cost per request for the selected time range
- One of the fastest ways to detect inefficiency: if spend per request is rising while request count is flat, your prompts or model choices are getting more expensive

Spend Breakdown by Dimension (Bottom Section)

The bottom half of the page lets you break down daily spend across multiple dimensions.

Default View

By default, spend is broken down by model, showing daily spend bars and total spend per model for the selected time range. This immediately highlights:

- Expensive models
- Usage consistency
- Sudden behavior changes

Filters and Dimensions

You can change how spend is broken down using filters. Supported dimensions:

- Business unit
- Platform
- Use case
- Provider
- Model
- Region
- Token type (input or output)

You can select one or more filters and apply them to re-slice the data. Common examples:

- Spend by business unit instead of model
- Spend by provider and region
- Spend by use case across models
- Input vs. output token cost analysis

This flexibility is intentional: there is no single correct view of AI spend.

Applying Filters

1. Select the dimensions and values you want to analyze.
2. Click Apply Filters.

The charts update immediately for the selected date range, and filters persist until changed or cleared. A sketch of this kind of re-slicing follows below.

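Conceptually, each of these breakdowns is a group-by over per-request gateway usage records. The following is a minimal Python sketch of that idea, assuming a hypothetical export with fields such as date, provider, model, business_unit, and spend; it illustrates the grouping only and is not the product's actual schema or API.

```python
# Minimal sketch: re-slicing daily spend by an arbitrary dimension.
# The record fields (date, provider, model, business_unit, spend) are
# illustrative assumptions, not the product's actual export schema.
import pandas as pd

records = pd.DataFrame([
    {"date": "2024-06-01", "provider": "openai",    "model": "gpt-4o",            "business_unit": "support",   "spend": 412.50},
    {"date": "2024-06-01", "provider": "anthropic", "model": "claude-3-5-sonnet", "business_unit": "marketing", "spend": 230.10},
    {"date": "2024-06-02", "provider": "openai",    "model": "gpt-4o-mini",       "business_unit": "support",   "spend": 88.75},
    {"date": "2024-06-02", "provider": "anthropic", "model": "claude-3-5-sonnet", "business_unit": "support",   "spend": 301.40},
])
records["date"] = pd.to_datetime(records["date"])

def daily_spend_by(dimension: str, df: pd.DataFrame) -> pd.DataFrame:
    """Return daily spend totals broken down by a single dimension."""
    return (
        df.groupby(["date", dimension], as_index=False)["spend"]
          .sum()
          .sort_values(["date", "spend"], ascending=[True, False])
    )

# "Spend by business unit instead of model," as in the examples above:
print(daily_spend_by("business_unit", records))
```

The same helper re-sliced by "provider" or "model" mirrors switching the filter in the dashboard: the underlying records do not change, only the dimension they are summed over.
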
How This Page Is Meant to Be Used

Use the AI Spend Analytics page to:

- Understand why spend is changing
- Compare providers and models over time
- Detect inefficient routing or model selection
- Validate a multi-provider strategy
- Identify candidates for alerts, limits, or optimization

This page is for operators, platform teams, and FinOps or AI governance owners who need precision, not summaries.

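As a closing illustration of the derived metrics above, here is a minimal sketch, in plain Python with invented numbers, of how spend per request and cost per token relate total spend to traffic volume; the record shape is an assumption for illustration only.

```python
# Minimal sketch: the two ratio metrics used above to separate traffic growth
# from pricing effects. Record fields and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UsageRecord:
    provider: str
    model: str
    input_tokens: int
    output_tokens: int
    spend_usd: float

records = [
    UsageRecord("openai", "gpt-4o", 1_200, 350, 0.0095),
    UsageRecord("anthropic", "claude-3-5-sonnet", 900, 500, 0.0102),
    UsageRecord("openai", "gpt-4o-mini", 2_500, 800, 0.0008),
]

total_spend = sum(r.spend_usd for r in records)
total_requests = len(records)
total_tokens = sum(r.input_tokens + r.output_tokens for r in records)

spend_per_request = total_spend / total_requests
cost_per_million_tokens = total_spend / total_tokens * 1_000_000

print(f"Spend per AI request: ${spend_per_request:.4f}")
print(f"Cost per million tokens: ${cost_per_million_tokens:.2f}")
# Rising spend per request with a flat request count points at prompt growth
# or a shift toward more expensive models, not at more traffic.
```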
