AI FinOps - What's changed from 2025 and how do we charge for AI?

FinOps practices need to evolve to keep pace with new-age cloud economics driven by AI adoption


FinOps is still hugely important for cost control. But AI is starting to break some of its fundamental assumptions.

Over the last two years, our team at Meritos has delivered many cost and value optimisation projects. Common themes include supplier management, visible alignment of technology spend to business outcomes, and clarity on IT team roles. Delivering quick wins in these areas very often comes back to the cloud lifecycle and how well it is understood and managed: FinOps, in other words. Not much of a revelation. But FinOps is changing.

FinOps is still foundational (but no longer sufficient)

Cost optimisation, of which FinOps is a part, is about just that: visibility of spend, control of that spend, and decisions about change based on aligned outcomes. This is still important, but in 2026 the needle is shifting quickly from cost optimisation to value optimisation.

Why does the current FinOps model break for AI?

Organisations still need traditional FinOps because AI doesn't run everything (yet). But for those cloud workloads which do use AI, the usual equation of technology elements × usage of those elements = cost, assessed against the actual (or expected) business outcome, is not going to work. That's because AI is unpredictable. The workloads are spiky and, at the moment, very often experimental. We don't really know how long it will take to train a model, and attributing technology costs to individual business outcomes is hard because the platforms (LLMs, data pipelines, GPU clusters) are more fundamentally shared by design than previous capabilities. This applies to the models themselves as well as the infrastructure: foundational models may well be shared across multiple business areas.
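To illustrate why per-outcome attribution breaks down on shared platforms, here is a minimal sketch. All unit names and figures are invented: it simply splits a fixed shared-cluster cost in proportion to usage, and shows how one team's experiment can swing another team's allocated cost even though that team's workload never changed.

```python
# Hypothetical illustration: allocating the cost of a shared GPU cluster
# across business units in proportion to usage. All figures are invented.

def allocate_shared_cost(total_cost, usage_by_unit):
    """Split a shared platform cost in proportion to each unit's usage hours."""
    total_usage = sum(usage_by_unit.values())
    return {unit: total_cost * hours / total_usage
            for unit, hours in usage_by_unit.items()}

# Month 1: marketing runs a small experiment; finance trains steadily.
month1 = allocate_shared_cost(100_000, {"marketing": 50, "finance": 450})

# Month 2: the experiment scales up. Finance's workload is unchanged,
# yet its share of the same fixed cost falls sharply.
month2 = allocate_shared_cost(100_000, {"marketing": 400, "finance": 450})
```

The steady workload's chargeback moves with someone else's experimentation, which is exactly the volatility that makes spend-to-outcome conversations with finance so difficult.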

We’ve worked with organisations who are proactively looking at this issue, particularly how to meet the needs of finance at the same time as driving the business and enabling a very dynamic ‘test and learn’ approach to support building understanding of AI and proving its capabilities in the specific business context of that organisation. Some are reviewing or implementing a chargeback model and others are looking at how best to deliver the capability as a community.

What happens when current models are used to manage AI?

What we’ve seen is that the usual FinOps measures applied to AI often lead to:

  • Over-control, which does not support innovation
  • Under-control, which often leads to spiralling costs

GPUs change the economics

A big part of the challenge around AI is the economics of GPUs. They are quite distinct, in that they are much scarcer and don't scale as linearly as typical cloud infrastructure.

It becomes an easy trap to over-provision or reserve excess capacity due to the twin drivers of scarcity and unknown demand. So, how does an organisation manage scarce GPU availability, drive innovation in AI and control costs at the same time?

Guardrails, not handbrakes

We’d suggest a key part of the answer is to have clear guardrails around the use of, and investment in, AI across training and ongoing use. These should exist at two levels and are in addition to any existing guardrails.

Per-initiative guardrails include ensuring each initiative can:

  • Define what it is trying to learn or achieve
  • Objectively state success and failure criteria
  • Agree and document clear decision points around stop / pivot / scale

Organisational guardrails should cover:

  • Controls around access to high-cost resources (technology and human)
  • Utilisation monitoring of those same resources
  • Prioritisation of projects linked to potential business outcomes

These shift the conversation more towards ‘where should we place our bets’.
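One of the organisational guardrails above, utilisation monitoring of high-cost resources, can be sketched very simply. The pool names and the 60% threshold below are assumptions for illustration, not a standard: the point is that a plain floor on utilisation surfaces over-provisioned reservations for review before costs spiral.

```python
# Hypothetical sketch of a utilisation guardrail: flag any reserved
# GPU pool whose average utilisation falls below a policy threshold.
# Pool names, figures, and the threshold itself are invented.

UTILISATION_FLOOR = 0.60  # assumed policy threshold

def flag_underused_pools(pools, floor=UTILISATION_FLOOR):
    """Return the names of pools whose utilisation is below the floor."""
    return [name for name, utilisation in pools.items()
            if utilisation < floor]

pools = {
    "training-cluster": 0.82,
    "inference-pool": 0.35,
    "research-sandbox": 0.58,
}
review_list = flag_underused_pools(pools)
# review_list → ["inference-pool", "research-sandbox"]
```

In practice the utilisation figures would come from whatever monitoring the platform already exposes; the guardrail is the review conversation the flag triggers, not the script itself.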

Changing business behaviour

It’s important to note that chargeback and showback on their own are unlikely to change behaviour, such is the current focus on AI. The organisational guardrails above help to ensure that the organisation as a whole is making decisions around where to place bets on AI, rather than individual business units which often have very different budgets. Where resources are scarce, these exec-level guidelines are needed.

Metrics that help define AI value

Some clear AI metrics that support a more holistic look at the use of resources and alignment to business outcomes could include:

  • Cost per prediction
  • Cost per insight
  • Cost per automated decision
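These metrics are all unit economics: total cost divided by a countable business outcome. A minimal sketch, with all figures invented for illustration:

```python
# Hypothetical sketch of AI unit-economics metrics. Figures are invented.

def cost_per_outcome(total_cost, outcomes):
    """Cost divided by the number of business outcomes delivered."""
    if outcomes == 0:
        raise ValueError("no outcomes delivered yet")
    return total_cost / outcomes

monthly_platform_cost = 24_000        # e.g. shared GPU plus model API spend
predictions_served = 1_200_000
automated_decisions = 40_000

print(cost_per_outcome(monthly_platform_cost, predictions_served))   # 0.02 per prediction
print(cost_per_outcome(monthly_platform_cost, automated_decisions))  # 0.6 per automated decision
```

The hard part is not the arithmetic but agreeing, per initiative, what counts as a prediction, an insight, or an automated decision, and tracking those counts as reliably as the spend.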

Many organisations already have elements of these guardrails in place. The challenge now is to re-examine them through an AI lens and ensure FinOps evolves from controlling cost to enabling value.

That’s where the real shift into 2026 begins.
