Why cost savings through cloud repatriation are a myth

For organisations under cost pressure, “leaving the cloud” sounds decisive. From our experience, it’s a distraction at best for most.

There’s a strange trend at play these days. Many organisations went to the cloud to cut costs and move faster. A few years later, some now argue that going back on-prem will do the same. Both can be true in specific cases, but as a broad strategy, swinging between extremes wastes time and money. We think the biggest, fastest, and lowest-risk savings come from improving architectural practices and running real FinOps, not forklifting workloads back on-prem. This isn’t about cheerleading for the cloud; it’s about doing things in the right sequence, and for maximum impact.

Let’s debunk some cloud repatriation myths first

  • Myth 1: On-prem is always cheaper. It can be—if you already have spare capacity, disciplined operations, cheap power, and stable workloads. Many organisations don’t. Hidden costs (hardware refresh, facilities, DR, 24/7 ops) return quickly.
  • Myth 2: Repatriation is fast. It simply isn’t. The planning, procurement, migration, and retraining cycles are long. Meanwhile, your current bill will keep ticking.
  • Myth 3: Cloud cost = Vendor pricing. In reality, design and behaviour drive most spend. Think about chatty data flows, unbounded storage, idle fleets, and “temporary” proof-of-concepts that never died.

Don't get us wrong: there are repatriation wins to be had, especially for large, predictable compute (e.g., steady batch ETL, video encoding, model training on fixed schedules) when you’ve nailed capacity planning and can sweat your assets. But most firms haven’t earned that level of maturity yet. Start where the money is leaking now.

Triage workloads (keep, migrate, optimise)

Start with triage: fix the most expensive and fast-changing workloads first—don’t try to move everything at once. Take one portfolio view across business domains and sort by run-rate and change-rate:

  • Keep (and optimise) in cloud: variable, spiky, or customer-facing services; anything needing elastic scale or global reach.
  • Migrate within cloud: lift-and-shift zombies to managed services; kill pet VMs; move to serverless where steady state is low.
  • Consider on-prem or edge: heavy, predictable compute with tight data locality or extreme egress fees; regulatory niches that truly demand physical control. Run a tight business case with all costs in.
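As a sketch, the triage above can be expressed as a simple ranking rule. Everything below is hypothetical: the workload data, the thresholds, and the bucket names are illustrations only, to be tuned to your own portfolio.

```python
# Hypothetical portfolio triage: rank workloads by monthly run-rate,
# then bucket them per the keep / migrate / consider rules above.
workloads = [
    {"name": "checkout-api", "monthly_cost": 42_000, "deploys_per_month": 30, "utilisation": 0.35},
    {"name": "batch-etl",    "monthly_cost": 61_000, "deploys_per_month": 2,  "utilisation": 0.85},
    {"name": "legacy-crm",   "monthly_cost": 9_000,  "deploys_per_month": 0,  "utilisation": 0.10},
]

def triage(w: dict) -> str:
    # Illustrative thresholds only; adjust to your own change-rate and
    # utilisation profiles.
    if w["deploys_per_month"] == 0:
        return "migrate-to-managed"      # zombie: managed service, or retire
    if w["deploys_per_month"] >= 10 or w["utilisation"] < 0.5:
        return "keep-and-optimise"       # variable or spiky: stays in cloud
    return "business-case-on-prem"       # steady, heavy: run the numbers

# Work the most expensive items first.
for w in sorted(workloads, key=lambda w: -w["monthly_cost"]):
    print(f'{w["name"]}: {triage(w)}')
```

The point of the sketch is the ordering: sort by run-rate so the first wave of decisions lands on the biggest bills.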

Make decisions in 60-90-day waves. Savings land faster and confidence grows with each pass.

Architecture patterns that optimise spend

You don’t need a cloud rewrite to see meaningful savings; a few focused interventions are usually enough.

  • Keep data and apps close: Put services near the data they use so you’re not paying to move it around the world. Use content caches (CDNs) for frequently read files and pages. Trim cross-region chatter.
  • Use the right storage shelf: Most firms pay “hot” prices for “cold” data. Set simple rules so old files drift to cheaper shelves, and test how fast you can bring them back.
  • Only run things when you need them: Replace always-on pollers and timers with simple events and schedules so systems sleep between bursts.
  • Right-size by design: Favour smaller machines with automatic scaling over one big box that idles. Build for small, replaceable parts so you don’t feel forced to over-size.
  • Prune the garden monthly: Delete orphaned disks, stale backups, unused addresses and “temporary” test systems. A 60-90 minute clean-up can save real money.
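The “right storage shelf” rule above can be captured in a few lines. This is a minimal sketch under assumed tier names and thresholds; map them to your provider’s actual storage classes and lifecycle features.

```python
from datetime import date, timedelta

# Illustrative lifecycle rule: choose a storage tier from days since last
# access. Tier names and cut-offs are assumptions, not any provider's API.
def storage_tier(last_accessed: date, today: date) -> str:
    age_days = (today - last_accessed).days
    if age_days <= 30:
        return "hot"        # frequently read: keep on fast storage
    if age_days <= 180:
        return "cool"       # occasional reads: cheaper shelf
    return "archive"        # rarely touched: cheapest, slower retrieval

today = date(2025, 6, 1)
print(storage_tier(today - timedelta(days=10), today))
print(storage_tier(today - timedelta(days=400), today))
```

As the bullet says, pair any rule like this with a retrieval test: knowing a file drifted to “archive” is only half the job if you’ve never timed bringing it back.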

FinOps levers that can trim cloud bills within the quarter

FinOps is not merely a dashboard of historical data; it is about decisions and owners.

  • Commitments: buy discounted capacity for the steady, predictable part of your usage. Start small and add monthly.
  • Rightsizing: set simple rules to flag idle or underused resources; make weekly approvals and turn-offs a habit.
  • Scheduling: switch off non-production environments overnight and at weekends if you can; give every test area an expiry date.
  • Guardrails: budgets and alerts per team; avoid expensive machine types and cross-region data moves by default; enforce tags so spend has an owner.
  • Make costs visible: show unit costs in business language (per order, per sign-up, per model run). When teams see this, behaviour changes fast.
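To make the scheduling lever concrete, here is a minimal sketch of an office-hours check for non-production environments. The window itself is an assumption; adapt it to your teams’ working hours and time zones.

```python
from datetime import datetime

# Illustrative non-production schedule: run on weekdays 07:00-19:00 only.
def should_run(now: datetime) -> bool:
    on_a_weekday = now.weekday() < 5        # Mon-Fri
    in_office_hours = 7 <= now.hour < 19
    return on_a_weekday and in_office_hours

# 12h x 5 days = 60 of 168 weekly hours, so even this crude window cuts
# roughly 64% of always-on non-production runtime.
print(should_run(datetime(2025, 6, 2, 10, 0)))   # a Monday morning
```

In practice you would wire a check like this into your scheduler or automation tooling rather than run it by hand; the arithmetic in the comment is the point.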

Run-state governance that makes savings sustainable

Wins start to fade if governance is optional. It’s best to keep it simple and visible.

  • One page per domain: run-rate, monthly change, top 5 drivers, top 5 actions.
  • Blended scorecards: a few delivery metrics (lead time, change fail rate) alongside cost/unit. If stability collapses, cost wins won’t last.
  • Weekly 30-minute review: product, engineering, and finance look at deltas, approve actions, and clear blockers.
  • Monthly portfolio steer: rebalance commitments, set new guardrails, and agree next 90-day wave.
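A one-pager per domain needn’t be a big tooling project. As a sketch, with hypothetical tagged line items and last month’s total, the run-rate, month-on-month change, and top drivers fall out of a few lines:

```python
from collections import Counter

# Hypothetical tagged spend for one domain: (service tag, cost this month).
line_items = [
    ("search", 18_000), ("checkout", 26_000), ("recommendations", 9_000),
    ("search", 4_000), ("data-platform", 31_000), ("notifications", 2_500),
]
last_month_total = 84_000

spend = Counter()
for tag, cost in line_items:
    spend[tag] += cost

run_rate = sum(spend.values())
change_pct = 100 * (run_rate - last_month_total) / last_month_total
top_drivers = spend.most_common(5)   # the "top 5 drivers" line of the one-pager

print(f"Run-rate: £{run_rate:,} ({change_pct:+.1f}% MoM)")
for tag, cost in top_drivers:
    print(f"  {tag}: £{cost:,}")
```

Enforced tagging (the guardrails lever above) is what makes this trivial; without an owner tag on every line item, the top-drivers list is guesswork.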

The bottom line

Repatriation can make sense in specific cases, but treating it as the default response will slow savings and add risk. Most organisations will do better, and do it faster, by fixing their architecture seams, applying FinOps discipline, and governing the run-state like it matters. Once you’ve done that work (and only then), revisit the handful of workloads where on-prem might truly win. Make it a business decision, not a headline.
