M365 Show with Mirko Peters - Microsoft 365 Digital Workplace Daily

The Cloud Promise Is Broken

You’ve probably heard the promise: move to the cloud and you’ll get speed, savings, and security in one neat package. But here’s the truth—many organizations don’t see all those benefits at the same time. Why? Because the cloud isn’t a destination. It’s a moving target. Services change, pricing shifts, and new features roll out faster than teams can adapt.

In this podcast, I’ll explain why the setup phase so often stalls, where responsibility breaks down, and the specific targets you can set this quarter to change that.

First: where teams get stuck.

Why Cloud Migrations Never Really End

When teams finally get workloads running in the cloud, there’s often a sense of relief—like the hard part is behind them. But that perception rarely holds for long. What feels like a completed move often turns out to be more of a starting point, because cloud migrations don’t actually end. They continue to evolve the moment you think you’ve reached the finish line.

This is where expectations collide with reality. Cloud marketing often emphasizes immediate wins like lower costs, easy scalability, and faster delivery. The message can make it sound like just getting workloads into Azure is the goal. But in practice, reaching that milestone is only the beginning. Instead of a stable new state, organizations usually encounter a stream of adjustments: reconfiguring services, updating budgets, and fixing issues that only appear once real workloads start running. So why does that finish line keep evaporating? Because the platform itself never stops changing.

I’ve seen it happen firsthand. A company completes its migration, the project gets celebrated, and everything seems stable for a short while. Then costs begin climbing in unexpected ways. Security settings don’t align across departments. Teams start spinning up resources outside of governance. And suddenly “migration complete” has shifted into nonstop firefighting. It’s not that the migration failed—it’s that the assumption of closure was misplaced.

Part of the challenge is the pace of platform change. Azure evolves frequently, introducing new services, retiring old ones, and updating compliance tools. Those changes can absolutely be an advantage if your teams adapt quickly, but they also guarantee that today’s design can look outdated tomorrow. Every release reopens questions about architecture, cost, and whether your compliance posture is still solid.

The bigger issue isn’t Azure itself—it’s mindset. Treating migration as a project with an end date creates false expectations. Projects suggest closure. Cloud platforms don’t really work that way. They behave more like living ecosystems, constantly mutating around whatever you’ve deployed inside them. If all the planning energy goes into “getting to done,” the reality of ongoing change turns into disruption instead of continuous progress.

And when organizations treat migration as finished, the default response to problems becomes reactive. Think about costs. Overspending usually gets noticed when the monthly bill shows a surprise spike. Leaders respond by freezing spending and restricting activity, which slows down innovation. Security works the same way—gaps get discovered only during an audit, and fixes become rushed patch jobs under pressure. This reactive loop doesn’t just drain resources—it turns the cloud into an ongoing series of headaches instead of a platform for growth.

So the critical shift is in how progress gets measured. If you accept that migration never really ends, the question changes from “are we done?” to “how quickly can we adapt?” Success stops being about crossing a finish line and becomes about resilience—making adjustments confidently, learning from monitoring data, and folding updates into normal operations instead of treating them like interruptions.

That mindset shift changes how the whole platform feels. Scaling a service isn’t an emergency; it’s an expected rhythm. Cost corrections aren’t punishments; they’re optimization. Compliance updates stop feeling like burdens and become routine. In other words, the cloud doesn’t stop moving—but with the right approach, you move with it instead of against it.

Here’s the takeaway: the idea that “done” doesn’t exist isn’t bad news. It’s the foundation for continuous improvement. The teams that get the most out of Azure aren’t the ones who declare victory when workloads land; they’re the ones who embed ongoing adjustments into their posture from the start.

And that leads directly to the next challenge. If the cloud never finishes, how do you make use of the information it constantly generates? All that monitoring data, all those dashboards and alerts—what do you actually do with them?

The Data Trap: When Collection Becomes Busywork

And that brings us to a different kind of problem: the trap of collecting data just for the sake of it.

Dashboards often look impressive, loaded with metrics for performance, compliance, and costs. But the critical question isn’t how much data you gather—it’s whether anyone actually does something with it. Collecting metrics might satisfy a checklist, yet unless teams connect those numbers to real decisions, they’re simply maintaining an expensive habit.

Guides on cloud adoption almost always recommend gathering everything you can—VM utilization, cross-region latency, security warnings, compliance gaps, and cost dashboards. Following that advice feels safe. Nobody questions the value of “measuring everything.” But once those pipelines fill with numbers, the cracks appear. Reports are produced, circulated, sometimes even discussed—and then nothing changes in the environment they describe.

Frequently, teams generate polished weekly or monthly summaries filled with charts and percentages that appear to give insight. A finance lead acknowledges them, an operations manager nods, and then attention shifts to the next meeting. The cycle repeats, but workloads remain inefficient, compliance risks stay unresolved, and costs continue as before. The volume of data grows while impact lags behind.

This creates an illusion of progress. A steady stream of dashboards can convince leadership that risks are contained and spending is under control—simply because activity looks like oversight. But monitoring by itself doesn’t equal improvement. Without clear ownership over interpreting the signals and making changes, the information drifts into background noise. Worse, leadership may assume interventions are already happening, when in reality, no action follows.

Over time, the fatigue sets in. People stop digging into reports because they know those efforts rarely lead to meaningful outcomes. Dashboards turn into maintenance overhead rather than a tool for improvement. In that environment, opportunities for optimization go unnoticed. Teams may continue spinning up resources or ignoring configuration drift, while surface-level reporting gives the appearance of stability.

Think of it like a fitness tracker that logs every step, heartbeat, and sleep cycle. The data is there, but if it doesn’t prompt a change in behavior, nothing improves. The same holds for cloud metrics: tracking alone isn’t the point—using what’s tracked to guide decisions is what matters. If you’re already monitoring, the key step is to connect at least one metric directly to a specific action. For example, choose a single measure this week and use it as the trigger for a clear adjustment.

Here’s a practical pattern: if your Azure cost dashboard shows a virtual machine running at low utilization every night, schedule automation to shut it down outside business hours. Track the difference in spend over the course of a month. That move transforms passive monitoring into an actual savings mechanism. And importantly, it’s small enough to prove impact without waiting for a big initiative.
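
To make that concrete, here's a minimal sketch of what such automation could look like using the Azure SDK for Python. Everything specific in it is an assumption: the AZURE_SUBSCRIPTION_ID environment variable, the auto-shutdown tag that teams would apply to eligible VMs, and the idea that you run it on a nightly schedule (cron, an Azure Automation runbook, or a pipeline job). Treat it as a starting point, not a finished tool.

```python
"""Minimal sketch: deallocate tagged VMs outside business hours.

Assumes the azure-identity and azure-mgmt-compute packages, a subscription
id in AZURE_SUBSCRIPTION_ID, and a hypothetical 'auto-shutdown' tag on
eligible VMs. Run it on a nightly schedule and compare the monthly bill
before and after.
"""
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

for vm in compute.virtual_machines.list_all():
    # Only touch VMs explicitly opted in via the (assumed) tag.
    if (vm.tags or {}).get("auto-shutdown") != "true":
        continue

    # The resource group sits at a fixed position in the resource id.
    resource_group = vm.id.split("/")[4]

    # Deallocate (not just power off) so compute charges actually stop.
    print(f"Deallocating {vm.name} in {resource_group} ...")
    compute.virtual_machines.begin_deallocate(resource_group, vm.name).result()
```

One detail worth noting: deallocating matters more than simply powering off, because a VM that is stopped but still allocated keeps accruing compute charges.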

That’s the reality cloud teams need to accept: the value of monitoring isn’t in the report itself, it’s in the decisions and outcomes it enables. The equation is simple—monitoring plus authority plus follow-through equals improvement. Without that full chain, reporting turns into background noise that consumes effort instead of creating agility. It’s not visibility that matters, but whether visibility leads to action.

So the call to action is straightforward: if you’re producing dashboards today, tie one item to one decision this week. Prove value in motion instead of waiting for a sweeping plan. From there, momentum builds—because each quick win justifies investing time in the next. That’s how numbers shift from serving as reminders of missed opportunities to becoming levers for ongoing improvement.

But here’s where another friction point emerges. Even in environments where data is abundant and the will to act exists, teams often hit walls. Reports highlight risks, costs, and gaps—but the people asked to fix them don’t always control the budgets, tools, or authority needed to act. And without that alignment, improvement slows to a halt.

Which raises the real question: when the data points to a problem, who actually has the power to change it?

The Responsibility Mirage

That gap between visibility and action creates what I call the Responsibility Mirage. Just because a team is officially tagged as “owning” an area doesn’t mean they can actually influence outcomes. On paper, everything looks tidy—roles are assigned, dashboards are running, and reports are delivered. In practice, that ownership often breaks down the moment problems demand resources, budget, or access controls.

Here’s how it typically plays out. Leadership declares, “Security belongs to the security team.” Sounds logical enough. But then a compliance alert pops up: a workload isn’t encrypted properly. The security group can see the issue, but they don’t control the budget to enable premium features, and they don’t always have the technical access to apply changes themselves. What happens? They make a slide deck, log the risk, and escalate it upward. The result: documented awareness, but no meaningful action.

This is how accountability dead zones form. One team reports the problem but can’t fix it, while the team able to fix it doesn’t feel direct responsibility. The cycle continues, month after month, until things eventually escalate. That pattern can lead to audits, urgent remediation projects, or costly interruptions—but none of it is caused by a lack of data. It’s caused by misaligned authority.

Handing out titles without enabling execution is like giving someone car keys but never teaching them to drive. That gesture might look like empowerment, but it’s setting them up to fail. The fix isn’t complicated: whenever you assign responsibility, pair it with three things—authority to implement changes, budget to cover them, and a clear service-level expectation on how quickly those changes should happen. In short, design role charters where responsibility equals capability.

There’s also an easy way to check for these gaps before they cause trouble. For every area of responsibility, ask three simple questions out loud: Can this team approve the changes that data highlights? Do they have the budget to act promptly? Do they have the technical access to make the changes? If the answer is “no” to any of those, you’ve identified an accountability dead zone.
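
If it helps to make that check explicit, here's a small, purely illustrative sketch in Python. None of it maps to an Azure API; it simply encodes the three questions as a role charter you could review per team, with every name and value being hypothetical.

```python
from dataclasses import dataclass


@dataclass
class RoleCharter:
    """One area of responsibility and the three capabilities it needs."""
    area: str
    can_approve_changes: bool   # authority to act on what the data shows
    has_budget: bool            # funding to act promptly
    has_technical_access: bool  # permissions to make the change themselves

    def dead_zone(self) -> bool:
        """True if any 'no' answer leaves this area unable to act."""
        return not (self.can_approve_changes
                    and self.has_budget
                    and self.has_technical_access)


# Hypothetical examples; fill in your own areas and honest answers.
charters = [
    RoleCharter("Security baseline", can_approve_changes=True,
                has_budget=False, has_technical_access=True),
    RoleCharter("Cost optimization", can_approve_changes=True,
                has_budget=True, has_technical_access=True),
]

for charter in charters:
    if charter.dead_zone():
        print(f"Accountability dead zone: {charter.area}")
```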

When those gaps persist, issues pile up quietly in the background. Compliance alerts keep recurring because the teams that see them can’t intervene. Cost overruns grow because the people responsible for monitoring don’t have the budget flexibility to optimize. Slowly, what could have been routine fixes turn into larger problems that require executive attention. A minor policy misconfiguration drags on for weeks until an audit forces urgent remediation. A cost trend gets ignored until budget reviews flag it as unsustainable. These outcomes don’t happen because teams are negligent—they happen because responsibility was distributed without matching authority.

As that culture takes hold, teams start lowering their expectations. It becomes normal for risks to sit unresolved. It feels routine to surface the same problems in every monthly report. Nobody expects true resolution, just more tracking and logging. That normalization is what traps organizations into cycles of stagnation. Dashboards keep getting updated, reports keep circulating, and yet the environment doesn’t improve in any noticeable way.

The real turning point is alignment. When the same team that identifies an issue also has the authority, budget, and mandate to resolve it, continuous improvement becomes possible. Imagine cost optimization where financial accountability includes both spending authority and technical levers like workload rightsizing. Or compliance ownership where the same group that sees policy gaps can enforce changes directly instead of waiting for months of approvals. In those scenarios, problems don’t linger—they get surfaced and corrected as a single process.

That alignment breaks the repetition cycle. Problems stop recycling through reports and instead move toward closure. And once teams start experiencing that shift, they build the confidence to tackle improvements proactively rather than reactively. The cloud environment stops being defined by recurring frustrations and begins evolving as intended—through steady, continuous refinement.

But alignment alone isn’t the end of the story. Even perfectly structured responsibilities can hit bottlenecks when budgets dry up at crucial moments. Teams may be ready to act, empowered to make changes, and equipped with authority, only to discover the funding to back those changes isn’t there. And when that happens, progress stalls for an entirely different reason.

Budget Constraints: The Silent Saboteur

Even when teams have clear roles, authority, and processes, there’s another force that undercuts progress: the budget. This is the silent saboteur of continuous improvement.

On paper, everything looks ready—staff are trained, dashboards run smoothly, responsibilities line up. Then the funding buffer that’s supposed to sustain the next stage evaporates. In many organizations, this doesn’t come from leadership ignoring value. It comes from how the budget is framed at the start of a cloud project. Migration expenses get scoped, approved, and fixed with clear end dates. Moving servers, lifting applications, retiring data centers—that stack of numbers becomes the financial story. What comes after, the ongoing work where optimization and real savings emerge, is treated as optional. And once it’s forced to compete with day-to-day operational budgets, money rarely makes it to the improvement pile.

That’s where the slowdown begins. Migration is often seen as the heavy lift. The moment workloads are online, leaders expect spending to stabilize or even slide down. But the cloud doesn’t freeze just because the migration phase ends. Costs continue shifting. Optimization isn’t a one-time box to check—it’s a cycle that starts immediately and continues permanently. If budget planning doesn’t acknowledge that reality, teams watch their bills creep upward, while the very tools and processes designed to curb waste are cut first. What looks like efficiency in trimming those line items instead guarantees higher spend over time.

Teams feel this pressure directly. Engineers spot inefficiencies all the time: idle resources running overnight, storage volumes provisioned far beyond what’s needed, virtual machines operating full-time when they’re only required for part of the day. The fixes are straightforward—automation, smarter monitoring, scheduled workload shutdowns—but they require modest investments that suddenly don’t have budget coverage. Leadership expects optimization “later,” in a mythical second phase that rarely gets funded. In the meantime, waste accumulates, and with no capacity to act, skilled engineers become passive observers.

I’ve seen this pattern in organizations that migrated workloads cleanly, retiring data centers and hitting performance targets. The technical success was real—users experienced minimal disruption, systems stayed available. Yet once the initial celebration passed, funding for optimization tools was classified as an unnecessary luxury. With no permanent line item for improvement, costs increased steadily. A year later, the same organization was scrambling with urgent reviews, engaging consultants, and patching gaps under pressure. The technical migration wasn’t the problem; the lack of post-migration funding discipline was.

Ironically, these decisions often come from the pursuit of savings. Leaders believe trimming optimization budgets protects the bottom line, but the opposite happens. The promise of cost efficiency backfires. The environment drifts toward waste, and by the time intervention arrives, remediation is far more expensive. It’s like buying advanced hardware but refusing to pay for updates. The system still runs, but each missed update compounds the limitations. Over time, you fall behind—not because of the hardware itself, but because of the decision to starve it of upkeep.

Cloud expenses also stay less visible than they should be. Executives notice when bills spike or when an audit forces a fix, but it’s harder to notice the invisible savings that small, consistent optimizations achieve. Without highlighting those avoided costs, teams lack leverage to justify ongoing budgets. The result is a cycle where leadership waits for visible pain before releasing funds, even though small, steady investments would prevent the pain from showing up at all. Standing still in funding isn’t actually holding steady—it’s falling behind.

The practical lesson here is simple: treat optimization budgets as permanent, not optional. Just as you wouldn’t classify electricity or software licensing as temporary, ongoing improvement needs a recurring financial line item. A workable pattern to propose is this: commit to a recurring cloud optimization budget that is reviewed quarterly, tied to specific goals, and separated from one-time migration costs. This shifts optimization from a “maybe someday” item into a structural expectation.

And within that budget, even small interventions can pay off quickly. Something as simple as automating start and stop schedules for development environments that sit idle outside business hours can yield immediate savings. These aren’t complex projects. They’re proof points that budget directed at optimization translates directly into value. By institutionalizing these types of low-cost actions, teams build credibility that strengthens their case for larger optimizations down the road.
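
Even the business case for that kind of automation fits in a few lines. The numbers below are placeholders, not real prices; plug in the actual hourly rate for your VM size and region, and the schedule you intend to use.

```python
# Back-of-the-envelope check: what does an overnight shutdown save?
HOURLY_RATE_USD = 0.20   # assumed illustrative rate, not a real price
HOURS_OFF_PER_DAY = 12   # e.g. deallocated from 19:00 to 07:00
DAYS_PER_MONTH = 30

monthly_savings = HOURLY_RATE_USD * HOURS_OFF_PER_DAY * DAYS_PER_MONTH
print(f"Estimated monthly savings per VM: ${monthly_savings:.2f}")
```

Multiply that figure by every idle development VM, and the recurring optimization budget starts paying for itself.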

Budget decides whether teams are stuck watching problems grow or empowered to resolve them before they escalate. If improvement is treated as an expense to fight for every year, progress will always lag behind. When it’s treated as a permanent requirement of cloud operations, momentum builds.

And that’s where the conversation shifts from cost models to mindset. Budget thinking is inseparable from posture—because the way you fund cloud operations reflects whether your organization is prepared to react or ready to improve continuously.

The Posture That Creates Continuous Improvement

That brings us to the core idea: the posture that creates continuous improvement. By posture, I don’t mean a new tool, or a reporting dashboard, or a line drawn on an org chart. I mean the stance an organization takes toward ongoing change in the cloud. It’s about how you position the entire operation—leadership, finance, and engineering—to treat cloud evolution as the default, not the exception.

Most environments still run in reactive mode. A cost spike appears, and the reaction is to freeze spending. A compliance gap is discovered during an audit, and remediation is rushed. A performance issue cripples productivity, and operations scrambles with little context. In all these cases, the problem gets handled, but the pattern doesn’t change. The same incidents resurface in different forms, because the underlying stance hasn’t shifted. This is what posture really determines: whether you keep treating problems as interruptions, or redesign the system so change feels expected and manageable.

I worked with one organization that flipped this pattern by changing posture entirely. Their monitoring dashboards weren’t just for leadership reports. Every signal on cost, performance, or security was tied directly to action. Take cost inefficiency—it wasn’t logged for later analysis. Instead, the team had already set aside a recurring pool of funds and scheduled space in the roadmap to address it within one to two weeks. The process wasn’t about waiting for budget approval or forming a new project. It was about delivering rapid, predictable optimizations on a fixed cadence. Security alerts followed the same rhythm: each one triggered a structured remediation path that was already resourced. The difference wasn’t better technology—it was posture, using metrics as triggers for action instead of as static indicators.

So how do you build this kind of posture in practice? There are a few patterns you can adopt right away. Make measurement lead to action—tie each signal to a specific owner and a concrete adjustment. Co-locate budget and authority—make sure the team spotting an issue can also fund and execute its fix. Pre-fund remediation—set aside a small, recurring slice of time and budget to act on issues as soon as they crop up. And plan continuous adoption cycles—treat new cloud services and optimization steps as permanent roadmap items, not optional extras. These aren’t silver bullets, but as habits, they translate visibility into movement instead of noise.
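
As a thought experiment, you can even express that posture as data. The sketch below is not an Azure feature; it's a hypothetical signal-to-action registry showing what it means for every monitoring signal to have an owner, pre-funded remediation, and a concrete next step with a deadline.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class SignalRoute:
    """Binds one monitoring signal to an owner, funding, and an action."""
    signal: str                 # e.g. "vm-low-utilization" (hypothetical)
    owner: str                  # the team that can approve and execute
    prefunded: bool             # remediation budget already set aside
    days_to_remediate: int      # agreed window from detection to fix
    action: Callable[[], None]  # the concrete adjustment to run


def shut_down_idle_dev_vms() -> None:
    print("queue optimization sprint: deallocate idle dev VMs")


routes = [
    SignalRoute("vm-low-utilization", owner="Platform team",
                prefunded=True, days_to_remediate=14,
                action=shut_down_idle_dev_vms),
]

for route in routes:
    # A signal with no funded owner or deadline is reporting, not posture.
    assert route.prefunded and route.days_to_remediate <= 14, route.signal
    route.action()
```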

To validate whether your posture is working, focus on process-oriented goals instead of chasing hard numbers. One useful aspiration is to shorten the time between detection and remediation. If it used to take months or quarters to close issues, aim for days or weeks. The metric isn’t about reporting a percentage—it’s about confirming a posture shift. When problems move to resolution quickly, without constant escalations, that’s proof your organization has changed how it operates.
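
If you want to track that aspiration, the measurement itself is trivial. Here's a tiny sketch with made-up dates that computes the median time from detection to remediation; the posture question is whether that number trends from months toward days.

```python
from datetime import date
from statistics import median

# Hypothetical issue log: (detected, remediated) dates for closed issues.
issues = [
    (date(2024, 1, 3), date(2024, 1, 10)),
    (date(2024, 1, 15), date(2024, 2, 2)),
    (date(2024, 2, 1), date(2024, 2, 6)),
]

days_to_close = [(fixed - found).days for found, fixed in issues]
print(f"Median detection-to-remediation: {median(days_to_close)} days")
```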

Now, here’s the proactive versus reactive distinction boiled down. A reactive stance assumes stability should be the norm and only prepares to respond when something breaks. A proactive stance assumes the cloud is always shifting. So it deliberately builds recurring time, budget, and accountability to act on that movement. If your organization embraces that mindset, monitoring becomes forward-looking, and reports stop sitting idle because they feed into systems already designed to execute. To make it concrete: today, pick one monitoring signal, assign a team with both budget and authority, and schedule a short optimization sprint within the next two weeks. That’s how posture turns into immediate, visible improvement.

The real strength of posture is that once it changes, the other challenges follow. Data stops piling up in unused reports, because actions are already baked in. Responsibility aligns with authority and budget, closing those accountability dead zones. Ongoing optimization is funded as a given, not something that constantly needs to be re-justified. One change in stance helps all the other moving parts line up.

And the shift redefines how teams experience cloud operations. Instead of defense and damage control, they lean into cycles of improvement. Instead of being cornered by audits or budget crises, they meet them with plans already in place. Over time, that steadiness builds confidence—confidence to explore new cloud services, experiment with capabilities, and lead change rather than react to it. What started as a migration project evolves into a discipline that generates lasting value for the business.

The point is simple: posture is the leverage point. When you design for change as permanent, everything else begins to align. And that’s what turns cloud from a source of recurring frustration into an engine that builds agility and savings over time.

Conclusion

The real shift comes from treating posture as your framework for everything that follows. Think of it as three essentials: make measurement lead to action, align budget with authority, and turn monitoring into change that actually happens. If those three habits guide your cloud operations, you move past reporting problems and start closing them.

So here’s the challenge—don’t just collect dashboards. Pick one signal, assign a team with the power and budget to act, and close the loop this month.

I’d love to hear from you: what’s the one monitoring alert you wish always triggered action in your org? Drop it in the comments. And if this helped sharpen how you think about cloud operations, give it a like and subscribe for more guidance like this.

Adopt a posture that treats change as permanent, and continuous improvement as funded, expected work. That simple shift is how momentum starts.
