M365 Show with Mirko Peters - Microsoft 365 Digital Workplace Daily

Build Azure Apps WITHOUT Writing Boilerplate

How many hours have you lost wrestling with boilerplate code just to get an Azure app running? Most developers can point to days spent setting up configs, wiring authentication, or fighting with deployment scripts before writing a single useful line of code.

Now, imagine starting with a prompt instead. In this session, I’ll show a short demo where we use GitHub Copilot for Azure to scaffold infrastructure, run a deployment with the Azure Developer CLI, and even fix a runtime error—all live, so you can see exactly how the flow works.

Because if setup alone eats most of your time, there’s a bigger problem worth talking about.

Why Boilerplate Holds Teams Back

Think about the last time you kicked off a new project. The excitement’s there—you’ve got an idea worth testing, you open a fresh repo, and you’re ready to write code that matters. Instead, the day slips away configuring pipelines, naming resources, and fixing some cryptic YAML error. By the time you shut your laptop, you don’t have a working feature—you have a folder structure and a deployment file. It’s not nothing, but it doesn’t feel like progress either.

In many projects, a surprisingly large portion of that early effort goes into repetitive setup work. You’re filling in connection strings, creating service principals, deciding on arbitrary resource names, copying secrets from one place to another, or hunting down which flag controls authentication. None of it is technically impressive. It’s repeatable scaffolding we’ve all done before, and yet it eats up cycles every time because the details shift just enough to demand attention. One project asks for DNS, another for networking, the next for managed identity. The variations keep engineers stuck in setup mode longer than they expected.

What makes this drag heavy isn’t just the mechanics—it’s the effect it has on teams. When the first demo rolls around and there’s no visible feature to show, leaders start asking hard questions, and developers feel the pressure of spending “real” effort on things nobody outside engineering will notice. Teams often report that these early sprints feel like treading water, with momentum stalling before it really begins. In a startup, that can mean chasing down a misconfigured firewall instead of iterating on the product’s value. In larger teams, it shows up as week-long delays before even a basic “Hello World” can be deployed. The cost isn’t just lost time—it’s morale and missed opportunity.

Here’s the good news: these barriers are exactly the kinds of steps that can be automated away. And that’s where new tools start to reshape the equation. Instead of treating boilerplate as unavoidable, what if the configuration, resource wiring, and secrets management could be scaffolded for you, leaving more space for real innovation? Here’s how Copilot and azd attack exactly those setup steps—so you don’t repeat the same manual work every time.

Copilot as Your Cloud Pair Programmer

That’s where GitHub Copilot for Azure comes in—a kind of “cloud pair programmer” sitting alongside you in VS Code. Instead of searching for boilerplate templates or piecing together snippets from old repos, you describe what you want in natural language, and Copilot suggests the scaffolding to get you started. The first time you see it, it feels less like autocomplete and more like a shift in how infrastructure gets shaped from the ground up.

Here’s what that means. Copilot for Azure isn’t just surfacing random snippets—it’s generating infrastructure-as-code artifacts, often in Bicep or ARM format, that match common Azure deployment patterns. Think of it as a starting point you can iterate on, not a finished production blueprint. For example, say you type: “create a Python web app using Azure Functions with a SQL backend.” In seconds, files appear in your project that define a Function App, create the hosting plan, provision a SQL Database with firewall rules, and insert connection strings. That scaffolding might normally take hours or days for someone to build manually, but here it shows up almost instantly.
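To make that concrete, here is a hand-written sketch of the kind of Bicep scaffolding described above — a Function App whose app settings reference a SQL Database connection. The resource names, API versions, and the trimmed-down property set are illustrative assumptions for this walkthrough, not actual Copilot output; a real template would also define a hosting plan, storage account, and firewall rules.

```bicep
// Illustrative sketch only — names and settings are placeholders, not Copilot output.
param location string = resourceGroup().location
param sqlAdminLogin string
@secure()
param sqlAdminPassword string

resource sqlServer 'Microsoft.Sql/servers@2022-05-01-preview' = {
  name: 'demo-sql-server'
  location: location
  properties: {
    administratorLogin: sqlAdminLogin
    administratorLoginPassword: sqlAdminPassword
  }
}

resource sqlDb 'Microsoft.Sql/servers/databases@2022-05-01-preview' = {
  parent: sqlServer
  name: 'demo-db'
  location: location
}

resource functionApp 'Microsoft.Web/sites@2022-09-01' = {
  name: 'demo-func-app'
  location: location
  kind: 'functionapp'
  properties: {
    siteConfig: {
      appSettings: [
        {
          // This wiring — the app referencing the database it was provisioned with —
          // is the part worth pointing out on screen.
          name: 'SqlConnectionString'
          value: 'Server=tcp:${sqlServer.properties.fullyQualifiedDomainName},1433;Database=${sqlDb.name};'
        }
      ]
    }
  }
}
```

The point of showing a sketch like this is that the cross-resource reference (`sqlServer.properties.fullyQualifiedDomainName`) is exactly the wiring that is tedious to get right by hand and that generated scaffolding gives you for free.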

This is the moment where the script should pause for a live demo. Show the screen in VS Code as you type in that prompt. Let Copilot generate the resources, and then reveal the resulting file list—FunctionApp.bicep, sqlDatabase.bicep, maybe a parameters.json. Open one of them and point out a key section, like how the Function App references the database connection string. Briefly explain why that wiring matters—because it’s the difference between a project that’s deployable and a project that’s just “half-built.” Showing the audience these files on screen anchors the claim and lets them judge for themselves how useful the output really is.

Now, it’s important to frame this carefully. Copilot is not “understanding” your project the way a human architect would. What it’s doing is using AI models trained on a mix of open code and Azure-specific grounding so it can map your natural language request to familiar patterns. When you ask for a web app with a SQL backend, the system recognizes the elements typically needed—App Service or Function App, a SQL Database, secure connection strings, firewall configs—and stitches them together into templates. There’s no mystery, just a lot of trained pattern recognition that speeds up the scaffolding process.

Developers might assume that AI output is always half-correct and a pain to clean up. And with generic code suggestions, that often rings true. But here you’re starting from infrastructure definitions that are aligned with how Azure resources are actually expected to fit together. Do you need to review them? Absolutely. You’ll almost always adjust naming conventions, check security configurations, and make sure they comply with your org’s standards. Copilot speeds up scaffolding—it doesn’t remove the responsibility of production-readiness. Think of it as knocking down the blank-page barrier, not signing off your final IaC.

This also changes team dynamics. Instead of junior developers spending their first sprint wrestling with YAML errors or scouring docs for the right resource ID format, they can begin reviewing generated templates and focusing energy on what matters. Senior engineers, meanwhile, shift from writing boilerplate to reviewing structure and hardening configurations. The net effect is fewer hours wasted on rote setup, more attention given to design and application logic. For teams under pressure to show something running by the next stakeholder demo, that difference is critical.

Behind the scenes, Microsoft designed this Azure integration intentionally for enterprise scenarios. It ties into actual Azure resource models and the way the SDKs expect configurations to be defined. When resources appear linked correctly—Key Vault storing secrets, a Function App referencing them, a database wired securely—it’s because Copilot pulls on those structured expectations rather than improvising. That grounding is why people call it a pair programmer for the cloud: not perfect, but definitely producing assets you can move forward with.

The bottom line? Copilot for Azure gives you scaffolding that’s fast, context-aware, and aligned with real-world patterns. You’ll still want to adjust outputs and validate them—no one should skip that—but you’re several steps ahead of where you’d be starting from scratch.

So now you’ve got these generated infrastructure files sitting in your repo, looking like they’re ready to power something real. But that leads to the next question: once the scaffolding exists, how do you actually get it running in Azure without spending another day wrestling with commands and manual setup?

From Scaffolding to Deployment with AZD

This is where the Azure Developer CLI, or azd, steps in. Think of it less as just another command-line utility and more as a consistent workflow that bridges your repo and the cloud. Instead of chaining ten commands together or copying values back and forth, azd gives you a single flow for creating an environment, provisioning resources, and deploying your application. It doesn’t remove every decision, but it makes the essential path something predictable—and repeatable—so you’re not reinventing it every project.

One key clarification: azd doesn’t magically “understand” your app structure out of the box. It works with configuration files in your repo or prompts you for details when they’re missing. That means your project layout and azd’s environment files work together to shape what gets deployed. In practice, this design keeps it transparent—you can always open the config to see exactly what’s being provisioned, rather than trusting something hidden behind an AI suggestion.
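For reference, the configuration file azd reads is an azure.yaml at the repo root that maps your code to Azure hosts. A minimal sketch might look like this — the project name, folder path, and service name here are hypothetical, so treat it as a shape to recognize rather than a file to copy:

```yaml
# azure.yaml — minimal illustrative service mapping for azd.
name: demo-todo-app        # environment/project name (placeholder)
services:
  api:
    project: ./src/api     # folder containing the app code
    language: python       # runtime azd packages for deployment
    host: function         # deploy target: an Azure Functions app
```

Because this file lives in source control, anyone cloning the repo can open it and see exactly what azd intends to deploy — which is the transparency point made above.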

Let’s compare the before and after. Traditionally you’d push infrastructure templates, wait, then spend half the afternoon in the Azure Portal fixing what didn’t connect correctly. Each missing connection string or misconfigured role sent you bouncing between documentation, CLI commands, and long resource JSON files. With azd, the workflow is tighter:

- Provision resources as a group.
- Wire up secrets and environment variables automatically.
- Deploy your app code directly against that environment.

That cuts most of the overhead out of the loop. Instead of spending your energy on plumbing, you’re watching the app take shape in cloud resources with less handholding.

This is a perfect spot to show the tool in action. On-screen in your terminal, run through a short session:

azd init
azd provision
azd deploy

Narrate as you go—first command sets up the environment, second provisions the resources, third deploys both infrastructure and app code together. Let the audience see the progress output and the final “App deployed successfully” message appear, so they can judge exactly what azd does instead of taking it on faith. That moment validates the workflow and gives them something concrete to try on their own.

The difference is immediate for small teams. A startup trying to secure funding can stand up a working demo in a day instead of telling investors it’ll be ready “next week.” Larger teams see the value in onboarding too. When a new developer joins, the instructions aren’t “here’s three pages of setup steps”—it’s “clone the repo, run azd, and start coding.” That predictability lowers the barrier both for individuals and for teams with shifting contributors.

Of course, there are still times you’ll adjust what azd provisioned. Maybe your org has naming rules, maybe you need custom networking. That’s expected. But the scaffolding and first deployment are no longer blockers—they’re the baseline you refine instead of hurdles you fight through every time. In that sense, azd speeds up getting to the “real” engineering work without skipping the required steps.

The experience of seeing your application live so quickly changes how projects feel. Instead of calculating buffer time just to prepare a demo environment, you can focus on what your app actually does. The combination of Copilot scaffolding code and azd deploying it through a clean workflow removes the heavy ceremony from getting started.

But deployment is only half the story. Once your app is live in the cloud, the challenges shift. Something will eventually break, whether it’s a timeout, a missing secret, or misaligned scaling rules. The real test isn’t just spinning up an environment—it’s how quickly you can understand and fix issues when they surface. That’s where the next set of tools comes into play.

AI-Powered Debugging and Intelligent Diagnostics

When your app is finally running in Azure, the real test begins—something unexpected breaks. AI-powered debugging and intelligent diagnostics are designed to help in those exact moments. Cloud-native troubleshooting isn’t like fixing a bug on your laptop. Instead of one runtime under your control, the problem could sit anywhere across distributed services—an API call here, a database request there, a firewall blocking traffic in between. The result is often a jumble of error messages that feel unhelpful without context, leaving developers staring at logs and trying to piece together a bigger picture.

The challenge is less about finding “the” error and more about tracing how small misconfigurations ripple across services. One weak link, like a mismatched authentication token or a missing environment variable, can appear as a vague timeout or a generic connection failure. Traditionally, you’d field these issues by combing through Application Insights and Azure Monitor, then manually cross-referencing traces to form a hypothesis—time-consuming, often frustrating work.

This is where AI can assist by narrowing the search space. Copilot doesn’t magically solve problems, but it can interpret logs and suggest plausible diagnostic next steps. Because it uses the context of code and error messages in your editor, it surfaces guidance that feels closer to what you might try anyway—just faster. To make this meaningful, let’s walk through an example live.

Here’s the scenario: your app just failed with a database connection error. On screen, we’ll show the error snippet: “SQL connection failed. Client unable to establish connection.” Normally you’d start hunting through firewall rules, checking connection strings, or questioning whether the database even deployed properly. Instead, in VS Code, highlight the log, call up Copilot, and type a prompt: “Why is this error happening when connecting to my Azure SQL Database?” Within moments, Copilot suggests that the failure may be due to firewall rules not allowing traffic from the hosting environment, and also highlights that the connection string in configuration might not be using the correct authentication type. Alongside that, it proposes a corrected connection string example.
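A before/after along those lines might look like the following. The server and database names are placeholders for this demo, and the CLI line shows one common way to open the firewall to Azure-hosted services (the 0.0.0.0 start/end range is Azure's convention for "allow Azure services"):

```
# Hypothetical example — server/database names are placeholders, not real resources.
# Before: SQL auth, with a password copied between configs
Server=tcp:demo-sql.database.windows.net,1433;Database=demo-db;User ID=appuser;Password=<secret>;Encrypt=True;

# After: managed identity, no stored secret to leak or rotate
Server=tcp:demo-sql.database.windows.net,1433;Database=demo-db;Authentication=Active Directory Managed Identity;Encrypt=True;

# If the firewall was the culprit, allowing Azure services can look like:
az sql server firewall-rule create --name AllowAzureServices \
  --server demo-sql --resource-group demo-rg \
  --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
```

Whether the fix is the auth mode, the firewall, or both depends on the telemetry — which is why the next step is validating in staging rather than assuming the first suggestion is right.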

Now, apply that change in your configuration file. Walk the audience through replacing the placeholder string with the new suggestion. Reinforce the safe practice here: “Copilot’s answer looks correct, but before we assume it’s fixed, we’ll test this in staging. You should always validate suggestions in a non-production environment before rolling them out widely.” Then redeploy or restart the app in staging to check if the connection holds. This on-screen flow shows the AI providing value—not by replacing engineering judgment, but by giving you a concrete lead within minutes instead of hours of log hunting.

Paired with telemetry from Application Insights or Azure Monitor, this process gets even more useful. Those services already surface traces, metrics, and failure signals, but it’s easy to drown in the detail. By copying a snippet of trace data into a Copilot prompt, you can anchor the AI’s suggestions around your actual telemetry. Instead of scrolling through dozens of graphs, you get an interpretation: “These failures occur when requests exceed the database’s DTU allocation; check whether auto-scaling rules match expected traffic.” That doesn’t replace the observability platform—it frames the data into an investigative next step you can act on.
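As one example of the kind of trace data worth pasting into a prompt, a short Kusto query against the standard Application Insights schema can pull just the failing SQL calls. The tables and columns below follow the documented schema; the five-minute bin is an arbitrary choice for this sketch:

```kusto
// Failing SQL dependency calls, bucketed over time — a starting point, not a diagnosis.
dependencies
| where type == "SQL" and success == false
| summarize failures = count(), avgDurationMs = avg(duration)
    by resultCode, bin(timestamp, 5m)
| order by timestamp desc
```

Handing Copilot a result set like this, rather than a raw log dump, is what anchors its suggestions to your actual failure pattern instead of generic advice.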

The bigger win is in how it reframes the rhythm of debugging. Instead of losing a full afternoon parsing repetitive logs, you cycle faster between cause and hypothesis. You’re still doing the work, but with stronger directional guidance. That difference can pull a developer out of the frustration loop and restore momentum. Teams often underestimate the morale cost of debugging sessions that feel endless. With AI involved, blockers don’t linger nearly as long, and engineers spend more of their energy on meaningful problem solving.

And when developers free up that energy, it shifts where the attention goes. Less time spelunking in log files means more time improving database models, refining APIs, or making user flows smoother. That’s work with visible impact, not invisible firefighting. AI-powered diagnostics won’t eliminate debugging, but they shrink its footprint. Problems still surface, no question, but they stop dominating project schedules the way they often do now.

The takeaway is straightforward: Copilot’s debugging support creates faster hypothesis generation, shorter downtime, and fewer hours lost to repetitive troubleshooting. It’s not a guarantee the first suggestion will always be right, but it gives you clarity sooner, which matters when projects are pressed for time. With setup, deployment, and diagnostics all seeing efficiency gains, the natural question becomes: what happens when these cumulative improvements start to reshape the pace at which teams can actually deliver?

The Business Payoff: From Slow Starts to Fast Launches

The business payoff comes into focus when you look at how these tools compress the early friction of a project. Teams frequently report that when they pair AI-driven scaffolding with azd-powered deployments, they see faster initial launches and earlier stakeholder demos. The real value isn’t just about moving quickly—it’s about showing progress at the stage when momentum matters most.

Setup tasks have a way of consuming timelines no matter how strong the idea or team is. Greenfield efforts, modernization projects, or even pilot apps often run into the same blocker: configuring environments, reconciling dependencies, and fixing pipeline errors that only emerge after hours of trial and error. While engineers worry about provisioning and authentication, leadership sees stalled velocity. The absence of visible features doesn’t just frustrate developers—it delays when business value is delivered. That lag creates risk, because stakeholders measure outcomes in terms of what can be demonstrated, not in terms of background technical prep.

This contrast becomes clear when you think about it in practical terms. Team A spends their sprint untangling configs and environment setup. Team B, using scaffolded infrastructure plus azd to deploy, puts an early demo in front of leadership. Stakeholders don’t need to know the details—they see one team producing forward motion and another explaining delays. The upside to shipping something earlier is obvious: feedback comes sooner, learning happens earlier, and developers are less likely to sit blocked waiting on plumbing to resolve before building features.

That advantage stacks over time. By removing setup as a recurring obstacle, projects shift their center of gravity toward building value instead of fighting scaffolding. More of the team’s focus lands on the product—tightening user flows, improving APIs, or experimenting with features—rather than copying YAML or checking secrets into the right vault. When early milestones show concrete progress, leadership’s questions shift from “when will something run?” to “what can we add next?” That change in tone boosts morale as much as it accelerates delivery.

It also transforms how teams work together. Without constant bottlenecks at setup, collaboration feels smoother. Developers can work in parallel because the environment is provisioned faster and more consistently. You don’t see as much time lost to blocked tasks or handoffs just to diagnose why a pipeline broke. Velocity often increases not by heroes working extra hours, but by fewer people waiting around. In this way, tooling isn’t simply removing hours from the schedule—it’s flattening the bumps that keep a group from hitting stride together.

Another benefit is durability. Because the workflows generated by Copilot and azd tie into source control and DevOps pipelines, the project doesn’t rest on brittle, one-off scripts. Instead, deployments become reproducible. Every environment is created in a consistent way, configuration lives in versioned files, and new developers can join without deciphering arcane tribal knowledge. Cleaner pipelines and repeatable deployments reduce long-term maintenance overhead as well as startup pain. That reliability is part of the business case—it keeps velocity predictable instead of dependent on a few specialists.

It’s important to frame this realistically. These tools don’t eliminate all complexity, and they won’t guarantee equal results for every team. But even when you account for adjustments—like modifying resource names, tightening security, or handling custom networking—the early blockers that typically delay progress are drastically softened. Some teams have shared that this shift lets them move into meaningful iteration cycles sooner. In our experience, the combination of prompt-driven scaffolding and streamlined deployment changes the pacing of early sprints enough to matter at the business level.

If you’re wondering how to put this into action right away, there are three simple steps you could try on your own projects. First, prompt Copilot to generate a starter infrastructure file for an Azure service you already know you need. Second, use azd to run a single environment deploy of that scaffold—just enough to see how the flow works in your repo. Third, when something does break, practice pairing your telemetry output with a Copilot prompt to test how the suggestions guide you toward a fix. These aren’t abstract tips; they’re tactical ways to see the workflow for yourself.

What stands out is that the payoff isn’t narrowly technical. It’s about unlocking a faster business rhythm—showing stakeholders progress earlier, gathering feedback sooner, and cutting down on developer idle time spent in setup limbo. Even small improvements here compound over the course of a project. The net result is not just projects that launch faster, but projects that grow more confidently because iteration starts earlier.

And at this stage, the question isn’t whether scaffolding, deploying, and debugging can be streamlined. You’ve just seen how that works in practice. The next step is recognizing what that unlocks: shifting focus away from overhead and into building the product itself. That’s where the real story closes.

Conclusion

At this point, let’s wrap with the key takeaway. The real value here isn’t about writing code faster—it’s about clearing away the drag that slows projects long before features appear. When the boilerplate is handled for you, effort shifts to delivering something visible much sooner.

Here’s the practical next step: don’t start your next Azure project from a blank config. Start it with a prompt, scaffold a small sample, then run azd in a non-production environment to see the workflow end to end. Prompt → scaffold → deploy → debug. That’s the flow.

If you try it, share one surprising thing Copilot generated for you in the comments—I’d love to hear what shows up. And if this walkthrough was useful, subscribe for more hands-on demos of real-world Azure workflows.
