
Power BI Collaboration: Herding Cats or GitHub Fix?

Here’s my challenge to you: can your BI team trace every change in reports from dev to production, with approvals logged and automation carrying the load?

Quick checkpoint before we dive in—this session assumes you already know PBIP basics and Git terms like branch, commit, and pull request.

Here’s the roadmap: we’ll cover GitHub PR approvals, automated checks with Actions, and deployment pipelines for Power BI. These three make the difference between hoping things don’t break and actually knowing they won’t.

But first, let’s be real—PBIP isn’t the magic cure you might think it is.

Why PBIP Isn’t the Miracle Cure

The shiny new reality with Power BI Desktop Projects (.pbip) is that everything looks cleaner the moment you flip over. Instead of stuffing an entire report, model, and connections into one bulky PBIX “black box,” PBIP lays it all out as a structured folder full of text files. The semantic model gets its own model.bim file, which isn’t just readable—it also plugs straight into tools like Tabular Editor. Connections, visuals, and JSON metadata live in separate files. Suddenly Git actually works here: diffs show you exactly what changed, branches let multiple people experiment without tripping over each other, and you unlock compatibility with CI/CD tooling like GitHub Actions or Azure DevOps.
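
For orientation, here's a simplified sketch of what that folder layout can look like, using a hypothetical project named Sales (exact file names vary by Power BI Desktop version):

```
Sales.pbip                    -- small pointer file that opens the project in Desktop
Sales.Report/
  definition.pbir             -- binds the report to its semantic model
  report.json                 -- pages and visuals as JSON metadata
Sales.SemanticModel/
  definition.pbism            -- model settings
  model.bim                   -- the semantic model, readable by Tabular Editor
```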

That’s the good part—the technical unlock. The bad part is that PBIP doesn’t magically fix team dynamics. All it really does is shine a flashlight on the chaos you already had. With PBIX, every edit lived silently in a single binary file that no one could properly track. With PBIP, those same edits are now scattered across dozens of little files shouting for attention in Git. Yes, merge conflicts are visible. Yes, you can finally see which measure got changed. But if five people hammer away at the same dataset on Monday morning, Git still lights up red. The difference is that now you get to argue about it file by file instead of pretending the issue doesn’t exist.

Think of it like swapping your junk drawer for a labeled tool chest. Sure, every screwdriver now has a neat spot. But when half the office reaches for the Phillips head at once, friction doesn’t disappear—it just becomes a little easier to see who grabbed what. That’s what PBIP brings: clarity without discipline.

I’ve worked with teams who went all-in on PBIP expecting it to solve clashing edits. They dropped rules, skipped reviews, and trusted visibility to save them. Within a few sprints the result was an audit trail full of unexplained changes: columns renamed without reason, relationships adjusted without warning, measures rewritten with zero context. Seeing the edits didn’t make them safer—it just made the confusion permanent in Git history.

There’s also the matter of merge conflicts. PBIP makes them loud and clear, sometimes painfully so. Instead of a silent corruption buried in a PBIX, you’re staring at bright red conflict markers in VS Code showing three competing versions of the same DAX measure. Technically, that’s progress. At least you know exactly what broke. But being “louder” doesn’t mean the pain goes away—it just means you can no longer ignore it. Whether that’s a blessing or curse depends on how disciplined your team is about workflow.
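
If you've never seen it, here's a contrived example of those markers inside model.bim, using Git's diff3 conflict style so you get all three competing versions: yours, the common ancestor, and the incoming branch (the measure expressions and branch name are made up):

```
<<<<<<< HEAD
  "expression": "CALCULATE(SUM(Sales[Amount]), Sales[Status] = \"Closed\")"
||||||| merged common ancestors
  "expression": "SUM(Sales[Amount])"
=======
  "expression": "CALCULATE(SUM(Sales[Amount]), ALL(Sales[Region]))"
>>>>>>> feature/region-totals
```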

And here’s the crux: the real breakthrough isn’t PBIP itself, it’s what PBIP enables. By splitting out reports and models into text-based, version-controllable files, you can finally layer proper Git workflows on top—branches, pull requests, reviews, CI/CD pipelines. PBIP is the door, not the destination. It gives you per-component version control, but it doesn’t tell the people holding the steering wheel how to drive.

That means the same problems remain if you don’t wrap structure and automation around this setup. Without approvals and reviews, you’re just staring at more visible paw prints from the same herd of cats. Without guardrails like automated checks, you’re still trusting developers not to slip questionable edits straight into production. PBIP makes modern DevOps practices possible in Power BI, but it doesn’t enforce them. That’s still on the team.

So where does that leave us? With clarity but not order. PBIP strips away the monolithic fog of PBIX, but it doesn’t prevent messy collisions. For that, you need controls—something to stop people from overriding each other or sneaking untested changes through. That’s why the next step matters so much. GitHub Pull Requests take the messy stack of PBIP files and put traffic lights on top of them. Instead of five developers racing into the same intersection, you get checks, signals, and a log of who actually did what.

And that’s where the story shifts. Because as useful as PRs are, anyone who’s lived through them knows they can feel like both a lifeline and a choke point. So the obvious question before we dive in is: how do you keep the order they bring without turning every tiny color change into a committee meeting?

PR Approvals: From Chaos to Controlled Mischief

Now let’s talk about the real checkpoint that turns PBIP from organized chaos into something manageable: Pull Request approvals.

Picture five developers all tweaking the same report. One edits DAX, another renames a column, one changes formatting, and two more mess around with colors. If those edits all land unchecked, you don’t get collaboration—you get a Franken-report. That’s why PRs exist. They’re the gatekeeper that slows things just enough to prevent the equivalent of five people merging into the same intersection at once.

Think of PRs as GitHub traffic lights. A green light means merge and move forward. A red light holds you back until reviewers confirm your work won’t smash into something else. Nothing fancy, just structured pauses so you aren’t tearing down each other’s changes by accident.

And yes, approvals do make people nervous. “Won’t this slow us down?” Only if you misuse them. You don’t need senior managers rubber-stamping a color change to a pie chart. But you absolutely want more than one set of eyes when someone restructures a dataset schema or rewires relationships in a model. So the practical approach is this: map review strictness to impact. A single quick approval is fine for cosmetic or text tweaks. Multi-reviewer gates are required when altering measures, adding calculated columns, or adjusting relationships. That way, day-to-day work stays quick, but the high-stakes stuff doesn’t sneak into production unchecked.
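
One lightweight way to encode that on GitHub is a CODEOWNERS file plus a branch protection rule that requires review from code owners. A sketch, with hypothetical team names (in CODEOWNERS the last matching pattern wins, so the default rule comes first):

```
# .github/CODEOWNERS
# Default: any change needs one approval from the dev team
*                     @yourorg/bi-developers

# High-impact semantic model files escalate to the modeling owners
*.SemanticModel/**    @yourorg/bi-model-owners
model.bim             @yourorg/bi-model-owners
```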

This isn’t just me being cautious. Microsoft’s own branch and workspace guidance treats PR approvals as a control point. They’re the moment when you officially decide a change becomes part of the shared version. Without this step, you’re basically letting people hot-patch whatever they like into the core model. That’s not governance—that’s an incident waiting to happen.

And here’s a tactical win people forget: PRs automatically produce an audit trail. Every commit, comment, and approval gets logged. So the next time someone asks why the revenue report broke last week, you don’t have to sift through local files like a detective. You just check the PR history—who approved it, what was discussed, and which changes were included. That trail is worth its weight in overtime hours saved.

Bottom line: PR approvals aren’t about bureaucracy. They’re about balance. Too loose and you invite chaos. Too strict and you’ll stall over trivial edits. The sweet spot is rules that scale—light reviews for low-impact edits, gated approvals for heavy refactors. That keeps the repo secure without burning everyone out.

That said, PRs only catch what reviewers actually pay attention to. Humans get tired, skim over changes, or hit “approve” just to clear the queue. Which means a lot of preventable issues can still slip through. If you’ve ever merged sloppy DAX or a broken relationship because no one caught it during review, you know the pain.

This is where the next shift happens: automating the obvious. Instead of relying strictly on human reviewers to notice formatting gaps, weak naming, or missing descriptions, you let automation scan every PR before approval even hits the table. Humans focus on context. Machines pick up the predictable mistakes.

So the next logical step is simple. PRs bring order. Automation makes that order smarter. And that combination is what keeps your main branch from turning into a landfill of small but costly errors.

Automated Checks: Your Silent Review Team

So here’s the next piece: automated checks, your silent review team. These are the guard dogs that don’t sleep, don’t skim pull requests, and aren’t distracted by Slack. They sit at the repo door and bark whenever something sloppy tries to sneak through.

The mechanics behind it are simple enough: GitHub Actions. Every time you push a commit or open a pull request, Actions spin up and run the scripts you tell them to. Think of them as bouncers with clipboards. New files show up, and instead of waving them into production by default, Actions run a battery of checks—scripts in PowerShell, validation tools, maybe a couple of Node tasks—and only then give a green light. You don’t hit play yourself; they fire automatically on events you define.

Now, the catch: you don’t start by throwing every possible validator into the mix. Do that, and you’ll turn your repo into airport security. Developers will spend more time taking off their shoes than writing code. The smarter move is to start small. Pick the checks that return the biggest value for the least friction—naming conventions, glaring DAX anti-patterns, obvious schema slips like missing relationships, and maybe some linting on JSON or model files. That way, your developers still feel fast, but you’ve bought yourself some safety rails.

Let’s put some tools on the table. Model.bim is a text file, which means static analysis tools can crawl through it. Tabular Editor can run command-line scripts to validate relationships, naming rules, or calculation groups. PowerShell scripts from the MicrosoftPowerBIMgmt module can query datasets or validate if a workspace is actually alive before you dump changes on it. Combine that with the Power BI REST APIs, and you can even automate smoke tests for updated reports. A validation script can hit those APIs, check metadata or data source bindings, and kick back errors. None of this is hypothetical—it’s what makes PBIP worth adopting in the first place.
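
To make that concrete, here's a minimal sketch of a pre-flight check with MicrosoftPowerBIMgmt, assuming a service principal and hypothetical workspace and dataset names:

```powershell
# Requires: Install-Module MicrosoftPowerBIMgmt
$secret = ConvertTo-SecureString $env:PBI_CLIENT_SECRET -AsPlainText -Force
$cred   = New-Object System.Management.Automation.PSCredential($env:PBI_CLIENT_ID, $secret)

# Sign in as a service principal
Connect-PowerBIServiceAccount -ServicePrincipal -Credential $cred -TenantId $env:PBI_TENANT_ID

# Fail fast if the target workspace isn't there before we push anything at it
$ws = Get-PowerBIWorkspace -Name "Sales-TEST"   # hypothetical workspace name
if (-not $ws) { throw "Workspace 'Sales-TEST' not found - aborting." }

# Smoke test: list datasets via the REST API and confirm the one we expect exists
$datasets = (Invoke-PowerBIRestMethod -Url "groups/$($ws.Id)/datasets" -Method Get | ConvertFrom-Json).value
if ($datasets.name -notcontains "SalesModel") { throw "Expected dataset 'SalesModel' missing." }
```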

A great pattern teams use: set up a GitHub Action on pull request “open” or “update.” That workflow runs a small suite—maybe a PowerShell script to test dataset names, a Node script to catch bad DAX snippets, and a Tabular Editor command-line run to ensure the model doesn’t break basic best practices. If something fails, the Action pushes comments right into the pull request conversation and marks the build red. No merge, no click-through. The developer fixes it before the branch even gets considered.
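
Here's a sketch of that workflow shape. The script paths, model path, and rules file are placeholders, and it assumes Tabular Editor 2's CLI is already available on the runner:

```yaml
# .github/workflows/pr-checks.yml
name: PBIP validation
on:
  pull_request:
    types: [opened, synchronize]   # fire on PR open and every new push

jobs:
  validate:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v4

      # Cheap hygiene checks first: naming rules, obvious schema slips
      - name: Validate names and schema
        shell: pwsh
        run: ./scripts/Test-Naming.ps1 -ModelPath Sales.SemanticModel/model.bim

      # Tabular Editor 2 CLI: -A runs Best Practice Analyzer rules against the model
      - name: Best Practice Analyzer
        shell: pwsh
        run: TabularEditor.exe Sales.SemanticModel/model.bim -A BPARules.json
```

If any step exits non-zero, the check goes red, and with branch protection requiring status checks, the merge button stays locked until it's fixed.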

Here’s why that matters: text diffs don’t warn you about performance problems. You might see a renamed measure or column and think, “Looks fine.” But a validator can catch the fact that someone flipped a relationship to bi-directional filtering, or changed a filter in a way that turns queries into sludge. One team I worked with had a case like this—a dataset rebuild looked innocent in Git, but a validator flagged a broken relationship before it hit production. That single automated fail saved them hours of firefighting and a couple of angry calls from business users.

Think of automation as the grunt-work filter. Humans hate checking casing rules or scanning for “Sheet1” table names. A script can handle it in milliseconds. The humans still weigh in—but their reviews go to strategic fit, business logic, and design quality instead of whether you capitalized “CustomerID” consistently. Division of labor, pure and simple.
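
As a taste of how cheap that grunt work is, here's a minimal sketch that reads model.bim as JSON and flags leftover default table names (the model path is a placeholder):

```powershell
# model.bim is plain JSON (TMSL), so a naming check is a few lines of PowerShell
$model = (Get-Content "Sales.SemanticModel/model.bim" -Raw | ConvertFrom-Json).model

# Flag tables still carrying defaults like "Sheet1" or "Table3"
$offenders = $model.tables.name | Where-Object { $_ -match '^(Sheet|Table)\d*$' }
if ($offenders) {
    Write-Error "Default table names found: $($offenders -join ', ')"
    exit 1   # non-zero exit fails the CI step
}
```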

Microsoft’s docs basically nudge you in this direction. GitHub Actions integrate with Power BI services and even Fabric’s APIs, making validation part of your CI/CD story. That’s not overkill; it’s the obvious way forward. Automate repetitive hygiene so developers and reviewers aren’t wasting energy on boring consistency checks.

Bottom line: treat Actions like your bot coworkers. Start with a handful of checks that deliver the most value, then expand as your team matures. Automate model-level sanity checks, schema validation, and naming rules. Leave reviewers free to spend time on design and strategy, not proofreading field names.

And once those checks are in place, the question naturally becomes: if Actions can block bad changes from entering, why not also let them carry good changes forward? That’s where pipelines come in—not just blockades, but automated ways to push approved builds from development through testing and into production without the ritual of manual button-clicks. And that shift changes deployments from superstition to something you can actually trust.

Deployment Pipelines: Reliability Without Prayer

Deployment pipelines are where the whole setup graduates from “nice experiment” to something you can actually run in production without chewing your fingernails down. This is the part that takes your carefully checked code and moves it between environments in an orderly way—DEV, TEST, and finally PROD—without relying on lucky mouse clicks.

Manual publishing in Power BI is the Wild West. Export a PBIX, import it somewhere else, pray you didn’t pick the wrong workspace, and cross your fingers that dataset IDs still line up. It’s like carrying an armload of groceries without a bag. Maybe you make it to the car, maybe the eggs redecorate your driveway. That’s why deployment pipelines exist, and more specifically, the Deployment Pipelines REST APIs. They were built to remove that constant element of chance.

These APIs aren’t just “next stage” buttons. They support several key scenarios that every BI lead should know: *Deploy All* sends an entire workspace forward to the next stage, *Selective Deploy* lets you pick individual items like a report or dataset, and *Backward Deploy* (with `isBackwardDeployment=true`) can promote content back into a previous stage if it’s missing there. On top of that, APIs can also *Update the App* associated with a stage, so when you promote content into TEST or PROD, the linked app can update automatically and users see the refreshed reports right away. That’s real control—without shuffling files around like a desperate intern.
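
As a sketch, a Deploy All call from a signed-in MicrosoftPowerBIMgmt session (as in the earlier snippet) looks roughly like this; the pipeline ID is a placeholder, and the Deployment Pipelines REST API reference has the full request schema:

```powershell
# Promote everything from DEV (stage 0) forward to the next stage
$body = @{
    sourceStageOrder = 0                      # 0 = Development, 1 = Test
    options = @{
        allowCreateArtifact    = $true        # create items missing in the target stage
        allowOverwriteArtifact = $true        # overwrite items that already exist there
    }
    # the API also accepts isBackwardDeployment = $true for a backward deploy
} | ConvertTo-Json -Depth 5

Invoke-PowerBIRestMethod -Method Post `
    -Url "pipelines/11111111-2222-3333-4444-555555555555/deployAll" `
    -Body $body
```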

Now, here’s the blunt truth: before any of this works, you need the boring but crucial prerequisites. The caller—whether a human account or a service principal—has to be in Microsoft Entra ID with the right permissions on both the pipeline and the target workspaces. Skip that, and nothing moves. And when you lean on service principals—which you absolutely should for automation—you hit some limits. Service principals cannot configure OAuth for data sources, so certain datasets won’t refresh until a human fixes credentials. Also, a service principal becomes the owner of semantic models and paginated reports it deploys, which can block scheduled refreshes. And forget about dataflows: service principals can’t deploy those at all. One more catch—each deployment call caps at 300 items. If your workspace has exploded into a junkyard of visuals, that ceiling will come for you. These are not “nice to knows”; they’re the potholes you need to acknowledge up front.

So, how do you wire this into GitHub Actions? The smart, battle-tested method is to stash your Azure service principal credentials—client ID and secret—in GitHub repository secrets. That keeps them out of the code and away from accidental commits. Then you map repo folders to pipeline IDs with a YAML configuration file. Each folder aligns to a specific pipeline, and the GitHub Action reads that map to push content to the right place. This isn’t me making something up—this pattern shows up in community actions like *Power BI Pipeline Deploy* on the GitHub Marketplace. It works well, but you should know these marketplace actions are third-party, not “Microsoft certified.” Translation: they’re useful, but you own the risk. If you need maximum security and support, roll your own scripts against the APIs.
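
The mapping file itself can be as simple as this sketch (file name, keys, and IDs are illustrative; the exact schema depends on whichever action or script reads it):

```yaml
# pipelines.yml -- maps repo folders to Power BI deployment pipeline IDs
pipelines:
  - folder: sales-reports
    pipelineId: 11111111-2222-3333-4444-555555555555
  - folder: finance-reports
    pipelineId: 66666666-7777-8888-9999-000000000000
```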

Okay, so what does this look like on the ground? You close a pull request, GitHub Actions trigger, call the Deployment Pipelines REST API, and move your content from DEV to TEST. If all your checks pass, the same workflow can promote on to PROD. The cycle is repeatable and auditable: you know exactly what went where, when, and by whose approval. That replaces improvised nightly hope sessions with structure. And if your boss asks “How do we know nothing slipped in untested?” you actually have an answer beyond “trust me.”
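
In workflow terms, that hand-off can look like this sketch, where Deploy-Pipeline.ps1 is a hypothetical wrapper around the REST call shown earlier:

```yaml
# .github/workflows/deploy.yml -- promote DEV -> TEST when a PR merges to main
name: Deploy on merge
on:
  pull_request:
    types: [closed]
    branches: [main]

jobs:
  deploy:
    if: github.event.pull_request.merged == true   # merged, not just closed
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy DEV to TEST
        shell: pwsh
        env:
          PBI_CLIENT_ID:     ${{ secrets.PBI_CLIENT_ID }}
          PBI_CLIENT_SECRET: ${{ secrets.PBI_CLIENT_SECRET }}
          PBI_TENANT_ID:     ${{ secrets.PBI_TENANT_ID }}
        run: ./scripts/Deploy-Pipeline.ps1 -SourceStage 0
```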

Does this mean automation is perfect? No. You’ll still run into cases where manual credential fixes are unavoidable, or where the limits on items and dataflows force you to rethink your structure. But the difference is you’re dealing with occasional, known exceptions, not perpetual chaos. Nine times out of ten, the deploy runs lights-out, and you stop burning weekends chasing missing relationships in PROD.

Bottom line: deployment pipelines plus Actions give you reliability without prayer. Instead of clicking and whispering to the reporting gods, you’ve got a process that carries reports through environments with consistency. You get an audit trail, you can roll back if needed, and you can sleep at night knowing nobody fat-fingered a workspace name.

And once you’ve got deployments running this smoothly, the real challenge shifts. It’s no longer “Can we promote safely?” but “How do we layer governance in without turning the whole setup into red tape?” That balance—control without bureaucracy—is what we’ll tackle next.

Governance Without Bureaucracy

Nobody signed up for CI/CD just to feel like they were renewing a car license at the DMV. The goal is speed with safety, not drowning in process charts and paperwork approvals. Traditional governance often forgets that. It tries to improve quality by smothering developers in forms and tickets, and the end result isn’t safer code—it’s frustrated devs sprinting around the guardrails. Governance that actually works is governance that feels like guardrails on a highway: they don’t slow you down, they just make sure your car doesn’t leave the road at 80 miles an hour.

And the practical difference comes from how you set those guardrails. Not every repo needs the same rules. That’s the first tactical fix. For cosmetic changes in a report, you don’t need a council of elders—GitHub can be set to a simple “one thumbs-up and merge it.” But for high‑impact model updates that recalc half your business logic? Tighten the gates with multiple reviewers and make them answer to a PR template that lists exactly what to check. Lightweight where it can be, strict where it must be. Simple math.

This is where PR templates actually earn their keep. A good template calls out business logic, schema dependencies, testing notes—so the reviewer doesn’t have to guess what’s important. Layer that with automated checks chasing the boring stuff, and suddenly your humans are elevated back to human work. Bots chase casing rules, you check if the sales metric still matches finance’s definition. It’s division of labor that keeps pace high without sacrificing quality.
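
A minimal sketch of such a template; the checklist items are examples to adapt:

```markdown
<!-- .github/pull_request_template.md -->
## What changed and why
<!-- Business context, not just "updated model" -->

## Reviewer checklist
- [ ] Measures or columns renamed? List the downstream reports affected.
- [ ] Relationships or schema changed? Note dependent datasets.
- [ ] DAX changes validated against known-good numbers (which ones?)
- [ ] Definitions still match the business owner's (e.g., finance's revenue metric)
```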

Let’s add automation back into the picture because it’s the linchpin. Your automated checks—whether they’re PowerShell scripts using MicrosoftPowerBIMgmt, Tabular Editor validations, or targeted JSON linting—snag the things no human reviewer should care about. Consistent naming, obvious DAX errors, missing relationships: all flagged before the pull request even gets a green button. That means reviewers’ eyes go exactly where they should—on intent, design, and whether the business will actually trust that report.

And this isn’t just theory. Teams that live in chaos without governance often rediscover sanity by doing exactly this: PR templates, automated validators, and approval levels tuned to the repo. The shift is immediate. Work feels lighter because devs know they won’t get blocked for changing a title, but they can’t bypass scrutiny if they’re reshaping the dataset that drives executive dashboards. Quality stays high, but nobody feels like they’re filling out tax forms to push code.

Now, there’s also the bigger picture: how environments map to branches. Most teams land in one of two supported patterns. Option one: developers create feature workspaces that map directly to feature branches, so each experimental branch has a mirrored sandbox in Power BI Service. Option two: developers make their changes locally in PBIP and sync them to a feature branch without a parallel workspace, letting automation handle the deployment pipeline into shared environments. Both approaches line up with Microsoft’s Fabric CI/CD guidance. The trick is deciding which style matches your team’s appetite for overhead. Either way, it’s still branching plus workspaces; you’re just choosing whether to mirror both.

With those patterns in mind, governance gets even easier to scale. Lightweight approvals for content tweaks in a feature workspace. Heavier reviews and scripted checks for model changes that push into TEST or PROD. The structure is clear: the farther your change goes, the more gatekeeping it deserves. Nobody fights the rules when they actually make sense.

And don’t forget the cultural side. When developers see that bots handle the grunt checks and humans only weigh in where their judgment matters, they stop viewing governance as an obstacle. They respect it, because it filters out noise and keeps them from firefighting in production. Automation does the nagging; humans focus on strategy. That’s what keeps the balance working.

The outcome is a development culture where speed and safety finally align. Every change is traceable, every step through the pipeline is logged, and approvals feel streamlined instead of bureaucratic. Teams move fast and still sleep at night because nothing sneaks into production without a breadcrumb trail.

And while all of this drops the stress of endless approvals, it also shines a light on where the real danger lurks—not in red tape, but in the untraceable changes that slip past everyone and explode later.

Conclusion

So let’s land this with what really matters. Three takeaways you can act on right now: PBIP gives you clean, component-level diffs but it doesn’t solve workflow on its own. Pair PR approvals with automated checks so the obvious mistakes never make it past review. Then wire merges into deployment pipelines using REST APIs or GitHub Actions—just keep in mind the quirks with service principals and item limits.

Do those three things and you’ve got change management on cruise control for Power BI.

Subscribe to the podcast and drop a review—I spend hours each day making this for you, and your support would help me a lot.
