M365 Show with Mirko Peters - Microsoft 365 Digital Workplace Daily
Extending Microsoft Fabric with Custom APIs and Power BI Models


If you’ve ever hit a wall with Microsoft Fabric’s UI, wondering how everyone else is breaking through data silos and building seamless analytics—even when the out-of-the-box options fall short—you’re in the right place. Today, we’ll show you exactly where Fabric’s UI holds you back, and how custom APIs and Power BI models unlock a whole new set of solutions.

Where the UI Stops: Recognizing Fabric’s Real-World Roadblocks

It’s easy to scroll through Microsoft Fabric’s sleek UI and convince yourself you’ve landed in analytics paradise. There are connectors everywhere, visualization options line up like menu items, and those first dashboards come together almost too smoothly. Then reality shows up—usually in the form of one convoluted project. Imagine this: you’re part of a team that needs to blend sales numbers from Dynamics with supplier data locked away in an aging ERP. Everybody’s excited, plans get drafted, and then someone asks, “So how do we bring in these custom fields from the ERP? And can we automate refreshes, including those weird exceptions marketing cares about?” That’s when people start reaching for their coffee, because suddenly, the UI isn’t as endless as it seemed.

It hits harder with automation. Maybe you’re leading a product team and want daily insights—automatically—across inventory, support tickets, and ongoing campaigns. Sounds simple enough, but as soon as a custom API or an on-prem system enters the story, the UI can’t stomach the integration. Users find themselves jumping through hoops, setting up manual data dumps, or—my personal favorite—copying everything into Excel for the “real” calculations. The UI lets you get so far, and then you hit a wall that’s as invisible as it is solid.

A finance department offers a textbook example. One team I worked with was desperate to automate reconciliations between payments and invoices. Their dream was simple: trigger a workflow whenever third-party records hit specific thresholds and surface the discrepancies inside Fabric online reports, no human intervention needed. Problem is, the out-of-the-box options just won’t trigger actions from third-party systems. They ended up exporting everything once a week, manipulating data elsewhere, and re-uploading results—using half a dozen tools and wasting hours they swore they’d reclaim.

And it’s not just anecdotal. If you talk to IT leads, you hear the same complaints: “We’re stuck in data silos unless someone scripts a workaround,” or “Our users end up running shadow processes because the main system can’t talk to the one they care about.” According to a recent industry survey, nearly 70% of tech teams using Fabric admit to at least weekly instances where manual exports or external automations become the only way around the platform’s connector blind spots. That means most organizations aren’t just bumping into limitations—they’re living with them as a part of regular business.

You might expect the technical leads to just throw more dashboards at the problem. That’s not what happens. Kenneth, a Fabric power user and Microsoft MVP, told me, “Outgrowing the UI isn’t a sign you’ve botched your deployment—it usually means your organization is finally asking more ambitious, cross-functional questions. Fabric’s UI lets you build a lot, but real change starts when you run into its edges and need to extend the platform for your business.” He’s seen the UI carry organizations surprisingly far, but every serious analytics team he’s worked with eventually stumbles on the same set of issues: unsupported data sources, complicated business rules, and a need to automate things that can’t be done with point-and-click alone.

It all starts to feel like being handed a Swiss Army knife, only to discover that half the blades are glued shut the moment you try to use them. You get the basics—sure, you can cut and slice the simple stuff. But when you want to carve out something unique, you’re reaching for a tool that stubbornly refuses to open. It’s frustrating, and it raises the question a lot of us whisper in hallway conversations: “Is this it? Or is there another way in?”

There is, but first you have to recognize that running into these walls is normal—honestly, if your organization isn’t hitting them, it might mean you’re not pushing the platform hard enough yet. Outgrowing Fabric’s interface isn’t some embarrassing confession, it’s a signpost that you’ve matured your analytics thinking. It means the business sees value in data and is actually hungry for more—richer automation, tighter integrations, the ability to layer on complex calculations, or simply to stop using Excel as a bandage for what the platform can’t do.

So what does that signal look like? It usually starts small: someone’s asking for a connector that isn’t there, or they’re emailing exports around. Later it’s, “Can we automate this?” or “Can Fabric talk to our custom API?” The moment those questions cross your inbox, you know you’ve outgrown the point-and-click world. And that’s not a problem—it’s your analytics practice leveling up. Now, the important part is figuring out exactly when you need to shift from using what’s built-in to designing your own connections. That’s when custom APIs come off the shelf and start playing a starring role. And the clearest sign it’s time for that step? Well, let’s look at what really tells you when an API-driven approach is unavoidable.

When APIs Become Essential: Spotting the Signs and Selecting Use Cases

If you’ve ever tried to plug a niche data source into Microsoft Fabric, you know that moment when the list of built-in connectors stops just short of what you really need. The UI brags about out-of-the-box coverage, but somehow your legacy inventory system—or that homegrown app the business can’t live without—sits on the outside looking in. It feels like waiting at a bus stop, watching bus after bus roll by, and none are going where you want. The connectors get you ninety percent there, but then you’re left looking at rows of data you can’t pull in and processes you can’t stitch together.

A lot of teams run into this when they first try to automate regular reporting or need to merge operational data with the rest of the analytics pie. You’ve got execs who want week-over-week reporting, or product managers who need up-to-the-minute stats, but the system you’re pulling from wasn’t on Microsoft’s radar when they wrote the default integrations. Suddenly, that refresh schedule you counted on only updates data every night—or worse, whenever someone feels like running a manual extract and upload. There’s nothing quite like standing up a fancy dashboard, only to have someone ask, “Why is this number two days old?” and realizing the workflow breaks down where Fabric’s connectors can’t follow.

But it doesn’t just stop at reporting delays. Think about scenarios with a little more complexity: an operations lead gets handed a directive to display live sensor data from production lines, tracking machines and environmental factors in real-time. The IoT system uses a custom API—something built in-house, patched together with the manufacturer’s instructions, and absolutely not supported out-of-the-box in Fabric. The operations team can see what their dashboard could be, but every attempt to automate the flow hits a hard wall. That’s when you start seeing people paste JSON files into chat, pass CSVs over email, and set up “temporary” FTP drops that turn into year-long solutions nobody wants to maintain.

The frustration is real. The native tools look impressive lined up in the UI, and at first glance, you might bet on drag-and-drop covering every scenario. But there is always that one system—usually critical, always demanding—where no friendly wizard or pre-built dropdown appears. Ryan, an analytics architect I worked with, summed it up well: “Drag-and-drop is great until the business asks a question no connector was built to answer.” The line between what’s possible and what’s practical gets blurry, and before you know it, technical leads are spending more time with Python scripts and Power Automate flows than they ever planned.

You can usually spot when a team is ready—or overdue—to step up to custom APIs. The first giveaway is seeing repeated data exports, especially if they’re happening at odd hours or showing up as a flood in email inboxes. If someone is scheduling manual uploads just to keep a dashboard alive, or if the workflow depends on Bob remembering to download a file before his morning coffee, that’s a major red flag. Another sign is the quick spread of third-party automation hacks. When IT policies are stretched to make daily handoffs between systems work, there’s a good chance the security team is losing sleep. I’ve seen entire processes run from unsanctioned cloud storage or, worse, from someone’s personal OneDrive because it’s the only way to keep two systems in sync.

Maybe the most telling indicator is the question of data security and compliance. Manual processes, by their nature, are hard to track and even harder to audit. Every time you export sensitive data and juggle it outside of Fabric, you introduce risk—accidental exposure, incomplete records, or failing to meet regulatory promises. At first, these look like operational headaches; over time, they can become a real liability, especially as audits get more stringent.

So where do custom APIs come in? They’re the bridge between what Fabric was designed to do and what your business actually needs. Instead of wrestling with workaround after workaround, APIs let you automate exactly the data pulls, transformations, and syncs that matter most. You build a direct pipeline between your core platforms and Fabric, wiring together live or near-real-time streams that would otherwise never co-exist. This isn’t just about convenience—it means dashboards that are fed by the actual systems of record, not a week-old copy living somewhere in a spreadsheet.
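
As a concrete sketch of what that bridge can look like: the snippet below pushes rows pulled from some custom source into a Power BI push dataset through the REST API. The dataset ID and table name are placeholders, and a push dataset is only one of several integration patterns, but it illustrates the key mechanical detail—the push API caps each POST at 10,000 rows, so the pipeline has to chunk.

```python
import json
import urllib.request

POWER_BI_API = "https://api.powerbi.com/v1.0/myorg"
MAX_ROWS_PER_REQUEST = 10_000  # Power BI push-dataset limit per POST


def chunk_rows(rows, size=MAX_ROWS_PER_REQUEST):
    """Split a row list into chunks the push API will accept."""
    return [rows[i:i + size] for i in range(0, len(rows), size)]


def push_rows(token, dataset_id, table_name, rows):
    """POST rows into a Power BI push dataset, one chunk at a time.

    dataset_id and table_name are placeholders for your own workspace
    objects; token is an Azure AD access token for the Power BI API.
    """
    url = f"{POWER_BI_API}/datasets/{dataset_id}/tables/{table_name}/rows"
    for chunk in chunk_rows(rows):
        req = urllib.request.Request(
            url,
            data=json.dumps({"rows": chunk}).encode("utf-8"),
            headers={
                "Authorization": f"Bearer {token}",
                "Content-Type": "application/json",
            },
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            resp.read()  # 200 OK means the chunk landed
```

The same shape works whether the upstream source is a legacy POS, an in-house IoT endpoint, or anything else you can reach over HTTP; the pipeline code stays boring on purpose.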

There’s a powerful example in the retail sector. A mid-sized chain needed to tie in real-time shop floor activity from their older POS alongside e-commerce orders. The systems shared almost nothing by default, but their tech team built a custom API that piped both data sources through Fabric, merging transactions and inventory in ways sales teams had only imagined. The result wasn’t just fresher dashboards—it unlocked cross-channel promotional analytics, proactive inventory stocking, and better customer segmentation. What looked like an integration headache turned into an advantage over competitors stuck in their own silos.

Recognizing when APIs move from “nice-to-have” to “must-have” comes down to those patterns: workflows stalling for lack of a connector, users inventing risky automations, or execs demanding what the UI flat-out can’t deliver. When those become your daily reality, skipping APIs isn’t a shortcut—it’s giving up ground the business needs to cover.

With that in mind, it’s one thing to know you need a custom connection, but a whole different challenge to actually get it up and running securely. A lot of API projects falter right out of the gate because the technical foundation isn’t set up for the realities of authentication, permissions, and monitoring. So let’s dig into what solid API development looks like—from tooling to security—before you write a single line of code.

From Concept to Code: Setting Up a Secure Fabric API Environment

If you’ve ever tried to get a Fabric API project off the ground, you know all the “quick start” guides tend to read like a highlight reel. They leave out the messy bits—the permissions requests that vanish in some manager’s inbox, the dev tools nobody knows if they actually need, and the parade of security settings that somehow still end up wide open. You don’t hear about the hours spent tracking down which account owns a workspace or why your access token keeps expiring mid-sync. The reality is, most teams stall out on setup before a single custom metric appears anywhere near a dashboard. That’s because getting Fabric-ready is less about code and more about negotiating the right access, knowing which Azure lever to pull, and having a way to backtrack when things inevitably start failing.

It usually starts with permissions. Someone tries to register an app or a service principal, and suddenly HR is involved because nobody can remember who granted admin rights to that group last quarter. Or, you request access to a sensitive data lake, the process lags, and the initial project energy fizzles. Even basic read permissions on a premium workspace can spiral into days of clarification emails. Every Fabric API integration needs this clear line through your organization’s permissions maze. Otherwise, you’re left with partial access or, worse, stumbling into datasets you shouldn’t have seen at all.

Once you’re through the permissions gauntlet, tools become the next hurdle. Microsoft loves to offer several ways to do the same thing, so you end up staring at the Power BI REST API docs, Dataverse connectors, and SDK options for Python and .NET, all claiming to be “simple” for developers. The temptation is to install everything—including the CLI, whichever Visual Studio flavor is on hand, and a set of half-documented plug-ins someone suggested in a forum. But seasoned teams pare it back. For most Fabric API projects, the core stack usually includes Azure CLI for resource management, the Microsoft Authentication Library (MSAL) for handling tokens, plus whichever SDK matches your team’s language of choice. Power BI teams lean on the Power BI .NET SDK or the REST API directly. The fewer moving parts at the start, the less likely you are to spend sprint after sprint debugging environment issues instead of building features.
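
To make the token piece concrete, here is a minimal sketch of the client-credentials exchange that MSAL performs for you under the hood—in real projects you would let MSAL's `ConfidentialClientApplication` handle caching and retries rather than hand-rolling the request. The tenant ID, client ID, and secret are placeholders.

```python
import json
import urllib.parse
import urllib.request


def build_token_request(tenant_id, client_id, client_secret):
    """Build the OAuth2 client-credentials request to the Microsoft
    identity platform v2.0 token endpoint."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # ".default" requests whatever app permissions were granted in Azure
        "scope": "https://analysis.windows.net/powerbi/api/.default",
    })
    return url, body.encode("utf-8")


def fetch_token(url, body):
    """Exchange credentials for a bearer token usable against the Power BI API."""
    req = urllib.request.Request(url, data=body, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["access_token"]
```

With MSAL installed, the equivalent is `msal.ConfidentialClientApplication(client_id, authority=f"https://login.microsoftonline.com/{tenant_id}", client_credential=secret).acquire_token_for_client(scopes=["https://analysis.windows.net/powerbi/api/.default"])`, which also caches tokens for you.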

And then there’s authentication—the part almost everyone underestimates. Picking between Azure Active Directory, service principals, or managed identities isn’t just about preference. Each approach sets the guardrails for access and exposes your project to different risks. For instance, using a service principal offers automation, but requires careful scoping so that a single secret leak doesn’t hand over the keys to all your workspaces. You might think managed identities are just for VMs, but they’re showing up more in Fabric for locking down automated workloads without hoarding new secrets in a vault somewhere.

A classic trap looks something like this: a data automation team wires up access using a shared app registration and sets secrets to never expire. The project looks unstoppable—for about two weeks. Then, the token expires mid-refresh and half your scheduled processes fail silently. Or you find out you’re hitting the API’s throttling limits because nothing in the pipeline batches requests or backs off on retries. It’s the modern equivalent of leaving a backdoor propped open: things are easier until audit season rolls around and someone asks, “Who just downloaded every sales record for fiscal year 2021?” That’s the moment you realize security isn’t just about passwords or firewall rules. It’s deciding from day one who can do what, how much they can do, and whether anyone will notice if something goes sideways.
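
Two small habits prevent most of those silent failures: refresh tokens before they lapse, and back off when the API throttles you. A sketch of both, with the five-minute refresh margin as an assumed, tunable value:

```python
import random


def should_refresh(expires_on, now, margin_seconds=300):
    """Refresh the token a few minutes before it actually expires,
    so a long-running sync never dies mid-request.

    expires_on and now are epoch seconds; 300s margin is an assumption.
    """
    return now >= expires_on - margin_seconds


def backoff_delay(attempt, retry_after=None, base=2.0, cap=60.0):
    """Pick a delay before retrying a throttled call.

    If the server sent a Retry-After header (typical on HTTP 429),
    honor it; otherwise use capped exponential backoff with jitter
    so parallel workers don't all retry at the same instant.
    """
    if retry_after is not None:
        return float(retry_after)
    return min(cap, base ** attempt) * (0.5 + random.random() / 2)
```

Wiring these two checks into every scheduled pull is cheap; retrofitting them after an audit finding is not.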

Talking with Fabric architects, you hear the same advice over and over: keep access focused. Grant the smallest set of permissions necessary for your integration to run, and log every access request somewhere visible. A lot of projects start with broad permissions—just to “get working”—and never rein them in later. It’s an easy way to add technical debt with a side of compliance risk. Audit trails are another frequent blind spot. A Fabric developer told me, “We spent months fixing a broken process that only existed because nobody ever checked who was creating and deleting datasets.” Establish logging early so you know not only what happened, but who did it and when.
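
A minimal version of that audit trail can be as simple as a decorator that records who performed which operation and when—`delete_dataset` below is a placeholder standing in for the real REST call:

```python
import json
import logging
from datetime import datetime, timezone
from functools import wraps

audit = logging.getLogger("fabric.audit")


def audited(action):
    """Wrap a dataset operation so every call leaves a who/what/when record,
    whether it succeeds or raises."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(principal, *args, **kwargs):
            entry = {
                "action": action,
                "principal": principal,
                "at": datetime.now(timezone.utc).isoformat(),
            }
            try:
                result = fn(principal, *args, **kwargs)
                entry["status"] = "ok"
                return result
            except Exception:
                entry["status"] = "error"
                raise
            finally:
                audit.info(json.dumps(entry))
        return wrapper
    return decorator


@audited("dataset.delete")
def delete_dataset(principal, dataset_id):
    # Placeholder for the real Power BI / Fabric REST call.
    return dataset_id
```

Shipping these log lines to whatever your organization already uses for log retention means the answer to “who deleted that dataset?” is a query, not an archaeology project.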

The choice of authentication method might sound trivial, but consider how often projects crumble from secrets left in code, credentials shared over chat, or, worse, hardcoded service accounts that outlive their creators. Azure AD with incremental consent is the recommended pattern, especially for larger teams or integrations that touch sensitive data. Managed identities bring even tighter control, but you need buy-in from IT and a willingness to run processes under their own identity, not just a shadow admin account. Build your initial scaffolding like it’s a permanent foundation because, in the real world, every integration you deliver becomes someone else’s production lifeline.

Think of it like the difference between building on a rocky beach versus poured concrete. Fast projects start out easy, but you’ll see pieces start to slip as soon as you run into API rate limits, multi-region compliance rules, or have to rotate secrets. The few extra hours spent wiring up well-scoped permissions and robust logging will save your team days—or weeks—down the line every single time. The goal isn’t just to get data flowing, but to build something that doesn’t need an overhaul the first time your org changes security policy or upgrades an Azure tenant.

So, before you even start writing custom API calls, make a checklist: confirm ownership of the source data and workspaces, get the right permissions with minimal scope, pick a secure authentication method, set up the tools your devs actually understand, and start logging everything from the get-go. That’s a hands-on recipe for surviving that crucial first phase of Fabric API development. Once this groundwork is laid, you finally get to the exciting part—actually embedding custom analytics into business workflows and seeing users interact with live, integrated insights. What’s that look like in action? Let’s break it down.

Power BI Models Unleashed: Embedding Custom Analytics and Real-Time Insights

We’ve all heard the pitch—connect new data, feed it into Power BI, and suddenly the business transforms overnight. The truth is a little less cinematic. Getting an API talking to Fabric is just the starting point. The real value comes when a team figures out how to wrangle all that live data into models that actually make products smarter, sales lifecycles shorter, and support teams a step ahead instead of one behind. The setup might sound familiar: the business wants to blend sources that don’t naturally play together—like logistics events from a vendor API and live revenue data flowing into Fabric. Everyone nods at the UI dashboards, but then the questions start piling up. Can the reports update instantly, or are we stuck with overnight refreshes? What about those metrics that only exist because of a three-step allocation rule no built-in measure understands? The dream is a dashboard that tells you what just happened, not what shipped out yesterday.

It’s the limitation you don’t always see coming. Most people hit it with calculations that don’t cleanly fit into the native data model, or when they want an experience embedded somewhere users actually work—think Teams chats, CRM pages, or even a row in an internal app. The standard Power BI layer attached to Fabric can aggregate, filter, and display. But when you reach for custom business logic—maybe twelve different data sources, or a calculation chained across systems—not even the fanciest slicer saves you. The best you can do with default tools is a weekly report or a static view that feels outdated by the time it hits someone’s inbox.

Now, here’s where the approach changes for teams that want analytics to be an engine, not just a rear-view mirror. Advanced teams start by layering Power BI models right on top of their integrated Fabric data. This isn’t just about adding visuals. By building out semantic models, you control how metrics are calculated, refreshed, and secured. It’s what lets you weave in custom business rules, keep calculations in sync as data arrives, and even override built-in columns that aren’t quite right. Instead of relying on the UI’s canned logic, these teams put their key analytics logic right into the model layer, mapped cleanly to both Fabric and any external APIs they’ve wired in. You can refresh models on your schedule—maybe every hour or, if the API supports it, nearly live. It’s as if all those disparate data streams become one curated view of reality, ready for business users.
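
Refreshing on your own schedule usually means calling the Power BI REST API’s refresh endpoint from whatever scheduler you already run. A minimal sketch, with the workspace and dataset IDs as placeholders:

```python
import json
import urllib.request

POWER_BI_API = "https://api.powerbi.com/v1.0/myorg"


def refresh_endpoint(workspace_id, dataset_id):
    """REST endpoint that queues (POST) or lists (GET) dataset refreshes."""
    return f"{POWER_BI_API}/groups/{workspace_id}/datasets/{dataset_id}/refreshes"


def trigger_refresh(token, workspace_id, dataset_id):
    """Queue an on-demand refresh of the semantic model.

    A 202 Accepted response means the refresh is queued; poll the same
    endpoint with GET to see its status and history.
    """
    req = urllib.request.Request(
        refresh_endpoint(workspace_id, dataset_id),
        data=json.dumps({"notifyOption": "NoNotification"}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Note that refresh frequency is still bounded by your capacity and licensing tier, so “nearly live” in practice means as often as your SKU allows.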

Take the story of a sales manager who’s tasked with tracking both e-commerce and brick-and-mortar sales in real time. The e-commerce platform sits in Fabric, but the in-store data comes from an API built by a small dev team nobody’s ever met. Out of the box, there’s no way to merge, filter, and chart both streams together—at least not quickly or securely. But with a purpose-built Power BI semantic model, they connect the bespoke API directly, design measures that match nuanced business targets, and embed the working dashboard right into a Teams channel. Now, when someone asks mid-afternoon for the current sales leader or wants to filter by a high-value customer segment, they see results from both systems—blended live and ready for action.

Here’s how it usually rolls out in practice. First, developers register their custom API in Azure, wrapping it with the right permissions and security controls we covered earlier. Next, they use Power Query or a similar ETL tool in Fabric to connect and transform the incoming data, cleaning and shaping it as it arrives. From there, they build a semantic model in Power BI Desktop—adding the data sources, defining complex measures, and linking everything with clear relationships. Once the model reflects the realities of the business, publishing to the Power BI Service makes it accessible. Embedding comes last: using Teams, SharePoint, or even a line-of-business app, the team drops dashboards and reports wherever users are already working. Now, the analytics aren’t an extra stop—they’re baked into daily workflows.
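
The embedding step in that walkthrough typically rides on an embed token generated through the REST API’s GenerateToken call. A sketch, with the workspace ID, report ID, and access token as placeholders:

```python
import json
import urllib.request

POWER_BI_API = "https://api.powerbi.com/v1.0/myorg"


def embed_token_request(workspace_id, report_id, access_token):
    """Build the GenerateToken call used to embed a report in Teams,
    SharePoint, or a custom line-of-business app."""
    url = (f"{POWER_BI_API}/groups/{workspace_id}"
           f"/reports/{report_id}/GenerateToken")
    payload = {"accessLevel": "View"}  # read-only embed for business users
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


def fetch_embed_token(req):
    """Execute the request; the response body carries the short-lived
    embed token the client-side Power BI SDK consumes."""
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["token"]
```

The returned token is short-lived by design, which is exactly the governance property you want: the embedded dashboard works where users work, but the raw data never leaves the managed layer.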

A benefit that frequently surprises organizations isn’t just flexibility, but governance. When you centralize calculations and business rules into the Power BI semantic model—rather than scattering logic across spreadsheets or hidden behind UI widgets—you get a single source of truth. Adjust a metric definition in one place, and every dashboard, report, and embedded experience inherits that change. Security is easier, too: you lock down sensitive columns, control who can view which measure, and audit access with precision. You don’t have to worry about someone copying a sensitive dataset into unknown hands, because the only route to the raw numbers is through managed, secured layers.

The day-to-day impact starts to shift as well. Reporting becomes more proactive; alerts can trigger as soon as a KPI slips below target instead of waiting for a stale weekly rundown. Predictive models can use historical and API-fed live data in the same workspace, feeding automated recommendations into sales or support. Teams get away from “what happened” and start working with “what’s likely next,” moving from rear-view analytics to a true decision support engine. Over time, business users stop asking when the data will be ready—it’s just there, alive, refreshed, and responsive to their needs.

In other words, this is how you take Fabric from just another data storage solution to a real analytics hub—tailored to your organization, managed by your team, and capable of powering business growth without a patchwork of spreadsheets and stopgaps holding it together. The big question that comes up next: what does this all mean for the wider organization that’s finally ready to leave the basics behind? Because there’s a bigger lesson here for anyone considering what to build next.

Conclusion

If you’ve spent enough time with Fabric’s UI, you know where it ends and custom needs begin. True progress comes when you stop working around the limitations and start addressing what actually matters for your business. APIs and Power BI models aren’t just optional add-ons—they’re the real growth drivers that help teams ask and answer those tougher questions. Building beyond the UI isn’t out of reach; it’s often the next logical step if you want systems that flex with your goals. Let us know about your integration struggles, and keep an eye out for more practical walkthroughs in future episodes.
