
Using Dynamics 365 Finance Data in Fabric for Financial Forecasting

Ever wonder why your rolling forecast in D365 F&O never matches reality? If your finance data lives in silos and your dashboards pull from five different places, you're not alone. Let’s break down exactly how Microsoft Fabric unifies those scattered modules—GL, sub-ledgers, budgets—so your next forecast finally lines up with what really happens.

Stay tuned to see how connecting the dots reveals insights you’ve been missing and makes variance reports practically build themselves.

Why Your D365 F&O Data Feels Disjointed—And Why That Matters

If you’ve ever tried to explain a variance report from D365 F&O, you know that uneasy silence when you realize the numbers you’re quoting all trace back to a different source file. One line item was a last-minute export from the general ledger; another came from a sub-ledger dump downloaded three days earlier; and the budget numbers? Those lived in a spreadsheet that’s bounced between three inboxes since Monday. There’s that moment you say, “Let me get back to you on that,” and hope you can make all these numbers add up by tomorrow’s meeting. This is the story for most finance teams living with Dynamics 365 Finance & Operations day after day. The tools promise a single ERP, but the reality is more like a collection of narrow silos that only truly fit together in PowerPoint.

Let’s talk about why these islands exist. In D365 F&O, the general ledger keeps a record of every journal entry—your overall financial story, but only at the highest level. Sub-ledgers log all the gritty transaction details—accounts payable, fixed assets, purchasing, sales. Good luck getting your GL trial balance to match the line-by-line details from the AP or AR sub-ledgers, especially if those modules close on separate schedules. Budgets, meanwhile, live in their own world, often managed as static files and uploaded just once each year. The system pulls reports from all these places, but each module uses different codes for cost centers, departments, or projects. Chart of accounts structures evolve, but not every module gets the same memo. A cost center code in the general ledger might not even exist in your asset register if someone forgot to update both places. In the end, we’re all running back and forth, trying to reconcile details from three different islands that insist on speaking their own dialects.

This patchwork creates a reporting nightmare. One month’s close cycle stretches out because the AP sub-ledger needs time to reconcile manual adjustments, and the general ledger team is still chasing missing postings. The budgets submitted by department heads last quarter now need “just a small tweak” before they match reality, which means firing up another round of copy-paste marathons in Excel. If you’re explaining variance numbers to your CFO, every slight mismatch raises eyebrows—“Why does actuals versus budget have a $22,000 gap here?”—even though you know that most of those gaps come from timing delays between modules, not true business performance. The pressure mounts when audit season rolls around and no one can agree which table is the actual source of truth. There’s nothing quite like realizing the numbers you signed off on came from last week’s backup, not the current system.

Outside the office, research confirms what you already feel. Gartner recently pointed out that organizations with fragmented financial data spend up to 50% more time on routine reporting than peers with unified data. That’s not an accounting quirk; it’s a direct cost in lost productivity. While D365 F&O’s sales pitch promises full integration, what you usually get is a set of modules engineered separately and merged along the way with a lot of manual patchwork. APIs exist, sure, but getting every piece to talk flawlessly—without middleware breaking after each update—requires more patience than most IT budgets allow. That’s why most of us quietly rely on exported spreadsheets and the legendary INDEX-MATCH formula to chase down the truth.

It’s not just theoretical. Picture this: it’s 10:37 p.m. You’re hunched over your laptop, juggling the latest GL download and a subtotal pasted from last month’s fixed asset summary. You notice the numbers don’t quite line up—a $4,760 difference that shouldn’t be there. So, you scroll through the AP ledger, wondering who posted that late invoice adjustment. Somewhere in that sea of decimals, reality got a little fuzzier. Maybe your budget owner will spot it, maybe not—but you know it’s one more weak point in the forecast you’re expected to explain tomorrow.

Teams run into these disconnects every month. Reporting cycles slow down. Forecasting accuracy drifts. Worst case, decision-makers miss a trend because half the data was stale by the time the variance report landed on their desk. Nobody enjoys living in the world of after-hours Excel edits, hiding cell formulas, and triple-checking pivot tables for errors that shouldn’t even exist. It’s like modern ERP software dressed up with retro manual processes under the hood.

Now, if you’re wondering why D365 F&O can’t unify everything out of the box: you’re not alone. Integration feels logical in demos, but real business data is messy. Chart of account codes change mid-year, departments get renamed, and project IDs don’t sync between modules unless you script specialized connectors. Most teams settle for “close enough” and hope the next release will really, finally, bring the dream of seamless reporting.

But what if you could actually see the whole picture, without living in Excel or hiring a small army to clean up last-mile exports? Here’s the reality: before you can forecast anything with confidence, you need to understand exactly how these modules interact—or don’t. That clarity is the first step to reports and predictions that real humans can trust. So, the obvious question: if the modules resist natively working together, can you make them collaborate somewhere else—maybe in Fabric? Let’s look at how these systems can finally start talking the same language, and what it really takes to pull together a unified pipeline.

Connecting the Dots: Building a Unified Data Pipeline with Fabric

Most finance teams have given up looking for a magic “sync all” button in D365 F&O. If you've ever seen your GL, AP, and project records living in their own corners, you're probably wondering if there’s any hope of turning that chaos into something actionable. That’s where Microsoft Fabric draws attention. It’s built for exactly this kind of problem—connecting raw data across business modules, pulling it into a single, unified model, and making reporting refresh itself instead of you racing toward another reporting deadline.

Now, skepticism here is totally reasonable. Everyone’s crossed their fingers and started a “pilot” data integration project, only for it to get abandoned once someone discovers three critical reports rely on an old export workflow. No one wants to break what mostly works, even if it means dealing with endless copy-paste sessions. The beauty of Fabric is that it doesn’t force a rip-and-replace approach. Instead, it sets up connections—think of them as stackable pipes—pulling only what you need from each module, on a schedule you decide, and feeding it into a workspace designed for analytics and forecasting.

Let’s talk through the nuts and bolts of what extracting D365 F&O data actually means, because this step shapes everything after it. If you’re looking to build rolling forecasts, variance reports, or Power BI dashboards, three main groups of data matter most: the General Ledger (GL) journals, sub-ledger transactions (AP, AR, fixed assets), and your approved budgets or forecasts. In D365 F&O, you’ll usually be working with tables like LedgerJournalTrans for journal-line detail, GeneralJournalEntry and GeneralJournalAccountEntry for posted GL movement, VendTrans and CustTrans for vendor and customer line items, and AssetTrans for tracking changes to fixed assets. Budget tables—often BudgetRegisterEntry—typically hold your planned figures, but they require careful mapping because budgeting structure can shift more than you’d expect. Pulling all this in isolation gives you fragments, but when combined through a proper pipeline, it’s the backbone of actually seeing where the money’s going.

Setting up an actual, automated pipeline isn’t just picking a data connector and hoping it’ll run every night. Fabric offers first-party connectors directly into D365 F&O, supporting direct connections over OData or via Synapse Link for even larger datasets, without the file download bottleneck. The process usually starts in Fabric Dataflows, where you authenticate with an organizational account that has table-level read permissions in D365. You then define each dataflow—pulling in the GL details, AP transactions, budgets—setting up schedules that match your reporting cadence. Data privacy is non-negotiable: you’ll want least-privilege access on every connection and API key, with Azure Active Directory handling most of the grunt work around authorization.
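To make that concrete, here is a rough sketch of what one of those extracts looks like as a raw OData call with service-principal authentication. In practice the Fabric connector handles this handshake for you, and every identifier below (tenant, app registration, environment URL, entity name) is a placeholder, not a value from any real environment.

```python
# Hedged sketch: pull one D365 F&O data entity over OData.
# All identifiers below are placeholders, not real values.
import msal
import requests

TENANT = "your-tenant-id"                          # your Azure AD tenant
ENV = "https://yourorg.operations.dynamics.com"    # your F&O environment URL
ENTITY = "LedgerJournalLines"                      # assumption: a public data entity exposed in your environment

app = msal.ConfidentialClientApplication(
    client_id="your-app-registration-id",
    authority=f"https://login.microsoftonline.com/{TENANT}",
    client_credential="your-client-secret",        # keep this in Key Vault, never in code
)
token = app.acquire_token_for_client(scopes=[f"{ENV}/.default"])

resp = requests.get(
    f"{ENV}/data/{ENTITY}",
    headers={"Authorization": f"Bearer {token['access_token']}"},
    params={"$top": 1000},                         # page through @odata.nextLink for the rest
    timeout=60,
)
resp.raise_for_status()
rows = resp.json()["value"]                        # list of entity records
print(f"Pulled {len(rows)} rows from {ENTITY}")
```

The same least-privilege rule applies here: the app registration behind this call should only be able to read the entities the pipeline actually needs.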

When you get to modeling, here’s where things come alive. Imagine pulling a complete chart of accounts out of D365, flattened and mapped to the rest of your organization’s dimensions—departments, business units, maybe even products. You’ll likely export a DimMainAccount table, but it doesn’t end there. Dimensions like business unit or project usually live in their own tables (think DimensionAttributeValueSet), so you need to join them, sometimes many-to-many, into one readable structure. The practical upshot: if your CFO wants to drill from a rolled-up P&L straight down to a particular project’s cost overrun, they can do it without asking you for “just one more export.”
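As a rough illustration, the flattening step might look like the following in a Fabric notebook with PySpark. The table names, the join key, and the dimension columns are assumptions about how your exports are shaped, not a fixed schema.

```python
# Hedged sketch: flatten the chart of accounts with its financial dimensions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # in a Fabric notebook, `spark` already exists

accounts = spark.read.table("DimMainAccount")            # exported chart of accounts
dims = spark.read.table("DimensionAttributeValueSet")    # dimension value combinations

# The join can be many-to-many, so deduplicate down to the grain
# your reports actually need.
flat = (
    accounts
    .join(dims, on="DimensionSetId", how="left")         # assumed join key
    .select("MainAccountId", "AccountName",
            "BusinessUnit", "Department", "Project")     # assumed dimension columns
    .dropDuplicates()
)
flat.write.mode("overwrite").saveAsTable("dim_account_flat")
```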

It isn’t all roses at the transformation stage. Every team eventually hits the “why doesn’t this account exist in both places?” wall. You might find missing dimensions in sub-ledger entries, because a business unit field was left blank in AP but required in GL. Account codes drift, especially after reorganizations. And budgets from different departments—don’t be surprised if one uses “BU-01” while the other spells out “BusinessUnit01.” Out-of-sync period definitions mean your January actuals don’t line up with the January budget, which triggers endless back-and-forth.

This is where transformation and cleansing are more than technical chores. You’ll need to standardize codes, sometimes building translation tables during ETL to bridge old and new naming conventions. Missing data calls for set rules—should a blank cost center get tagged as “Unassigned,” or should the record be flagged for review before it hits your forecasts? Dataflows in Fabric support these checks, letting you script out logic for translation, imputation, and validation as part of the pipeline. No step is too mundane: I’ve seen more reports derailed by a lazy trailing space in an account code than by any formula error.
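Those rules translate into fairly plain pipeline code. Here is a sketch of the three fixes just mentioned, using an invented staging table name and an example translation map:

```python
# Hedged sketch: standardize codes, impute blanks, and trim stray spaces.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

bu_map = {"BU-01": "BusinessUnit01"}               # example translation table

df = spark.read.table("stg_budget_entries")        # invented staging table name

cleaned = (
    df
    # the lazy trailing space that derails account-code joins
    .withColumn("AccountCode", F.trim(F.col("AccountCode")))
    # bridge old and new business unit naming conventions
    .replace(bu_map, subset=["BusinessUnit"])
    # tag blanks for review rather than silently dropping rows
    .withColumn(
        "CostCenter",
        F.when(
            F.col("CostCenter").isNull() | (F.trim(F.col("CostCenter")) == ""),
            F.lit("Unassigned"),
        ).otherwise(F.col("CostCenter")),
    )
)
cleaned.write.mode("overwrite").saveAsTable("fct_budget_clean")
```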

Once this pipeline is running smoothly, you’ll notice a fundamental shift. Data lands in a single lake—deduplicated, standardized, and up-to-date. Instead of pulling five different reports before every forecast meeting, you access a unified dataset that actually reflects what’s in D365 as of this morning. The reporting cycle contracts, and so does the margin for error. What used to take hours of massaging spreadsheets becomes refresh-and-go. Power BI dashboards built on this foundation don’t just visualize—they drill all the way back to the raw transaction if you need answers. The upshot? Analytics finally keeps up with the pace of the business, not the other way around.

All that said, unlocking clean, unified data is just the prelude. Once things are flowing reliably, you can finally stop worrying about feeding the pipeline and start building the forecast models everyone’s been asking for. But how does all this new data power a model that doesn’t break under real-world change? Let’s get into building rolling forecasts that adapt as quickly as your business does.

Forecasting That Actually Reflects Reality: From Data to Dynamic Models

If you’ve ever felt that sinking feeling when a single late transaction sends your entire forecast sideways, you’re in good company. There’s that odd frustration of finally wrestling a forecast into place, double-checking every cell, only to have real-world numbers come in and punch a hole straight through it. Spreadsheets that made sense two days ago suddenly go out of date, and the models you so carefully set up just can’t keep up. That’s not a spreadsheet problem—it’s a data problem. The real issue is your source of truth isn’t unified, so your rolling forecast is only as good as the last time you did a manual refresh.

Let’s focus on how having unified, real-time data turns that whole scenario on its head. As soon as you centralize D365 F&O details inside Fabric, you stop building forecasts on gut feel or stale data and start working from the actual, up-to-the-minute numbers. Every new journal entry, every sub-ledger transaction, every late budget tweak—it all feeds in directly, no more waiting until month-end or exporting for the hundredth time. When that unified dataset sits in Fabric, you gain the ability to run rolling forecasts that react to your business as quickly as it actually moves. That means your forecast isn’t an annual ritual or a quarterly headache—it's just part of day-to-day operations. You’re no longer patching together yesterday’s best guess. You’re asking, “What’s coming next?” and actually getting an answer you can trust.

If you’ve only ever used traditional forecasting tools, you probably know the pain points. Manual updates take hours, sometimes days. Each time you adjust resource plans or revenue projections, you hope you caught all the links—because one broken formula means hunting through dozens of sheets to see where things fell apart. Spreadsheets might have built-in functions, but they’re static snapshots. Even if you pull new data, it’s never quite in sync with what D365 F&O is showing you right now. The result? Leadership questions every output; teams hedge their numbers “just in case”; and variance reports end up patched together with explanations that only sort of make sense.

With Fabric, unified data unlocks a completely different toolkit for forecasting. You’re not limited to last period’s totals or basic trend lines. You can actually apply statistical methods at scale because your data foundation is consistent. Moving averages become meaningful when they update live with every fresh batch of transactions. Simple regressions let you identify how changes in one area—say, headcount or supply costs—drive shifts in margin or budget performance across the organization. Want to see how a change in customer payment behavior impacts quarterly cash flow? Plug it into a scenario model and see the outcome ripple through your entire rolling forecast, without starting from scratch.
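To picture that statistical layer, here is a small pandas sketch over an assumed unified actuals table; the file name and column names are illustrative, not a real schema:

```python
# Hedged sketch: live-updating moving average plus a simple linear trend.
import numpy as np
import pandas as pd

actuals = pd.read_parquet("fct_gl_actuals.parquet")  # assumed lakehouse export
# PostingDate is assumed to be a datetime column, Amount a numeric one.

monthly = (
    actuals.groupby(pd.Grouper(key="PostingDate", freq="MS"))["Amount"]
    .sum()
    .to_frame(name="actual")
)

# 3-month moving average, recomputed every time new transactions land
monthly["ma_3"] = monthly["actual"].rolling(window=3).mean()

# simple regression: fit amount against a month index to expose the trend
x = np.arange(len(monthly))
slope, intercept = np.polyfit(x, monthly["actual"].to_numpy(), deg=1)
monthly["trend"] = intercept + slope * x
```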

Here’s the step-by-step reality: Once your data pipeline into Fabric is humming, you start by connecting your actuals—GL transactions, sub-ledgers, budgets—to an analytics workspace. There, you can build a rolling forecast model in just a few clicks. Set rules for how each piece of incoming data should update your projections. New AP invoices post? Actual spend rolls forward. Budget modifications come in? Forecasted totals adjust instantly. The model doesn’t have to sit idle or break—it absorbs every change and recalibrates automatically. You move from babysitting spreadsheets to managing a system that actually self-corrects.
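The core update rule is simple enough to show in a few lines. A minimal sketch, assuming forecast and actuals frames that share Period and Amount columns:

```python
# Hedged sketch: closed periods take the posted actual,
# open periods keep the projection.
import pandas as pd

def roll_forward(forecast: pd.DataFrame, actuals: pd.DataFrame) -> pd.DataFrame:
    merged = forecast.merge(actuals, on="Period", how="left", suffixes=("", "_actual"))
    # where an actual exists, it replaces the forecasted amount
    merged["Amount"] = merged["Amount_actual"].fillna(merged["Amount"])
    merged["Source"] = merged["Amount_actual"].notna().map(
        {True: "actual", False: "forecast"}
    )
    return merged.drop(columns=["Amount_actual"])
```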

Variance reporting is where you really see value. In the old world, variance reports meant a careful dance between exported D365 numbers and whatever the last saved forecast file had in it. Misalignments—sometimes down to the decimal—would mean painful back-and-forth before every major review. With Fabric’s unified forecasting, you’re comparing apples to apples. The same set of actuals feeds both sides: your projections and your variance reports. Each time numbers update in D365, your dashboards auto-refresh, so those meetings that used to spiral into “whose number is right?” become practical discussions about the actual drivers of business change.
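Because both sides now come from the same dataset, the variance report reduces to a single transformation. A sketch, assuming a Period and CostCenter reporting grain:

```python
# Hedged sketch: actuals and budget joined at the same grain,
# variance computed once instead of reconciled twice.
import pandas as pd

def variance_report(actuals: pd.DataFrame, budget: pd.DataFrame) -> pd.DataFrame:
    keys = ["Period", "CostCenter"]                # assumed reporting grain
    merged = actuals.merge(budget, on=keys, suffixes=("_actual", "_budget"))
    merged["variance"] = merged["Amount_actual"] - merged["Amount_budget"]
    merged["variance_pct"] = merged["variance"] / merged["Amount_budget"]
    # biggest absolute swings first, so the meeting starts with what matters
    return merged.sort_values("variance", key=abs, ascending=False)
```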

That’s also where Power BI dashboards step in. Instead of static tables, you’re looking at trends over time, breakdowns by department, and even instant flags for outliers—maybe a sudden spike in fixed asset spend or an unexpected drop in sales receipts. Power BI can draw directly from the Fabric model, so everyone—from analysts to execs—gets visuals that make sense and drill-down paths that never end with a “data not available” warning. These dashboards aren’t just prettier; they’re actionable. Spotting issues or opportunities is about three clicks away, not a marathon of Excel detective work.

It’s not just theory. Take a services firm that unified its D365 F&O data and used Fabric-based rolling forecasts to catch a bad debt surge early. By comparing forecasted receipts to rapidly changing actuals, their finance lead noticed receivables lengthening almost in real time. Instead of learning about the trend months later, they flagged it mid-quarter and worked with account managers to intervene. That’s the shift: forecasting as a living, breathing process, not a retrospective post-mortem.

When rolling forecasts and variance reports keep pace with your real-world changes, finance finally gets back to steering the business instead of running after it. But what about trust? When so much rides on automated data flow, how do you know it all stacks up? That’s where keeping your forecasts reliable and secure becomes just as important as building them.

Data Integrity and Governance: Keeping Your Forecasts Trustworthy

If you’ve ever sat in a review meeting trying to defend a forecast, only to realize you can’t explain why the numbers look the way they do, you know how quickly trust dissolves. All the clean Power BI visuals in the world don’t matter if no one can answer the simple question, “Where did this number come from?” With D365 F&O and Fabric in the mix, you gain massive forecasting power, but the messy side effect is that responsibility for data doesn’t just double—it multiplies. Any time you move and transform sensitive financial data across systems, you open up cracks for mistakes to sneak in. Accidental changes, mismatched dimensions, and good old-fashioned stale exports sitting quietly in a folder until someone pastes in obsolete figures—these are the real-world threats to your forecast’s credibility.

Let’s talk about why data integrity is more than a checkbox. In any D365 F&O to Fabric pipeline, the very flexibility that makes analytics possible is also what introduces risk. Someone updates a cost center mapping in the GL, but forgets to revisit old sub-ledger config. Department codes drift as teams re-org, but there’s no process to update historical budget records to match. Maybe a dataflow fails and last week’s incomplete export sneaks into the pipeline, giving you a variance report that’s both technically correct and completely misleading. Even one of these small slips can snowball, triggering hours of fire drills or, worse, leadership calling into question the whole process the next time numbers don’t add up.

The first line of defense is making sure every number in your forecast can be traced—back through all its transformations—to the original entry in D365 F&O. Fabric helps by tracking data lineage automatically. That means, for every dataset, you can see where it started, every transformation it passed through, and what schedule or user triggered the update. If someone asks, “Why is R&D showing a spike this quarter?” it takes two clicks to see not just the transaction detail, but exactly how that entry moved from invoice to report. This isn’t optional—it’s the only way to be confident your automation is working for you, not against you.

Access management steps in as another critical guardrail. Fabric lets you build fine-grained access controls at every stage of the pipeline. You decide who can see raw D365 F&O records, who can touch transformed and modeled datasets, and who can actually publish or update Power BI dashboards. Financial data is sensitive by nature; you don’t want an intern with the ability to overwrite all your Q1 actuals, or have a third-party report builder poking around in payroll tables. Permissions tie in with Azure Active Directory, using security groups and role assignments, so access isn’t managed with emails and spreadsheets—it’s integrated into your everyday identity management.

Audit logs are your receipt trail. Every significant action—import, transformation, even visualization refresh—gets logged and timestamped. That means after a questionable forecast or a sudden swing in reported results, you don’t have to play detective. You just pull up the logs and see exactly who made what change, when, and (thanks to data lineage) what downstream effect it had. It’s not glamorous work, but ask anyone who’s been through an audit: when numbers are on the line, proof is priceless.

Security isn't just about locking things down, either. Fabric brings built-in encryption—both at-rest and in-transit—so sensitive data doesn’t travel unprotected. If you’re storing bank details, payroll, or supplier invoices, data stays encrypted whether it’s sitting in a warehouse or flowing through a report. Add in continuous monitoring, and you’ve got alerts for unusual behavior—like someone trying to extract unusually large numbers of records or attempts to access restricted tables. These aren’t just features you turn on; they’re part of maintaining trust with finance, audit, and compliance teams alike.

You see the value of this the moment something goes wrong. Picture an analyst importing an old backup file into the budget table, not realizing it’s a year out of date. Without a data lineage view, you might never catch the mistake—until forecasted spend looks bizarre and the finance chief demands answers. In a pipeline with proper controls, you spot the anomaly within minutes: wrong file, wrong timestamp, flagged in a validation check. Disaster averted before it makes it to the board deck.

Documentation is the glue holding all this together. If process steps live only in someone’s head—or a dusty SharePoint folder—turnover or absence means you risk losing critical context. Fabric’s dataflows support inline documentation, and you can plug workflow steps directly into Power Platform’s automation tools. That means, even as processes change, you’re not left guessing how yesterday’s data got here or what logic drives each step.

Ongoing data quality checks round out the picture. Set up validation rules that scan for out-of-range values, missing cost center codes, or transactions posting to unexpected periods. Build alerts so issues get flagged before they skew a forecast. No matter how tight your schedule, these small automations add up to a system that’s always watching—even when you’re not.
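Those checks can live as small rule functions inside the pipeline. A sketch with illustrative thresholds and column names:

```python
# Hedged sketch: return the offending rows per rule,
# so alerts can link straight to them.
import pandas as pd

def validate(df: pd.DataFrame) -> pd.DataFrame:
    issues = []
    # out-of-range amounts (threshold is illustrative)
    issues.append(df[df["Amount"].abs() > 10_000_000].assign(rule="amount_out_of_range"))
    # missing cost center codes
    missing_cc = df["CostCenter"].isna() | (df["CostCenter"].str.strip() == "")
    issues.append(df[missing_cc].assign(rule="missing_cost_center"))
    # postings outside the open period window (example window)
    open_periods = pd.period_range("2024-01", "2024-12", freq="M")
    in_window = df["PostingDate"].dt.to_period("M").isin(open_periods)
    issues.append(df[~in_window].assign(rule="unexpected_period"))
    return pd.concat(issues, ignore_index=True)
```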

Confidence in your forecasts comes from knowing more than just the “what”—you have proof of the “how” and “why.” Every figure can be traced, every change can be explained, and your system stands up when questions get tough. That’s how data integrity transforms forecasting from a race against the spreadsheet clock into a business asset that evolves as fast as you do. And with each layer in place, you start to see how a truly adaptive financial forecasting system isn’t just possible—it’s within reach as your organization grows.

Conclusion

Most teams think unified data will solve everything, but the real change happens when your forecasts update as soon as your business does—no more chasing old spreadsheets or cross-checking exports. If you want to see actual results, it’s time to build systems that move at the speed of your operations. The tools are here, built on Microsoft 365 and Power Platform, and they’re designed for finance teams who need more than static snapshots on a slide. If seeing your complete financial picture—clean, current, and reliable—sounds better than late-night Excel fixes, you know what to do next.
