
Stop Using Power BI Wrong: The $10,000 Data Model Fix

Opening – The $10,000 Problem

Your Power BI dashboard is lying to you. Not about the numbers—it’s lying about the cost. Every time someone hits “refresh,” every time a slicer moves, you’re quietly paying a performance tax. And before you smirk, yes, you are paying it, whether through wasted compute time, overage on your Power BI Premium capacity, or the hours your team spends waiting for that little yellow spinner to go away.

Inefficient data models are invisible budget vampires. Every bloated column and careless join siphons money from your department. And when I say “money,” I mean real money—five figures a year for some companies. That’s the $10,000 problem.

The fix isn’t a plug‑in, and it’s not hidden in the latest update. It’s architectural—a redesign of how your model thinks. By the end, you’ll know how to build a Power BI model that runs faster, costs less, and survives real enterprise workloads without crying for mercy.


Section 1 – The Inefficiency Tax

Think of your data model like a kitchen. A good chef arranges knives, pans, and spices so they can reach everything in two steps. A bad chef dumps everything into one drawer and hopes for the best. Most Power BI users? They’re the second chef—except their “drawer” is an imported Excel file from 2017, stuffed with fifty columns nobody remembers adding.

This clutter is what we call technical debt. It’s all the shortcuts, duplicates, and half‑baked relationships that make your model work “for now” but break everything six months later. Every query in that messy model wanders the kitchen hunting for ingredients. Every refresh is another hour of the engine rummaging through the junk drawer.

And yes, I know why you did it. You clicked “Import” on the entire SQL table because it was easier than thinking about what you actually needed. Or maybe you built calculated columns for everything because “that’s how Excel works.” Congratulations—you’ve just graduated from spreadsheet hoarder to BI hoarder.

Those lazy choices have consequences. Power BI stores each unnecessary column in memory, compresses it, and hauls it through every refresh, so memory use climbs with every field you hoard. Every time you add a fancy visual calling fifteen columns, your refresh slows. Slow refreshes become delayed dashboards; delayed dashboards mean slower decisions. Multiply that delay across two hundred analysts, and you’ll understand why your cloud bill resembles a ransom note.

The irony? It’s not Power BI’s fault. It’s yours. The engine is fast. The DAX engine is clever. But your model? It’s a tangle of spaghetti code disguised as business insight. Ready to fix it? Good. Let’s rebuild your model like an adult.


Section 2 – The Fix: Dimensional Modeling

Dimensional modeling, also known as the Star Schema, is what separates a Power BI professional from a Power BI hobbyist. It’s the moment when your chaotic jumble of Excel exports grows up and starts paying rent.

Here’s how it works. At the center of your star is a Fact Table—the raw events or transactions. Think of it as your receipts. Each record represents something that happened: a sale, a shipment, a login, whatever your business actually measures. Around that core, you build Dimension Tables—the dictionary that describes those receipts. Product, Customer, Date, Region—each gets its own neat dimension.

This is the difference between hoarding and organization. Instead of stacking every possible field inside one table, you separate descriptions from events. The fact table stays lean: tons of rows, few columns. The dimensions stay wide: fewer rows, but rich descriptions. It’s relational modeling the way nature intended.
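
To make that concrete, here is a minimal sketch in DAX. Every name in it (FactSales, DimProduct, SalesAmount, and so on) is illustrative rather than from any real dataset; the date dimension uses DAX’s built-in CALENDARAUTO function.

```
-- Assumed star: FactSales holds one row per transaction and relates
-- one-to-many to DimProduct, DimCustomer, and DimDate via key columns.

-- A date dimension built as a calculated table, with a Year column:
DimDate =
ADDCOLUMNS ( CALENDARAUTO (), "Year", YEAR ( [Date] ) )

-- With the star in place, the core measure stays trivially simple:
Total Sales = SUM ( FactSales[SalesAmount] )
```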

Now, some of you get creative and build “many‑to‑many” relationships because you saw it once in a forum. Stop. That’s not creativity—that’s self‑harm. In a proper star, all relationships are one‑to‑many, pointing outward from dimension to fact. The dimension acts like a lookup—one Product can appear in many Sales, but each Sale points to exactly one Product. Break that rule, and you unleash chaos on your DAX calculations.

Let’s talk cardinality. Power BI hates ambiguity. When relationships aren’t clear, it wastes processing power guessing. Imagine trying to index a dictionary where every word appears on five random pages—it’s miserable. One‑to‑many relationships give the engine a direct path. It knows exactly which filter context applies to which fact—no debates, no circular dependencies, no wasted CPU cycles pretending to be Sherlock Holmes.

And while we’re cleaning up, stop depending on “natural keys.” Your “ProductName” might look unique until someone adds a space or mis‑types a letter. Instead, create surrogate keys—compact integer IDs that uniquely identify each row. They compress better and never drift, like nametags for your data.
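
One common way to mint those keys is in Power Query. A minimal sketch, assuming a DimProductRaw query already exists with the raw product rows (both names are made up for illustration):

```
// Deduplicate the dimension, then stamp a sequential integer key.
let
    Distinct = Table.Distinct(DimProductRaw, {"ProductName"}),
    WithKey  = Table.AddIndexColumn(Distinct, "ProductKey", 1, 1, Int64.Type)
in
    WithKey
```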

Maybe you’re wondering, “Why bother with all this structure?” Because structured models scale. The DAX engine doesn’t have to guess your intent; it reads the star and obeys simple principles: one direction, one filter, one purpose. Measures finally return results you can trust. Suddenly, your dashboards refresh in five minutes instead of an hour, and you can remove that awkward ‘Please wait while loading’ pop‑up your team pretends not to see.

Here’s the weird part—once you move to a star schema, everything else simplifies. Calculated columns? Mostly irrelevant. Relationships? Predictable. Even your DAX gets cleaner because context is clearly defined. You’ll spend less time debugging relationships and more time actually analyzing numbers.

Think of your new model as a modular house: each dimension a neat, labeled room; the fact table, the main hallway connecting them all. Before, you had a hoarder’s flat where you tripped over data every time you moved. Now, everything has its place, and the performance difference feels like you just upgraded from a dial‑up modem to fiber optics.

When you run this properly, Power BI’s VertiPaq engine compresses your model efficiently because the columnar storage finally makes sense. Duplicate text fields vanish, memory usage drops, and visuals render faster than your executives can say, “Can you export that to Excel?”

But don’t celebrate yet. A clean model is only half the equation. The other half lives in the logic—the DAX layer. It’s where good intentions often become query‑level disasters. So yes, even with a star schema, you can still sabotage performance with what I lovingly call “DAX gymnastics.” In other words, it’s time to learn some discipline—because the next section is where we separate the data artists from the financial liabilities.

Section 3 – DAX Discipline & Relationship Hygiene

Yes, your DAX is clever. No, it’s not efficient. Clever DAX is like an overengineered Rube Goldberg machine—you’re impressed until you realize all it does is count rows. You see, DAX isn’t supposed to be “brilliant”; it’s supposed to be fast, predictable, and boring. That’s the genius you should aspire to—boring genius.

Let’s start with the foundation: row context versus filter context. They’re not twins; they’re different species. Row context is each individual record being evaluated—think of it like taking attendance in a classroom. Filter context is the entire class after you’ve told everyone wearing red shirts to leave. Most people mix them up, then wonder why their SUMX runs like a snail crossing molasses. The rule? When you iterate—like SUMX or FILTER—you’re creating row context. When you use CALCULATE, you’re changing the filter context. Know which one you’re touching, or Power BI will happily drain your CPU while pretending to understand you.
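
Here is the distinction in code, reusing the illustrative names from earlier (DimProduct[Color] is likewise made up):

```
-- Row context: SUMX evaluates its expression once per row of FactSales.
Margin = SUMX ( FactSales, FactSales[SalesAmount] - FactSales[Cost] )

-- Filter context: CALCULATE rewrites the filters before [Total Sales] runs.
Red Sales = CALCULATE ( [Total Sales], DimProduct[Color] = "Red" )
```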

The greatest performance crime in DAX is calculated columns. They feel familiar because Excel had them—one formula stretched down an entire table. But in Power BI, that column is persisted; it bloats your dataset permanently. Every refresh recalculates it row by row. If your dataset has ten million rows, congratulations, you’ve just added ten million unnecessary operations to every refresh. That’s the computing equivalent of frying eggs one at a time on separate pans.
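
For the record, this is the pattern to avoid. A calculated column like the sketch below (names still illustrative) is stored in the model and recomputed row by row at every refresh:

```
-- Anti-pattern: a calculated column on a ten-million-row fact table.
-- Persisted in memory, recalculated for every row on every refresh.
Margin = FactSales[SalesAmount] - FactSales[Cost]
```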

Instead, push that logic back where it belongs—into Power Query. Do your data shaping there, where transformations happen once at load time, not repeatedly during report render. Let M language do the heavy lifting; it’s designed for preprocessing. The DAX engine should focus on computation during analysis, not household chores during refresh.
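
Here is the same margin logic pushed upstream, as a Power Query sketch. It assumes a Sales query already exists; the column arrives precomputed, so nothing recalculates at render time:

```
// Shape once at load time instead of on every refresh cycle in DAX.
let
    WithMargin = Table.AddColumn(Sales, "Margin",
        each [SalesAmount] - [Cost], type number)
in
    WithMargin
```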

Then there’s the obsession with writing sprawling, nested measures that reference one another eight layers deep. That’s not “modular,” that’s “recursive suffering.” Every dependency means another context transition the engine must trace. Instead, create core measures—like Total Sales or Total Cost—and build higher‑order ones logically on top. CALCULATE is your friend; it’s the clean switchboard operator of DAX. When used well, it rewires filters efficiently without dragging the entire model into chaos.
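
In practice the layering looks like this, building on the Total Sales sketch from earlier: core measures at the bottom, and everything above them composed by reference, one layer deep rather than eight.

```
-- Core measure (Total Sales was defined earlier):
Total Cost = SUM ( FactSales[Cost] )

-- Higher-order measures reference the cores by name:
Gross Margin   = [Total Sales] - [Total Cost]
Gross Margin % = DIVIDE ( [Gross Margin], [Total Sales] )
```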

Iterator functions—SUMX, AVERAGEX—are fine when used sparingly, but most users weaponize them unnecessarily. They iterate row by row when a simple SUM could do the job in one columnar sweep. VertiPaq, the in‑memory engine behind Power BI, is built for columnar operations. You slow it down every time you force it to behave like Excel’s row processor. Remember: DAX doesn’t care about your creative flair; it respects efficiency and clarity.
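
A concrete before-and-after, same result both ways. One honest caveat: the engine can often optimize a trivial iterator like this one internally, but the habit fails as soon as the row expression grows more complex.

```
-- Iterator framing: asks for a row-by-row walk of FactSales.
Sales via Iterator = SUMX ( FactSales, FactSales[SalesAmount] )

-- Columnar framing: one sweep over the compressed column.
Sales via Sum = SUM ( FactSales[SalesAmount] )
```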

Now about relationships—those invisible lines you treat like decoration. Single‑direction filters are the rule; bidirectional is an emergency switch, not standard practice. A bidirectional relationship is like handing out master keys to interns: convenient right up until someone filters the customer table by accident and wonders where the rows went. It invites ambiguity, propagates filters where you never intended, and balloons calculation times. Keep relationships single‑directional and deliberate. You can always use CROSSFILTER or TREATAS inside CALCULATE when you really need bidirectionality—but do it consciously, not by default.
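
Scoped bidirectionality looks like this: the cross-filter lives inside one measure instead of the whole model. Names remain the illustrative ones from earlier.

```
-- Let the fact table filter DimCustomer for this measure only.
Active Customers =
CALCULATE (
    COUNTROWS ( DimCustomer ),
    CROSSFILTER ( FactSales[CustomerKey], DimCustomer[CustomerKey], BOTH )
)
```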

Circular relationships? Don’t even start. They’re the Bermuda Triangle of Power BI—once entered, performance and sanity vanish. Always prefer clear hierarchy: dimensions filter facts, never the other way around. If you find yourself needing the facts to filter dimensions, your model design is upside down; revert to the fundamentals.

Finally, test like an engineer. Use DAX Studio, measure execution times, trace query plans. If a measure builds suspense longer than a Netflix intro, rewrite it. Consistent refresh times under three minutes aren’t fantasy—they’re the side effect of respect for context and relationship hygiene.
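
If you have never traced a measure, start with a query like this in DAX Studio with Server Timings enabled, and watch where the milliseconds actually go (illustrative names again):

```
-- Benchmark query: group by year, evaluate the measure under load.
EVALUATE
SUMMARIZECOLUMNS (
    DimDate[Year],
    "Total Sales", [Total Sales]
)
```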

At this point, your model should hum instead of groan. The relationships are tidy, DAX is disciplined, refreshes finish before your coffee cools. You’ve reduced compute costs, shortened refresh windows, and spared your team another all‑nighter shouting at a spinning circle. Now that everything actually works as intended, let’s quantify how much money your newfound discipline just saved you.

Section 4 – The $10,000 ROI

All right, let’s stop pretending this is just about elegance and pride of craftsmanship. You didn’t come here for art—you came for ROI. Because under every messy Power BI model lurks a surprisingly measurable drain on money and time.

Let’s start with the arithmetic. Two hundred analysts, each losing roughly five minutes a day waiting for reports to refresh or visuals to load. Sounds trivial, right? Multiply it out: five minutes times 260 workdays times two hundred people is more than four thousand hours a year. Price even a fraction of that at an average loaded wage, and ten thousand dollars a year is the conservative floor. With one dataset. One! Most enterprises juggle dozens. It’s astonishing how quickly inefficiency compounds when multiplied by headcount.

But it isn’t just wages bleeding out—it’s compute. Each bloated model slams your Premium capacity, demanding more memory and CPU, even when rendering something trivial like a monthly sales slice. Azure doesn’t care that you love your extra columns; it charges you for the milliseconds they consume. Optimize your data model, and CPU cycles drop. Lower CPU load equals smaller capacity nodes or longer time before scaling. Congratulations, you’ve just engineered a cost-cutting policy—disguised as a technical improvement.

Performance equals productivity. Dashboards that refresh in seconds instead of minutes encourage real-time experimentation. Executives make decisions faster. Developers iterate instantly instead of refresh–wait–debug–coffee. When systems respond quickly, people trust them, and trust translates to usage. Ever notice that users abandon dashboards that lag but evangelize the fast ones? The difference is friction, and friction kills adoption.

One global manufacturer refactored its monstrous warehouse model—twelve-hour nightly refresh reduced to forty-five minutes. The CFO didn’t care about star schemas or surrogate keys; he cared that analysts stopped scheduling midnight check-ins to babysit refreshes. Faster cycles meant faster insights, which meant fewer problems hiding under stale data. That single restructuring freed resources equivalent to a full-time salary and trimmed compute by 30%.

So here’s the irony—the efficiency you treat as “nice to have” is actually a financial instrument. Clean models are cheaper to run, easier to audit, simpler to version, and faster to extend. Try implementing governance or version control in a tangled bowl of relationships; it’s like documenting spaghetti. With structured modeling, you have discrete tables, clear joins, and auditable transformations. Compliance officers adore it, and so will your infrastructure team.

Think of optimization as debt repayment. Each redundant column removed, each relationship clarified, is one payment toward balance. Ignore it, and the interest shows up in lost hours and unpredictable outages. Fix it, and you build credit—technical credit. Eventually your BI environment stops being a liability and starts compounding returns, because time saved on refreshes becomes time spent on strategy.

So yes, Power BI modeling can save you $10,000 or more—but the real payoff isn’t the money. It’s confidence. Confidence that your data behaves predictably under pressure. That your dashboards can scale with demand. That when someone asks, “Can we double this dataset?” your response isn’t nervous laughter.

And here’s the kicker—once you’ve tasted that smooth performance, you’ll never tolerate the old way again. Every laggy refresh becomes a personal offense. Every circular relationship feels like vandalism. That’s progress. You’ve transitioned from data consumer to responsible architect.

So, fix your model before it bills you again.


Conclusion – Reset or Pay the Tax Again

Inefficient Power BI models are silent parasites. They feed on memory, patience, and payroll without leaving a receipt. Dimensional modeling isn’t theory—it’s vaccination. It prevents outbreak-level data chaos before it starts.

Audit your existing model. Hunt down your biggest offender: one oversized fact table, one over‑calculated column, one bidirectional relationship that shouldn’t exist. Rebuild that part properly today. You’ll see the effect immediately—the chart loads faster, the workbook shrinks, the refresh finishes before your next complaint.
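
For the hunt itself, DAX Studio’s VertiPaq Analyzer shows per-column memory at a glance, and recent Power BI builds expose the same metadata straight from DAX query view. A minimal sketch, assuming a version that ships the INFO functions:

```
-- Lists every column with its storage details; sort the result by
-- DICTIONARY_SIZE to see which columns eat the most memory.
EVALUATE INFO.STORAGETABLECOLUMNS ()
```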

And then, the uncomfortable truth settles in: the tool was never slow. You were. Your design was. Power BI simply mirrors the quality of your thinking.

If this explanation clawed back even fifteen minutes of your week, you owe yourself more. Subscribe. Not out of gratitude—out of pragmatism. Because the next video might shave another hour off your workflow, and that’s an ROI your finance department will actually understand.

Reset your model, reclaim your time, and stop paying the inefficiency tax.
