M365 Show with Mirko Peters - Microsoft 365 Digital Workplace Daily

How Data Goblins Wreck Copilot For Everyone

Picture your data as a swarm of goblins: messy, multiplying in the dark, and definitely not helping you win over users. Drop Copilot into that chaos and you don’t get magic productivity—you get outdated contract summaries and random nonsense that looks like it came straight from 2017. Not exactly confidence-inspiring.

Here’s the fix: tame those goblins with the right prep and rollout, and Copilot finally acts like the assistant people actually want. I’ll give you the Top 10 actions to make Copilot useful, not theory—stuff you can run this week. Quick plug: grab the free checklist at m365.show so you don’t miss a step.

Because the real nightmare isn’t day two of Copilot. It’s when your rollout fails before anyone even touches it.

Why Deployments Fail Before Day One

Too many Copilot rollouts sputter out before users ever give it a fair shot. And it’s rarely because Microsoft slipped some bad code into your tenant or you missed a magic license toggle. The real problem is expectation—people walk in thinking Copilot is a switch you flip and suddenly thirty versions of a budget file merge into one perfect answer. That’s the dream. Reality is more like trying to fuel an Olympic runner with cheeseburgers: instead of medals, you just get cramps and regret.

The issue comes down to data. Copilot doesn’t invent knowledge; it chews on whatever records you feed it. If your tenant is a mess of untagged files, duplicate spreadsheets, and abandoned SharePoint folders, you’ve basically laid out a dumpster buffet. One company I worked with thought their contract library was “clean.” In practice, some contracts were expired, others mislabeled, and half were just old drafts stuck in “final” folders. The result? Copilot spat out a summary confidently claiming a partnership from 2019 was still active. Legal freaked out. Leadership panicked. And trust in Copilot nosedived almost instantly.

That kind of fiasco isn’t on the AI—it’s on the inputs. Copilot did exactly what it was told: turn garbage into polished garbage. The dangerous part is how convincing the output looks. Users hear the fluent summary and trust it, right up until they find a glaring contradiction. By then, the tool carries a new label: unreliable. And once that sticker’s applied, it’s hard to peel off.

Experience and practitioner chatter both point to the same root problem: poor data governance kills AI projects before they even start. You can pay for licenses, bring in consultants, and run glossy kickoff meetings. None of it matters if the system underneath is mud. And here’s the kicker—users don’t care about roadmap PowerPoints or governance frameworks. If their very first Copilot query comes back wrong, they close the window and move on.

From their perspective, the pitch is simple: “Here’s this fancy new assistant. Ask it anything.” So they try something basic like, “Show me open contracts with supplier X.” Copilot obliges—with outdated deals, missing clauses, and expired terms all mixed in. Ask yourself—would they click a second time after that? Probably not. As soon as the office rumor mill brands it “just another gimmick,” adoption flatlines.

So what’s the fix? Start small. Take that first anecdote: the messy contract library. If it sounds familiar, don’t set out to clean your entire estate. Instead, triage. Pick one folder you can fix in two days. Get labels consistent, dates current, drafts removed. Then connect Copilot to that small slice and run the same test. The difference is immediate—and more importantly, it rebuilds user confidence.
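
To make that two-day triage concrete, here’s the kind of throwaway script that finds the goblins fast. This is a minimal sketch, not a Microsoft tool: it assumes a local copy of the folder at ./contracts, and the name pattern and two-year staleness cutoff are illustrative choices you’d tune.

import hashlib
import re
from datetime import datetime, timedelta
from pathlib import Path

FOLDER = Path("./contracts")        # hypothetical local copy of the library
STALE_AFTER = timedelta(days=730)   # flag anything untouched for ~2 years
NAME_PATTERN = re.compile(r"(draft|final[_ ]?v?\d+)", re.IGNORECASE)

seen_hashes = {}
now = datetime.now()

for path in FOLDER.rglob("*"):
    if not path.is_file():
        continue
    flags = []
    # Naming hygiene: drafts and "Final_V7" style names need review
    if NAME_PATTERN.search(path.name):
        flags.append("suspicious name")
    # Freshness: stale files are what Copilot serves up as gospel
    modified = datetime.fromtimestamp(path.stat().st_mtime)
    if now - modified > STALE_AFTER:
        flags.append(f"stale (last touched {modified:%Y-%m-%d})")
    # Duplicates: byte-identical copies hiding under different names
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest in seen_hashes:
        flags.append(f"duplicate of {seen_hashes[digest].name}")
    else:
        seen_hashes[digest] = path
    if flags:
        print(f"{path.name}: {', '.join(flags)}")

The output is a punch list you can clear in an afternoon, which is exactly the scope a two-day triage needs.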

Think of it like pest control. Every missing metadata field, every duplicate spreadsheet, every “Final_V7_REALLY.xlsx” is another goblin running loose in the basement. Leadership may be upstairs celebrating their shiny AI pilot, but downstairs those goblins are chewing wires and rearranging folders. Let Copilot loose down there, and you’ve just handed them megaphones.

The takeaway is simple: bad data doesn’t blow up your deployment in one dramatic crash. It just sandpapers every interaction until user trust wears down completely. One bad answer becomes two. Then the whispers start: “It’s not accurate.” Soon nobody bothers to try it at all.

So the hidden first step isn’t licensing or training—it’s hunting the goblins. Scrub a small set of records. Enforce some structure. Prove the tool works with clean inputs before scaling out. Skip that, and yes—your rollout fails before Day One.

But there’s another side to this problem worth calling out. Even if the data is ready, users won’t lean in unless they actually *want* to. Which raises the harder question: why would someone ask for Copilot at all, instead of just ignoring it?

How Organizations Got People to *Want* Copilot

What flipped the script for some organizations was simple: they got people to *want* Copilot, not just tolerate it. And that’s rare in IT land. Normally, when we push out a new tool, it sits in the toolbar like an unwanted app nobody asked for. But when users see immediate value—actual time back in their day—they stop ignoring it and start asking managers why their department doesn’t have it yet.

Here’s the key difference: tolerated tools just live on the desktop collecting dust, opened only when the boss says, “use it.” Demanded tools show up in hallway chatter—“Hey, this just saved me an hour.” That shift comes from visible wins. Not theory—practical things people can measure. For example: cutting monthly report prep from eight hours to two, automating status updates so approvals close a full day faster, or reducing those reconciliation errors that make finance teams want to chuck laptops out the window. Those are the kind of wins that turn curiosity into real appetite.

Too many IT rollouts assume adoption works by decree. Licensing gets assigned, the comms team sends a cheerful Monday email, and someone hopes excitement spreads. It doesn’t. Users don’t care about strategy decks; they care if their Friday night is saved because they didn’t have to chase through thirty spreadsheets. Miss that, and Copilot gets ghosted before it has a chance.

The opposite shows up in real deployments that created demand. I saw a finance firm run a small, focused Copilot pilot in one department. A handful of analysts went from drowning in Excel tabs to handing off half that grunt work to Copilot. Reports went out cleaner. Backlogs shrank. And the best part—word leaked beyond the pilot group. Staff in other departments started pressing managers with, “Why do they get this and we don’t?” Suddenly IT wasn’t pushing adoption—it was refereeing a line at the door. And if you want the playbook, here’s how they did it: six analysts, a three-week pilot, live spreadsheets, and a daily feedback loop. Tight scope, rapid iteration, visible gains.
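
A minimal sketch of that daily feedback loop, if you want to copy the mechanics: one row per analyst per task, appended to a CSV anyone can review. All names and fields here are invented for illustration.

import csv
from datetime import date
from pathlib import Path

LOG = Path("pilot_feedback.csv")
FIELDS = ["day", "analyst", "task", "minutes_saved", "bad_output"]

def log_feedback(analyst, task, minutes_saved, bad_output=""):
    # One row per analyst per task keeps the loop honest and reviewable
    new_file = not LOG.exists()
    with LOG.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "day": date.today().isoformat(),
            "analyst": analyst,
            "task": task,
            "minutes_saved": minutes_saved,
            "bad_output": bad_output,
        })

log_feedback("analyst_1", "pipeline report", 45)
log_feedback("analyst_2", "status summary", 10, "quoted an expired contract")

Three weeks of rows like that is the evidence the hallway chatter gets built on.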

That’s the cafeteria effect: nobody cares about lukewarm mystery meat, but bring in a taco bar and suddenly there’s a line. And it sticks—because demand is driven by proof of value, not by another corporate comms blast. Want the pilot checklist to start your own “taco bar”? Grab it at m365.show.

Here’s what the smart teams leaned on. First, they used champions inside the business—not IT staff—to show real stories like “this saved me an hour this morning.” Second, they picked wins others could see: reports delivered early, approvals unclogged, prep time cut in half. Third, they let the proof spread socially. Word of mouth across Teams chats and roundtables hit harder than any glossy announcement ever could. It wasn’t about marketing—it was about letting peer proof build credibility.

That’s why people began asking for Copilot. Because suddenly it wasn’t one more login screen—it was the thing saving them from another tedious data grind. Organizations that made those wins visible flipped the whole posture. Instead of IT nagging people to “adopt,” users were pulling Copilot into their daily flow like oxygen. That’s adoption with teeth—momentum you don’t have to manufacture.

Of course, showing the wins is one thing; structuring the rollout so it doesn’t feel like a sales pitch is another. And that’s where the right frameworks came into play.

The Frameworks That Didn’t Sound Like Sales Pitches

You ever sat through change management slides and thought, “Wow, this feels like an MBA group project”? Same here. AI rollouts should be simple: show users what the tool does, prep them to try it, and back them up when they get stuck. Instead, we get decks with a hundred arrows, concentric circles, and more buzzwords than a product rename week at Microsoft. That noise might impress a VP, but it doesn’t help the people actually grinding through spreadsheets. The frameworks that work are the ones stripped down, pointed at real pain points, and kept short enough that employees don’t tune out.

ADKAR was one of the few that translated cleanly into practice. On paper it’s Awareness, Desire, Knowledge, Ability, Reinforcement. In Copilot world, here’s what that means: Awareness comes from targeted demos that actually show what Copilot can do for their role—not a glossy video about the “future of productivity.” Desire means proving payoff right away, like showing them how a task they hate takes half the time. Knowledge has to be microlearning, not death-by-deck. Give them five-minute checklists, cheat sheets, or tooltips. Ability comes from sandboxing, letting users practice with fake data or non-critical work so they don’t feel like one wrong click could tank a project. Reinforcement isn’t another corporate memo—it’s templates, shortcuts, or a manager giving recognition when someone pulls it off.

Stripped of its acronym armor, ADKAR isn’t theory at all. It’s a roadmap that says: tell them what it is, why it improves their day, how to use it, let them practice without fear, then keep rewarding its use. The checkpoint here is simple: before you roll out, make sure you can point to at least two real tasks where Copilot improves results by a clear percentage. You set the number—10%, 20%, doesn’t matter. If you can’t prove it in the pilot, the framework just collapses into posters.
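
That checkpoint is easy to make mechanical. Here’s a minimal sketch of the go/no-go math, assuming you timed each task before and during the pilot; the task names, minutes, and 20% bar are invented placeholders.

# Go/no-go checkpoint: at least two real tasks must clear your improvement bar
BASELINE_MINUTES = {"pipeline report": 240, "status summary": 60, "contract lookup": 30}
PILOT_MINUTES = {"pipeline report": 90, "status summary": 55, "contract lookup": 12}
THRESHOLD = 0.20  # you set the bar; 20% here

passing = []
for task, before in BASELINE_MINUTES.items():
    after = PILOT_MINUTES[task]
    improvement = (before - after) / before
    print(f"{task}: {improvement:.0%} faster")
    if improvement >= THRESHOLD:
        passing.append(task)

# Two or more passing tasks means the framework has proof behind it
print("Checkpoint:", "PASS" if len(passing) >= 2 else "HOLD", "-", ", ".join(passing))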

I saw this land well with a mid-sized company rolling Copilot into sales ops. They didn’t dump it out on a Monday with a “good luck everyone” email. Instead, they ran tight demo sessions, picked one real task—pipeline reporting—and set up a sandbox space. Analysts tested on sample accounts that couldn’t break anything. IT tracked how long those reports used to take and measured the drop. Leadership capped it by reinforcing good use with templates, so the habits stuck. By the time rollout hit, nobody was scared of the tool—it was already part of their workflow.

Kotter’s “short wins” approach also worked in the trenches. It’s the antidote to year-long change programs where nobody sees value until they’re out of patience. The model banks on early, visible victories that spread faster than any glossy campaign. In Copilot terms, think of it as shipping a one-week win: one team cuts a weekly report from four hours to one. Or a project lead ditches endless status emails because Copilot already built the summary. Those quick deliveries aren’t fluff—they spread by word of mouth. And when skeptics hear colleagues brag about time back on the calendar, resistance softens. People stop rolling their eyes at the announcement and start repeating the stories themselves.

The trick here isn’t picking *the* right branded model. It’s picking a simple framework and weaponizing it against daily friction. That means short cycles, visible impact, and no over-engineering. Don’t drop a wall of phases or pretend users care what “stage” they’re in. They don’t. Show them the part where their work sucks less, and then back it with a structure that feels natural. That’s when frameworks become leverage instead of wallpaper.

Over time, we noticed the difference. Adoption started sounding more like coworkers swapping success stories and less like executives reading PowerPoint notes. That’s the goal: not buzzwords, not laminated diagrams, just frameworks bent around the reality of users’ pain points. Keep your model simple, keep it human, and you get momentum that feels organic instead of forced.

Of course, all of that only matters if the training connects. A framework on paper dies the moment people check out in the rollout room. And trust me—we’ve all seen what happens when users get trapped in an all-day Copilot training session. By mid-morning, half the room is already buried in their inbox, waiting it out.

Training Without the Eye Rolls

Training is where most rollouts get awkward. Everyone knows it’s important, but done wrong it turns into the moment users decide whether to actually try Copilot or quietly ignore it. This section is about stripping training back to what works—giving people enough hands-on proof without exhausting them in the process.

Let’s start with the big trap: canned demos. Those picture-perfect examples where Copilot autowrites a flawless report or pulls up a contract from thin air. They look good, but they nuke trust fast, because real-world users don’t live in polished demo land. They live in “Budget_2022_FINAL_v13” files and email chains with subject lines like “Re: Re: Fwd: URGENT PLEASE.” Canned demos build unrealistic expectations. Train on their dirty real work instead. Practice example: bring one messy spreadsheet into the room and ask Copilot to summarize the issues. Repeat until the summary is consistently useful. That’s training users can believe.
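
If you want a quick way to surface those issues before the session, a scan like this works on an exported copy. A minimal sketch using only the standard library; the file name is the joke from above, and the checks are illustrative.

import csv
from collections import Counter

# Hypothetical CSV export of the messy spreadsheet used in training
with open("Budget_2022_FINAL_v13.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

issues = Counter()
seen = set()
for row in rows:
    # Blank cells are the classic goblin droppings
    for col, value in row.items():
        if value is None or not value.strip():
            issues[f"blank '{col}'"] += 1
    # Fully duplicated rows inflate every total downstream
    key = tuple(row.values())
    if key in seen:
        issues["duplicate row"] += 1
    seen.add(key)

for issue, count in issues.most_common():
    print(f"{count:4d} x {issue}")

Run it, pick the top two issues, and those become the live exercises.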

And here’s the kicker—when training switches to real data, even if Copilot stumbles, people lean in. One company I worked with dropped their shiny sample decks and instead pulled up actual sales pipeline emails and backlog spreadsheets. Employees saw Copilot struggle, adapt, and shave a few steps off the grind they already hated dealing with. It wasn’t magic, but it was honest. Suddenly the room wasn’t bored or cynical—they were curious. That shift matters more than a slick “look what it *could* do” example.

Framing also matters. Trainers who opened with hype lines lost credibility the moment Copilot gave an awkward answer. The fix is dead-simple. Use this exact sentence at the start of every session: “This will get weird sometimes—here’s how you spot and fix it.” That one line resets expectations. Attendees stop waiting for perfection and start poking for usefulness. Messy output? Not failure—just part of the learning curve. And that mindset turns bad drafts into learning moments, not rejection points.

If you want this to stick, think of training less like a one-off workshop and more like rolling out cheat codes for everyday work. Long conference-room marathons kill momentum. Short and frequent sessions create it. A handful of concrete moves work best:

Use real files in training.

Set expectations up front.

Run 15–30 minute micro-sessions instead of full-day slogs.

Create a sandbox stocked with noisy, broken data so people can test without fear (see the generator sketch after this list).

Hand out a “how to fact-check Copilot” cheat sheet.
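
For the sandbox item above, you don’t need production data to practice on. A minimal sketch that fabricates a noisy CSV with deliberate blanks, duplicates, and inconsistent formats; every value is invented, which is the point.

import csv
import random

random.seed(42)  # reproducible mess

rows = []
for i in range(200):
    rows.append({
        "id": i,
        "supplier": random.choice(["Contoso", "contoso ", "CONTOSO", "Fabrikam", ""]),
        "amount": random.choice([str(random.randint(100, 9000)), "", "N/A"]),
        "date": random.choice(["2024-03-01", "03/01/2024", "1 Mar 24", ""]),
    })
rows += random.sample(rows, 20)  # sprinkle in duplicate rows, goblin-style
random.shuffle(rows)

with open("sandbox_messy.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "supplier", "amount", "date"])
    writer.writeheader()
    writer.writerows(rows)
print(f"Wrote {len(rows)} rows of safe, broken practice data")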

That way, training isn’t about pretending Copilot is a wizard. It’s about showing how to use it as a sidekick without making a fool of yourself in front of your boss. The checklist format also means people remember the points after they leave the room. Nobody’s quoting a three-hour slide deck later—but they’ll keep a one-page cheat sheet taped above their monitor.

What comes out of these changes is a shift in attitude. Instead of sitting through long demos, users start testing tasks that actually save them pain. Things like cutting email drafts in half, auto-summarizing project updates, or digging answers out of a spreadsheet tower. Every small win feels like proof that Copilot belongs in their workflow—not just in Microsoft marketing slides. Stack a few of those wins, and adoption stops being about IT nagging people. It becomes something users want to keep refining.

Experience shows this pattern is what drives adoption curves up. Teams who train with honest examples and short cycles walk away saying, “This thing actually helps.” Teams who stage marketing shows walk away complaining. It’s that simple. Curious users become exploring users. Exploring users become daily users. And daily users build the stories that spread faster than internal comms ever could.

Want the worksheets to run your own no-bull training? Subscribe to the TL;DR newsletter at m365.show or follow the M365.Show page for livestreams where MVPs unpack this deeper. It saves you guessing what works or re-inventing the wheel every project.

So that’s training without the eye rolls: real files, realistic framing, and repeatable, small sessions that show value early. If you nail that, Copilot feels like part of the team rather than another IT stunt. And when you frame training right, you set users up with confidence. But ignore it, and the rollout story changes fast—because nothing derails faster than flipping the switch without preparing the ground first.

What Happens When You Skip the Hard Part

Here’s where it gets ugly: when companies skip the hard part and just slam the Copilot switch. No cleanup, no prep, no roadmap—just licenses turned on across the tenant like it’s free donut Friday. By the time someone posts the celebratory Teams message, users are already hammering Copilot with everything from “summarize Q4 financials” to “tell me our vacation policy.” What comes back? Junk. Old junk. Misleading junk. Suddenly Copilot looks less like a helpful assistant and more like a prank bot IT slipped in for laughs.

Why does this happen? Because all the boring groundwork got skipped. No triage of ancient document libraries. No test queries to catch obvious errors. No micro-training to explain “Copilot drafts, you fact-check.” Instead, you’ve got SharePoint folders acting like a company time machine to 2007, stale Excel files surfacing as if they’re gospel, and Copilot happily serving them up as fresh insight. From the user’s seat it feels like dumpster diving in a tuxedo. From IT’s seat, it feels like open season on the help desk.

I saw one rollout go completely sideways when leadership insisted on launching directly to execs. First query out of the CFO’s mouth: “Summarize our Q4 financials.” Instead of the current numbers, Copilot grabbed an archive from three years ago, pre-merger, with a corporate structure that hadn’t existed since 2017. It looked authoritative, it was totally wrong, and it landed live in a boardroom. Try explaining that one.

That’s how trust dies. Fast. A single bad answer on a high-stakes question, and suddenly the whole rollout is tagged a gimmick. Adoption flatlines, Teams chats fill with memes instead of wins, and IT gets peppered with angry calls. If you’re in that spot, here’s the mitigation script you can drop to leadership without hand-waving: “We paused the rollout to fix the data sources causing unreliable answers; we’ll relaunch in phases with measurable wins.” That flips the narrative from “IT messed up” to “IT took control.”

Recovery isn’t pretty, but it’s possible. It comes down to a blunt four-step playbook:

Pause the broad rollout.

Triage and archive obvious stale sources (a sketch follows below).

Relaunch to a small pilot with clear expectations.

Communicate transparently and collect user feedback.

Those four lines are the difference between quietly fixing the mess and leaving a lasting black eye on Copilot’s reputation.
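
Step two, triage and archive, can start as something this blunt. A minimal sketch assuming a synced local copy of the store; it moves anything untouched for three years into an archive folder rather than deleting it, and the paths and cutoff are assumptions to adjust.

import shutil
from datetime import datetime, timedelta
from pathlib import Path

LIBRARY = Path("./shared_docs")          # hypothetical synced copy of the source
ARCHIVE = Path("./shared_docs_archive")  # parked, not deleted
CUTOFF = datetime.now() - timedelta(days=3 * 365)

ARCHIVE.mkdir(exist_ok=True)
moved = 0
for path in LIBRARY.rglob("*"):
    if not path.is_file():
        continue
    modified = datetime.fromtimestamp(path.stat().st_mtime)
    if modified < CUTOFF:
        # Containment first, judgment calls later
        shutil.move(str(path), str(ARCHIVE / path.name))
        moved += 1
print(f"Archived {moved} stale files; Copilot's lane just got cleaner")

A production pass would handle name collisions and keep folder structure, but containment, not perfection, is the goal here.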

Inside that playbook, IT ran a few core cleanup moves: clean critical data stores, label what’s active versus archive, limit Copilot scope to one controlled group, and then restart with a sandbox approach. The goal isn’t perfection; it’s containment. You don’t want Copilot crawling dusty archives and spitting them back like gospel—you want it working inside a cleaned-up lane where it can actually build trust.

The communication shift matters just as much. Smart teams didn’t overhype. They set clear expectations up front: “Copilot generates drafts; you fact-check.” “Here’s how to spot stale info.” “Here’s where to send bad outputs.” That kind of honesty landed harder than any executive cheerleading, because it gave users simple guardrails. Nobody expects a draft generator to be flawless. But they do expect it to be predictable, and that predictability rebuilds confidence.

Once those pilot cycles produced small but visible wins—a project lead cutting status update time by half, a manager finally seeing a 40-page planning doc reduced to something coherent—word began to spread. Not through newsletters or roadshows, but through hallway chatter and Teams threads. Instead of CIO slogans, adoption was fueled by coworkers saying, “It actually shaved hours off this thing I hate.” That’s way more powerful than any staged success story.

The takeaway: skipping the hard part on day one buys you a flashy launch and an instant crater. Doing the cleanup and rebooting buys you slower starts but lasting credibility. And credibility is the only thing that puts Copilot into daily workflow instead of the corporate toy drawer.

Because at the end of the day, it’s not the AI that decides whether this thing succeeds. It’s whether the humans on the ground trust it enough to keep using it. And that one factor is the real marker of success.

Conclusion

So here’s the wrap: the difference between a Copilot rollout that fizzles and one that sticks isn’t magic—it’s execution. Forget the marketing noise and remember the ten actions that actually work: scope your sources, clean the worst folder first, pilot small, pick champions, show visible wins, train with real data, set correct expectations, create safe sandboxes, measure and share results, and have a rollback plan.

Do those, and the rollout feels intentional instead of chaotic. Subscribe to the newsletter at m365.show and follow M365.Show on LinkedIn for MVP livestreams that go deeper. You can’t exterminate every data goblin, but you can cage the worst ones and let Copilot do real work. It all starts with a click on the subscribe button!
