Go Beyond the Demos—Make Copilot Do What You Need in Business Central

Ever wish Business Central actually did the boring work for you? Like reconciling payments or drafting product text, instead of burying you in extra clicks and late-night Excel misery? That’s the promise of Copilot. And before you ask—yes, it’s built into Business Central online at no extra cost. Just don’t expect it to run on your on-prem install.

Here’s the catch: most admins never look past the canned demos. Today we’ll strip it down and show you how to make Copilot work for *your* business. By the end, you’ll walk away with a survival checklist you can pressure-test in a sandbox.

And it all starts with the hidden menu Microsoft barely talks about.

The Secret Menu of Copilot

Copilot’s real power isn’t in the flashy buttons you see on a customer card. The trick is what Microsoft left sitting underneath. You’ll find those extension points inside the `System.AI` namespace — look for the Copilot Capability codeunit and related enums. That’s where the actual hooks for developers live. These aren’t random artifacts in the codebase. They’re built so you can define and register your own AI-powered features instead of waiting for Microsoft to trickle out a new demo every quarter.

The menu most people interact with is just the surface. Point it at invoice data, get a neat summary, maybe draft a product description — fine. But those are demo scenarios to show “look, it works!” In reality, Business Central’s guts contain objects like Copilot Capability and Copilot Availability. In plain English: a Capability is the skill set you’re creating for Copilot. Availability tells the system when and where that skill should show up for end users. Together, that’s not just a menu of canned AI widgets — it’s a framework for making Copilot specific to your company.

Here’s the kicker: most admins assume Copilot is fully locked down, like a shiny black box. They use what’s there, shrug, and move on. They never go looking for the extra controls. But at the developer level, you’ve got levers exposed. And yes, there’s a way for admins to actually see the results of what developers register. Head into the “Copilot & agent capabilities” page inside Business Central. Every capability you register shows up there. Admins can toggle them off one by one if something misbehaves. That connection — devs define it in AL, admins manage it in the UI — is the bridge that makes this more than just theory.

Think of it less like a locked Apple device and more like a console with hidden debug commands. If all you ever do is click the main Copilot button, you’re leaving horsepower on the table. It’s like driving a Tesla and only ever inching forward in traffic. The “Ludicrous Mode” switch exists, but until you flip it, you’re just idling. Same thing here: the namespace objects are already in your tenant, but if you don’t know where to look, you’ll never use them.

So what kind of horsepower are we talking about? The AI module inside Business Central gives you text completions, chat-like completions for workflow scenarios, and embeddings for semantic search. That means you can build a capability that, for example, drafts purchase orders based on your company’s patterns instead of Microsoft’s assumptions. It also means you can create assistants that talk in your company’s voice, not some sterilized HR memo. Quick note before anyone gets ideas: the preview “Chat with Copilot” feature you might have seen in Business Central isn’t extensible through this module. That chat is on its own path. What you *do* extend happens through the Capability model described here.

Microsoft did a poor job of surfacing this in their marketing. Yes, it’s in the docs, but buried in a dry technical section few admins scroll through. But once you know these objects exist, the picture changes. Every finance quirk, every weird custom field, every messy approval workflow — all of it can be addressed with your own Copilot capability. Instead of waiting for Redmond to toss something down from on high, you can tailor the assistant directly to your environment.

Of course, nothing this powerful comes without warning labels. These are sharp tools. Registering capabilities wrong can create conflicts, especially when Microsoft pushes updates. Do it badly, and suddenly the sandbox breaks, or worse, you block users in production. That’s why the Copilot & agent capabilities page matters: not only does it give admins visibility, it gives you a quick kill switch if your custom brain starts misbehaving.

So the payoff here is simple: yes, there’s a secret menu under Copilot, yes, it’s in every tenant already, and yes, it turns Copilot from a demo toy into something useful. But knowing it exists is only step one. The real trick is registering those capabilities safely so you add firepower without burning your environment down — and that’s where we go next.

Registering Without Burning Down Your Tenant

Registering a Copilot capability isn’t some vague wizard trick. In plain AL terms, it means you create an `enumextension` for the `Copilot Capability` enum and then use an `Install` or `Upgrade` codeunit that calls `CopilotCapability.RegisterCapability`. That’s the handshake where you tell Business Central: “Here’s a new AI feature, treat it as part of the system.” Without that call, your extension might compile, but Copilot won’t even know the feature exists. Think of it as submitting HR paperwork: no record in the org chart, no desk, no email, no employee.
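To make that concrete, here’s a minimal sketch of the handshake. The object IDs, the capability name, and the URL are all placeholders you’d swap for your own; the shape follows the documented pattern of an `enumextension` plus an install codeunit that registers the capability only if it isn’t already there.

```al
// Hypothetical IDs and names - pick values from your own range.
enumextension 50100 "Sales Copilot Capability" extends "Copilot Capability"
{
    value(50100; "Sales Order Suggestions")
    {
        Caption = 'Sales Order Suggestions';
    }
}

codeunit 50100 "Sales Capability Install"
{
    Subtype = Install;

    trigger OnInstallAppPerDatabase()
    var
        CopilotCapability: Codeunit "Copilot Capability";
        LearnMoreUrlTxt: Label 'https://example.com/sales-copilot-docs', Locked = true;
    begin
        // Guard with IsCapabilityRegistered so repeated installs don't
        // trip over an existing registration.
        if not CopilotCapability.IsCapabilityRegistered(Enum::"Copilot Capability"::"Sales Order Suggestions") then
            CopilotCapability.RegisterCapability(
                Enum::"Copilot Capability"::"Sales Order Suggestions",
                Enum::"Copilot Availability"::Preview,
                LearnMoreUrlTxt);
    end;
}
```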

Once you’ve got the basic definition in place, the next detail is scope and naming. Every capability you register lives in the same ecosystem Microsoft updates constantly. If you recycle a generic name or reserve a sloppy ID, you’re basically begging for a collision. Say you call it “Sales Helper” and tag it with a common enum value—then Microsoft ships a future update with a built-in capability in the same space. Suddenly the system doesn’t know which one to show, and your code is arguing with Redmond at runtime. The mitigation is boring but essential: pick unique names, assign your own enum values that don’t overlap with the common ranges, and version the whole extension deliberately. Add version numbers so you can track whether sandbox is on 1.2 while production’s still sitting at 1.0. And if something changes with the platform, your upgrade codeunits are the tool to carry the capability forward safely. Without those, you’re duct-taping new wiring into an old breaker box and hoping nothing bursts into flames.
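What does that forward path look like? Here’s a hedged sketch of an upgrade codeunit for the same hypothetical capability, using the module’s `ModifyCapability` call to carry it forward, in this case promoting it from Preview to Generally Available:

```al
codeunit 50101 "Sales Capability Upgrade"
{
    Subtype = Upgrade;

    trigger OnUpgradePerDatabase()
    var
        CopilotCapability: Codeunit "Copilot Capability";
        LearnMoreUrlTxt: Label 'https://example.com/sales-copilot-docs', Locked = true;
    begin
        // Fresh database: register. Existing install: carry the capability
        // forward, here by promoting its availability from Preview to GA.
        if not CopilotCapability.IsCapabilityRegistered(Enum::"Copilot Capability"::"Sales Order Suggestions") then
            CopilotCapability.RegisterCapability(
                Enum::"Copilot Capability"::"Sales Order Suggestions",
                Enum::"Copilot Availability"::"Generally Available",
                LearnMoreUrlTxt)
        else
            CopilotCapability.ModifyCapability(
                Enum::"Copilot Capability"::"Sales Order Suggestions",
                Enum::"Copilot Availability"::"Generally Available",
                LearnMoreUrlTxt);
    end;
}
```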

Now here’s where too many developers get casual. They throw the extension straight into production because “it’s just a capability.” That’s when your helpdesk lights up. The right path is simple: sandbox-only first. Break it, refactor, test it again, and only when it behaves do you move to prod. That controlled rollout reduces surprises. And this isn’t just about compiling code—it’s about governance. The Copilot & agent capabilities page in Business Central doubles as your sanity check. If your capability doesn’t appear there after registration, you didn’t register it properly. That page reflects the system’s truth. Only after you’ve validated it there should you hand it off for admin review. And speaking of admins, flipping Copilot capabilities on or off, as well as configuring data movement, is something only admins with SUPER permissions or a Business Central admin role can do. Plan for that governance step ahead of time.

A quick pro tip: when you register, pass the optional learn-more URL (the docs samples hand it to `RegisterCapability` as a `LearnMoreUrlTxt` label). That link shows up right there in the Copilot & agent capabilities admin page. It’s not just a nice touch—it’s documentation baked in. Instead of making admins chase down a wiki link or bother you in Teams, they can click straight into the description of what the capability does and how to use it. Think of it as writing instructions on the actual light switch so the next person doesn’t flip the wrong one.

Here’s a best-practice checklist that trims down the risks: 1) run everything in a sandbox before production, 2) pick unique enum values and avoid common ranges, 3) always use Install/Upgrade codeunits for clean paths forward, 4) attach that LearnMoreUrl so admins aren’t guessing later. Follow those four, and you’ll keep your tenant stable. Ignore them, and you’ll be restoring databases at three in the morning.

A parking-space metaphor fits here. Registering a capability is like officially reserving a spot for your new car. Fill out the right paperwork, it’s yours and everyone’s happy. Skip the process or park in the red zone, and now you’re blocking the fire lane and everyone’s angry. Registration is about carving out safe space for your feature so Business Central and Microsoft’s updates can coexist with it longer term.

Bottom line: treat registration like production code, because that’s exactly what it is. Test in sandboxes, keep your scope unique, track your versions, and make your upgrade codeunits airtight. If something weird happens, the Copilot & agent capabilities page plus your LearnMoreUrl is how admins will find, understand, and if needed, shut down the feature. Done right, registration sets you up for stability. Done sloppy, it sets you up for chaos.

Once you’ve got that locked down, you’ll notice the capability itself is functional but generic. It answers, but without character. That’s like hiring someone brilliant who shows up mute at meetings. The next step is teaching your Copilot how to act—because if you don’t, it’ll sound less like a trusted assistant and more like a teenager rolling their eyes at your questions.

Metaprompts: Teaching Your AI Manners

That leads us straight into metaprompts—the part where you stop leaving Copilot adrift and start giving it rules of engagement. In Microsoft’s own developer docs, a metaprompt is the “primary system message” that defines the model’s profile, output format, and guardrails. Plain English: it’s the AI’s job description. A one-off user prompt is like a single task request, but the metaprompt is standing instructions baked into every response. Without it, Copilot doesn’t know whether it’s supposed to sound like a bookkeeper or a copywriter—it just guesses.

Think of it this way: you wouldn’t hire someone without giving them a role description. The metaprompt is exactly that for the AI. It tells Copilot when to stay formal, how to format results, which tone to use, and what it absolutely must avoid. That makes the difference between answers that fit your workflow versus replies that read like warmed-over Clippy with opinions.

Admins often confuse metaprompts with prompts, which is why they get frustrated. If you just throw “Give me ledger entries” at Copilot without context, it’ll invent its own style—maybe long paragraphs, maybe fields that don’t exist. Wrap that same request in a metaprompt like, “You are a finance assistant. Always output ledger entries as bullet lists using Business Central field names” (example only), and suddenly the answers are clean, structured, and audit-friendly. The guardrail is locked in before the request even runs.

Microsoft built this distinction on purpose. Prompts are temporary requests, while metaprompts stay persistent across the whole chat context. Developers set them through the `AOAI Chat Messages` codeunit by calling `SetPrimarySystemMessage`. Storing the metaprompt itself in `IsolatedStorage` means your extension can retrieve and update it safely, rather than hardcoding text into the AL file. If you’ve skimmed the docs, you’ll see examples using `IsolatedStorage.Set` and `IsolatedStorage.Get`—that’s the right model for production. It keeps your metaprompt available but locked away behind the extension’s security boundary.
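Here’s what that looks like in practice. A minimal sketch, with a hypothetical storage key and codeunit name; the pattern is the documented one: keep the metaprompt in `IsolatedStorage` at module scope, pull it out at runtime, and set it as the primary system message before the user’s request goes in.

```al
codeunit 50102 "Sales Metaprompt Mgt"
{
    procedure SaveMetaprompt(Metaprompt: Text)
    begin
        // DataScope::Module keeps the value behind this extension's security boundary.
        IsolatedStorage.Set('SalesMetaprompt', Metaprompt, DataScope::Module);
    end;

    procedure GetMetaprompt() Metaprompt: Text
    begin
        if not IsolatedStorage.Get('SalesMetaprompt', DataScope::Module, Metaprompt) then
            Error('Metaprompt not configured for this extension.');
    end;

    procedure PrepareChat(var AOAIChatMessages: Codeunit "AOAI Chat Messages"; UserRequest: Text)
    begin
        // Standing orders first: the primary system message persists across the session.
        AOAIChatMessages.SetPrimarySystemMessage(GetMetaprompt());
        // The one-off user prompt rides on top of those rules.
        AOAIChatMessages.AddUserMessage(UserRequest);
    end;
}
```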

Now, into the nuts and bolts. When you send user input to the Azure OpenAI service through Business Central’s AI module, you’re dealing with token budgets and temperature settings whether you like it or not. Token budgets matter: your metaprompt plus the user’s prompt all count toward the model’s token limit, so don’t cram in a novel-length system message. Keep it precise. And temperature controls randomness—set a low temperature (near zero) for finance scenarios where you need deterministic, field-accurate outputs. If you’re building a marketing capability and want more colorful drafts, raise the temperature and let it improvise. This isn’t guesswork; it’s in the build guides.
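In AL terms, both knobs live on the `AOAI Chat Completion Params` codeunit. A short sketch, with illustrative numbers rather than recommended ones:

```al
codeunit 50103 "AOAI Tuning Sample"
{
    procedure RunCompletion(var AzureOpenAI: Codeunit "Azure OpenAI"; var AOAIChatMessages: Codeunit "AOAI Chat Messages")
    var
        AOAIChatCompletionParams: Codeunit "AOAI Chat Completion Params";
        AOAIOperationResponse: Codeunit "AOAI Operation Response";
    begin
        // Near-zero temperature: deterministic, field-accurate output for finance scenarios.
        AOAIChatCompletionParams.SetTemperature(0);
        // Cap the reply; metaprompt, user prompt, and reply all share one token budget.
        AOAIChatCompletionParams.SetMaxTokens(2000);
        AzureOpenAI.GenerateChatCompletion(AOAIChatMessages, AOAIChatCompletionParams, AOAIOperationResponse);
    end;
}
```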

Another compliance reminder: auditors won’t care about your creativity. They care that the numbers match. Your metaprompt should include explicit formatting rules, response structures, and restrictions on irrelevant commentary. That way, Copilot outputs don’t just look nice to users—they survive downstream systems and audits. Treat the metaprompt as part documentation, part risk control.

The personality split is where this gets fun. Marketing Copilot’s metaprompt might tell it to “use persuasive language and brainstorm product slogans,” while Finance Copilot’s says “output structured journal entries with no adjectives.” Same underlying AI, completely different results—because you defined the personality ahead of time. That’s the difference between practical automation and gimmicky outputs that nobody can actually use.

And even though this section is about manners, you can’t ignore the hard plumbing. Metaprompts only work if the connection to Azure is set up correctly. The AI doesn’t process them locally in Business Central—it passes everything through the Azure OpenAI backend. Mess up that pipeline, and it won’t matter how well you wrote the rules; you’ll still get failures and maybe worse, misfired keys.

Bottom line: prompts are requests, metaprompts are standing orders. Get them right, store them securely, check your token math, and set temperature to match the job. That’s how you turn Copilot from a random chatterbox into a role-specific assistant you can actually trust. Do it sloppy, and you’ll be spending nights editing nonsense out of reports.

But even the best-worded metaprompt won’t save you if your credentials are wide open. And that’s where the real danger lies—not in tone or format, but in how you store and protect the keys that let Copilot talk to Azure in the first place.

Guarding the Keys: Azure Security Done Right

Guarding the keys isn’t glamorous, but it’s where you either keep your tenant safe or end up filing incident reports. When Copilot calls Azure OpenAI, it needs three things every single time: the resource endpoint (that’s your Azure OpenAI resource URL), the deployment name, and the API key. Together, those three are the handshake that proves your tenant is allowed to talk to the model. Leave any one of them exposed, and you’re not doing AI—you’re doing breach simulation.

Let’s be blunt: never, under any circumstance, hardcode those values into AL. If you bake an API key into your code or shove it in GitHub “for convenience,” you’ve basically invited credential scanners to treat your repo as a buffet. Developers may shrug it off as “just testing,” but repos are constantly crawled for exposed secrets. That’s how real customer data ends up on paste sites. You can’t fix that with a patch Tuesday.

The safe path has already been spelled out by Microsoft. For development and test work, IsolatedStorage in AL is your go‑to. Picture it as a locker inside your extension—keys go in, and no other extension can peek at them. It keeps things contained, lightweight, and simple. That’s perfect for sandboxes and dev builds. But don’t fool yourself: IsolatedStorage is not production‑grade. Its local scope makes it handy for iteration, but it isn’t designed to be your vault once real users and real data are on the line.

When you ship to production, the only responsible move is App Key Vault (or Azure Key Vault if you’re deploying broadly). Think of this as renting a bank safe instead of hiding cash under your desk drawer. Key Vault keeps your API keys in a secure, access‑controlled container, with proper logging and rotation policies. You integrate your extension with it, call what you need at runtime, and never expose the raw key in code. That’s the pattern auditors actually like, because it proves the secrets were never floating around in plain text. It’s boring governance, but boring is exactly what you want when compliance teams show up.

Here’s one detail straight from the docs that makes a difference: in AL, you can store the API key as `SecretText`. That way, even if you step through debugging, the key doesn’t pop up in clear text for anyone to screenshot. “SecretText” is exactly what it sounds like—it hides sensitive values at runtime. If you take nothing else away from this section, remember that option exists. It closes off one of the dumbest leak vectors: accidental developer exposure in the debugger.
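As a sketch, assuming a recent runtime where `IsolatedStorage` has the `SecretText` overloads and a hypothetical key name, retrieval looks like this:

```al
codeunit 50104 "AOAI Key Store Sample"
{
    procedure GetApiKey() ApiKey: SecretText
    begin
        // Comes back as SecretText, so the value stays masked even in the debugger.
        if not IsolatedStorage.Get('AOAIApiKey', DataScope::Module, ApiKey) then
            Error('Azure OpenAI API key not configured.');
    end;
}
```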

Now let’s spell out the do’s and don’ts like real admins talk:

  • Don’t embed API keys in AL code or paste them into repos. That’s credential harvesting bait.

  • Do use IsolatedStorage for dev and testing, so you keep your team productive without sending secrets through plain text.

  • Don’t drag that dev pattern into production—it’s not a vault, it’s a filing cabinet.

  • Do wire your production build to Key Vault. It’s the only place those keys belong long‑term.

Another sanity point: user permissions inside Business Central don’t magically lock down API keys. Copilot inherits access control on business data, sure—but keys are about authenticating the whole service. If one gets loose, outsiders can impersonate the whole integration. So stop pretending permission scoping saves you there. It doesn’t.

And if you want to validate your setup, the official route is the `SetAuthorization` sequence in the Azure OpenAI codeunit. That’s where you feed in the resource URL, deployment, and API key to prove everything lines up. Do it wrong, and Copilot won’t even load. Do it half‑right, and you’ll get silent errors that users find before you do. The fix is simple: follow the chain in order, test it cleanly in dev, then hand over to pre‑prod, then production. Three environments, three sets of keys, one pipeline that actually works under load.
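Wired together, the chain is short. The endpoint and deployment below are placeholders, and `GetApiKey` is the hypothetical SecretText helper from earlier in this section:

```al
codeunit 50105 "AOAI Auth Sample"
{
    procedure Authorize(var AzureOpenAI: Codeunit "Azure OpenAI")
    var
        AOAIKeyStoreSample: Codeunit "AOAI Key Store Sample";
    begin
        AzureOpenAI.SetAuthorization(
            Enum::"AOAI Model Type"::"Chat Completions",
            'https://my-resource.openai.azure.com/', // resource endpoint (placeholder)
            'my-gpt4o-deployment',                   // deployment name (placeholder)
            AOAIKeyStoreSample.GetApiKey());         // API key as SecretText
    end;
}
```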

So here’s your checklist to stay out of trouble: IsolatedStorage in dev, Key Vault in production, wrap your API key in SecretText, never embed values in code, and always run the `SetAuthorization` chain end‑to‑end before you call the job done. That pattern keeps your Copilot integration alive, secure, and compliant without midnight fire drills.

With the security piece handled, the next challenge is less about where to hide the keys and more about how to make all the parts connect without constant error messages. Because having a capability, a metaprompt, and a secure key doesn’t help much if they sit in silos and refuse to play nice. That’s where wiring the workflow comes in.

From Blueprint to Reality: Wiring the Workflow

So now we’re talking about what turns loose parts into an actual working Copilot flow. You’ve registered a capability, scoped it out, secured the keys, and written the metaprompt—but unless you line them up in the right order, all you get is a shiny button that either errors out or quietly does nothing until users start filing tickets.

Here’s the sequence boiled down to five spoken steps. Step one: Capability. Step two: Availability. Step three: Authorization. Step four: Metaprompt. Step five: Pilot. Remember those five, and you’ll always have the wiring diagram in your head. Capability gives the skill. Availability sets the audience. Authorization wakes the brain. Metaprompt gives manners. Pilot proves it actually works. That’s the whole loop.

Step one, Capability. In AL, that means you extend the `Copilot Capability` enum with an `enumextension`, then register it in an `Install` or `Upgrade` codeunit by calling `CopilotCapability.RegisterCapability`. That’s the official “job posting” that says, here is a new assistant skill, give it a slot in the system. Without that, Business Central never even knows your feature is supposed to exist.

Step two, Availability. Once the skill is defined, you scope how it surfaces. You tag the capability with a `Copilot Availability` value when you register it, Preview or Generally Available, and that status shows on the `Copilot & agent capabilities` admin page, where admins can switch the feature on or off. Dev thinks of this as metadata, but for admins it’s the toggle switch for rollout. Use it. If you skip this step, you’ll risk exposing half-baked features to every user before they’re stable.

Step three, Authorization. Now you connect the Azure OpenAI backend through the `Azure OpenAI` codeunit. You need the classic triple: Resource Endpoint (your Azure OpenAI URL), Deployment Name, and the API Key. Call `AzureOpenAI.SetAuthorization` with those values. If Capability was the job ad, and Availability the contract, Authorization is the ID badge to get your assistant in the building. Without it, the feature just hangs at the glass door.

Step four, Metaprompt. Here you inject the standing instructions that shape tone, structure, and formatting. Pull the metaprompt you stored safely in `IsolatedStorage` (or, in production, referenced from Key Vault) and apply it with `AOAIChatMessages.SetPrimarySystemMessage`. That’s where you say: “You’re a quote assistant” or “You’re a finance helper,” and lock down what output should look like. Without it, the assistant just shrugs and makes it up.

Step five, Pilot. Roll it to a small group first. Give them the capability, capture feedback, and watch telemetry. The AI module logs usage so you can see whether it’s producing clean outputs or spamming junk. And before you expand, you need a rollback plan. The good news: the `Copilot & agent capabilities` page doubles as your ripcord. Admins can deactivate individual features instantly if something starts misbehaving. That same page also shows the “Allow data movement across geographies” toggle if your Azure setup crosses regions. If you see that surface, pay attention—someone in compliance will.

Let’s nail this sequence with a field example. One team built a Sales Order Suggestion Copilot. They started with an `enumextension` for Copilot Capability and tied it into an Install codeunit. They registered it as Preview, so admins could light it up for a small test group on the `Copilot & agent capabilities` page before anyone else saw it. Then they called `AzureOpenAI.SetAuthorization` with the endpoint, deployment, and key, so the model would actually respond. Next they wrote a metaprompt into IsolatedStorage and set it with `AOAIChatMessages.SetPrimarySystemMessage`: “You’re a quote assistant. Suggest bundles and format them as item lines, no extra chatter.” In pilot, small groups tested until the outputs were predictable. Only then did they flip Availability to Generally Available, making it live for all sales users. The result: a Copilot that recommended bundles instead of typing fluff.
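Stitched into AL, that team’s flow would look something like the sketch below. It’s illustrative, not their literal code: the IDs and names are the same hypothetical ones used above, and the key-store and metaprompt codeunits are the helpers from the earlier sketches.

```al
codeunit 50120 "Sales Bundle Suggestion"
{
    procedure SuggestBundles(UserInput: Text) Result: Text
    var
        AzureOpenAI: Codeunit "Azure OpenAI";
        AOAIChatMessages: Codeunit "AOAI Chat Messages";
        AOAIChatCompletionParams: Codeunit "AOAI Chat Completion Params";
        AOAIOperationResponse: Codeunit "AOAI Operation Response";
        AOAIKeyStoreSample: Codeunit "AOAI Key Store Sample";
        SalesMetapromptMgt: Codeunit "Sales Metaprompt Mgt";
    begin
        // 1. Capability: tie the call to the registered skill so the admin page governs it.
        AzureOpenAI.SetCopilotCapability(Enum::"Copilot Capability"::"Sales Order Suggestions");
        // 2. Authorization: endpoint and deployment are placeholders; the key arrives as SecretText.
        AzureOpenAI.SetAuthorization(Enum::"AOAI Model Type"::"Chat Completions",
            'https://my-resource.openai.azure.com/', 'my-gpt4o-deployment', AOAIKeyStoreSample.GetApiKey());
        // 3. Metaprompt first, then the user's request; low temperature for predictable item lines.
        AOAIChatMessages.SetPrimarySystemMessage(SalesMetapromptMgt.GetMetaprompt());
        AOAIChatMessages.AddUserMessage(UserInput);
        AOAIChatCompletionParams.SetTemperature(0);
        // 4. Generate, then verify the outcome before trusting the text.
        AzureOpenAI.GenerateChatCompletion(AOAIChatMessages, AOAIChatCompletionParams, AOAIOperationResponse);
        if AOAIOperationResponse.IsSuccess() then
            Result := AOAIChatMessages.GetLastMessage()
        else
            Error(AOAIOperationResponse.GetError());
    end;
}
```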

The governance layer here isn’t optional. If something goes crooked, admins can flip the toggle off before users call in. That breaker-switch mindset is what keeps custom Copilots productive instead of disruptive. Apply it like you would with any extension: staged rollouts, limited scope, measurable testing. AI doesn’t get a free pass.

Run each part in isolation when you test. Don’t wire the whole chain and hope. Validate Capability registration, check that Availability surfaces properly, confirm Authorization with `SetAuthorization`, then confirm the metaprompt sticks in responses. Only once each piece works do you let end users touch it. That’s the blueprint-to-reality bridge.

Put simply: Capability defines the skill, Availability scopes the audience, Authorization opens the door, Metaprompt sets the rules, and Pilot proves the system won’t embarrass you. Get those right, and you’re ahead of the official feature rollouts.

And that brings us back to the bigger point—these steps are what let you shape Copilot around your business, instead of marching along with whatever prepackaged demo Microsoft shows on stage.

Conclusion

So here’s the wrap-up. Three things to keep in your head:

One, extend Copilot through `System.AI` by registering capabilities.

Two, secure the Azure connection—use IsolatedStorage while you’re testing but move those keys to Key Vault in production.

Three, pilot and govern features from the Copilot & agent capabilities page so admins always keep the kill switch in hand.

One quick reminder for the governance folks: Copilot inherits user permissions, so it never reads what the user can’t access. And Microsoft isn’t training models on your tenant’s data unless you explicitly allow it.

Next step—try one capability in your sandbox this week and see it behave correctly in that checklist order. Then subscribe to the newsletter at m365.show for survival guide updates, and follow the M365.Show LinkedIn page to catch livestreams with MVPs who’ve already broken this stuff (and fixed it).
