Great take here. Do you think the real shift ahead is about people trusting AI more, or relying on it less?
Great question — and I think it’s less about “trusting AI more” or “relying on it less” and more about learning to work with it differently.
With Agentic AI, the real shift isn't blind trust or cautious distance; it's calibrated reliance. Teams will lean on AI agents heavily for execution (coding, testing, monitoring) but will also build in the right checks, rules, and human oversight to keep outcomes aligned with goals. In other words, it's not a choice between trust and reliance; it's about shaping responsible reliance.
At M365, we believe in using AI every day in our own work. We're not just observers of the change; we actively show how we use AI in practice: m365.show/about. What we see is that once people experience AI as a dependable partner rather than a black box, confidence grows naturally.
So the real shift ahead? Not whether people trust AI more or rely on it less, but whether organizations learn to design the right balance of autonomy, oversight, and accountability into their workflows. That's what unlocks both the productivity and the trust.