**deterministic execution and debuggable failure modes for graph queries feel essential**
AI tools and agents are by their very nature probabilistic. If you want deterministic behavior, you will want to consider a dedicated app.
I agree that AI systems are inherently probabilistic, and I’m not expecting an agent to always return the exact same answer phrasing or reasoning path.
What I’m referring to by “deterministic execution and debuggable failure modes” is slightly different.
Today, one of the most frequent issues we see is repeated NL‑to‑Ontology / NL‑to‑Graph execution failures (e.g., generic internal errors) that give no indication of what failed or why. In short:

- Probabilistic reasoning is fine.
- Opaque, non‑explainable execution failures are the blocker.
Even probabilistic systems still benefit from deterministic failure semantics (clear error categories, scope limits, planning feedback), especially if Ontology and graph reasoning are expected to support production scenarios.
Clarifying that distinction would go a long way toward making these capabilities operationally trustworthy.
Great points. What I see in our reality is that this collides with both privacy and storage considerations. While many agents now expose their reasoning steps to individual users, there is substantial resistance to a generic, system-wide auditing tool for queries, success/failure indicators, and other telemetry that would be useful for improving their performance.