Jayden1029
New Member

Questions on Graph Query Reliability, Data Types, and Reasoning Models in Fabric IQ

First, thank you to the Fabric team for continuing to invest in Fabric IQ, Ontology, and agent‑based experiences. 🙂

 

As a user actively experimenting with Fabric IQ Ontology and graph‑backed reasoning in real analytical workflows, I wanted to share a few observations and ask about roadmap considerations that are becoming gating factors for production adoption.
These questions are not about new features, but about reliability, correctness, and maturity of the existing core capabilities.

 

 

1. Graph Query Stability as a Prerequisite for Adoption
 
Today, the largest challenge we encounter is **graph query and NL‑to‑graph stability**.

 

In practice, we frequently see:
- Internal errors without actionable diagnostics 
- Failures that correlate with broader scopes (e.g., longer time ranges or larger entity sets)
- Non‑deterministic behavior where semantically similar questions sometimes succeed and sometimes fail
- Memory or execution failures when traversing multiple relationships or combining traversal with aggregation

 

These issues make it difficult to:
- Trust Ontology‑backed answers in executive‑facing scenarios
- Distinguish user query issues from platform limitations
- Build consistent guardrails in agent instructions

 

Before more advanced capabilities (rules, actions, multi‑agent orchestration) can be relied upon, **deterministic execution and debuggable failure modes for graph queries feel essential**.
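To make the idea of "debuggable failure modes" concrete, here is a minimal sketch of the kind of failure taxonomy a platform could surface. This is purely illustrative (none of these names are Fabric IQ APIs); the stages mirror the ones discussed in this thread: translation, planning, execution, memory, and scope.

```python
from enum import Enum
from dataclasses import dataclass

class FailureStage(Enum):
    """Hypothetical stages at which an NL-to-graph pipeline can fail."""
    TRANSLATION = "nl_to_graph_translation"  # question could not be mapped to a graph query
    PLANNING = "query_planning"              # query exceeded planner limits (depth, fan-out)
    EXECUTION = "graph_execution"            # engine error while traversing the graph
    RESOURCE = "memory_or_timeout"           # scope too large for available resources
    SCOPE = "scope_safeguard"                # rejected by a scope-aware safeguard

@dataclass
class GraphQueryError(Exception):
    stage: FailureStage
    detail: str
    user_actionable: bool  # can the user rephrase/narrow, or is this a platform limit?

# Example: a scope safeguard rejecting a very wide time range,
# telling the caller the failure is something the user can fix.
err = GraphQueryError(
    stage=FailureStage.SCOPE,
    detail="Time range spans 10 years; narrow to <= 24 months.",
    user_actionable=True,
)
```

Even this small amount of structure would let agent instructions branch on `user_actionable` instead of treating every failure as an opaque internal error.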

 

Question:
Are there near‑term efforts specifically focused on stabilizing graph query execution and NL‑to‑graph translation, such as improved error transparency, query planning limits, or scope‑aware safeguards?

 

 

2. Typed Data Support Beyond Strings (Dates, Numeric Values)

 

Another practical limitation today is that Ontology properties are effectively treated as strings.
This creates friction for common analytical and operational questions, especially those involving:
- Dates and time ranges 
- Numeric comparisons, calculations, and thresholds
- Chronological reasoning (e.g., “before”, “after”, “within N months”)

 

As a result, users must often offload meaningful logic back to semantic models or external queries, reducing the expressive power of Ontology‑based reasoning.
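The string-typing pitfall is easy to demonstrate in isolation: lexicographic comparison gives semantically wrong answers for numbers and for non-ISO date formats, which is exactly the class of error typed properties would eliminate.

```python
from datetime import date

# Numeric values stored as strings compare lexicographically, not numerically.
print("9" > "10")            # True  -- '9' sorts after '1', semantically wrong
print(int("9") > int("10"))  # False -- correct once typed as integers

# Dates in a non-ISO format also compare incorrectly as strings.
a, b = "02/01/2025", "12/15/2024"  # Feb 2025 vs. Dec 2024 (MM/DD/YYYY)
print(a > b)                                   # False -- string compare gets it backwards
print(date(2025, 2, 1) > date(2024, 12, 15))  # True  -- correct with typed dates
```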

 

Question: 
Is there a roadmap for supporting **typed properties** (e.g., date, numeric, boolean) within Ontology, so comparisons and reasoning can be performed with semantic correctness rather than string interpretation?

 

Even limited first‑class support for dates and numeric values would significantly improve reliability and reduce ambiguity.

 

 

3. Reasoning Models and Multi‑Step / Multi‑Query Planning

 

Current agent behavior appears optimized around executing a single query per user question. While this works well for simple cases, many real analytical questions require:

 

- Decomposition of a single user question into multiple sub‑questions 
- Sequential or parallel execution of multiple queries 
- Synthesis of results into a coherent answer 

 

Examples include:
- “Which products show increasing failure trends, and which components are most associated with those failures?”
- “What changed year‑over‑year, and how does that relate to specific causal parts?”

 

These are not multiple user questions, but **one analytical intent requiring a plan**.
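A minimal sketch of the decompose-execute-synthesize loop described above. All function names here are hypothetical stand-ins, not Fabric IQ APIs; the point is only the control flow a planning-capable agent would need.

```python
from typing import Callable

def answer(question: str,
           decompose: Callable[[str], list[str]],
           run_query: Callable[[str], dict],
           synthesize: Callable[[str, list[dict]], str]) -> str:
    """Plan-then-execute loop: split one analytical intent into
    sub-questions, run each, and merge results into one answer."""
    sub_questions = decompose(question)              # e.g., via an LLM planner
    results = [run_query(q) for q in sub_questions]  # sequential here; could run in parallel
    return synthesize(question, results)

# Toy stand-ins to show the control flow only.
demo = answer(
    "Which products show increasing failure trends, and which components drive them?",
    decompose=lambda q: ["failure trend by product", "top components per failing product"],
    run_query=lambda q: {"query": q, "rows": []},
    synthesize=lambda q, rs: f"Combined {len(rs)} sub-results for: {q}",
)
print(demo)  # "Combined 2 sub-results for: ..."
```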

 

Question:
Are there plans to incorporate stronger reasoning or planning models that can:
- Decompose user intent into multiple steps 
- Execute multiple queries when appropriate 
- Combine Ontology, semantic model, and governed data access into a single synthesized response?

 

This capability seems increasingly important as Fabric IQ moves from Q&A toward genuine analytical assistance.

 

 

Closing Thoughts

 

Fabric IQ is clearly evolving toward a well‑governed, production‑ready agent platform. From a user perspective, the most impactful improvements now appear to be:

 

- Stability and observability over graph queries 
- Semantic correctness through typed data 
- Reasoning depth through multi‑step planning 

 

Addressing these would unlock far more confidence in deploying Ontology‑backed agents to broader audiences.

 

Thank you for the ongoing work, and I’d appreciate any insight into how these considerations fit into the upcoming roadmap.
3 Replies
lbendlin
Super User

> **deterministic execution and debuggable failure modes for graph queries feel essential**

AI tools and agents are by their very nature probabilistic. If you want deterministic behavior you will want to consider a dedicated app.

I agree that AI systems are inherently probabilistic, and I’m not expecting an agent to always return the exact same answer phrasing or reasoning path.

What I’m referring to by “deterministic execution and debuggable failure modes” is slightly different.
Today, one of the most frequent issues we see is repeated NL‑to‑Ontology / NL‑to‑Graph execution failures (e.g., generic internal errors) where:

- The platform does not surface what failed (translation, planning, execution, memory, scope)
- There is no actionable signal to distinguish user query ambiguity from system limitation
- As a result, it’s impossible to debug, guardrail, or systematically improve agent behavior

Probabilistic reasoning is fine.
Opaque, non‑explainable execution failures are the blocker.

Even probabilistic systems still benefit from deterministic failure semantics (clear error categories, scope limits, planning feedback), especially if Ontology and graph reasoning are expected to support production scenarios.

Clarifying that distinction would go a long way toward making these capabilities operationally trustworthy.

Great points. What I see in our reality is that this collides with both privacy and storage considerations. While many agents now expose their reasoning steps to individual users, there is substantial resistance to a generic, system-wide auditing tool for queries, success/failure indicators, and other telemetry that would be useful for improving their performance.
