fabricuser123_
New Member

Validation/optimisation options for AI features

Hello 

 

Are there any ways to validate or optimise the output of the Fabric AI models (sentiment analysis, classification, etc.)? We have noticed that the models sometimes output erroneous data - e.g. labelling a text as 'positive' instead of 'negative' - and we would like to make sure that the data sent to users is as accurate as possible.

 

Are there any upcoming plans for this in the roadmap, or is it just expected behaviour that we have to take into consideration? If there are any workarounds we can try in the meantime, please let me know!

 

Thanks

3 REPLIES
arabalca
Resolver I

Hi @fabricuser123_ ,

 

First thing to assume: AI Functions results will never be 100% deterministic. Microsoft's own documentation includes the warning "# This code uses AI. Always review output for mistakes" in every code example - that's not decorative, it's a real acknowledgment of expected model behavior.

That said, there are two approaches depending on what you need:

Scenario A — Validate before trusting results

Microsoft has published evaluation notebooks that measure output quality using LLM-as-a-Judge: a larger model acts as evaluator and computes accuracy, precision, recall and F1 metrics. The workflow is: run the function with an executor model, evaluate with a judge model, identify which predictions need review, and refine labels or configuration. ( https://blog.fabric.microsoft.com/en-us/blog/unlock-insights-from-images-and-pdfs-with-multimodal-su... )

Use this before going to production to establish a quality baseline on your own dataset.
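As a rough illustration of the metric step only (this is not the actual evaluation-notebook code - function and variable names here are made up), the idea is to treat the judge model's labels as ground truth and compute accuracy, precision, recall, and F1 against the executor model's labels:

```python
# Hypothetical sketch: compare executor-model labels against judge-model labels.
# In the real workflow both label lists would come from AI Function runs;
# here they are hard-coded for illustration.

def evaluate_labels(executor_labels, judge_labels, positive_class="positive"):
    """Per-class metrics, treating judge labels as ground truth."""
    pairs = list(zip(executor_labels, judge_labels))
    tp = sum(1 for e, j in pairs if e == positive_class and j == positive_class)
    fp = sum(1 for e, j in pairs if e == positive_class and j != positive_class)
    fn = sum(1 for e, j in pairs if e != positive_class and j == positive_class)

    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    accuracy = sum(e == j for e, j in pairs) / len(pairs)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

executor = ["positive", "negative", "positive", "negative", "positive"]
judge    = ["positive", "negative", "negative", "negative", "positive"]
metrics = evaluate_labels(executor, judge)
```

Rows where the two models disagree are exactly the ones worth sending to manual review or relabeling.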

Scenario B — Control quality directly in production

A more robust pattern is adding a Data Quality layer after the AI Function:

  • Layer 1: the AI Function generates the label → sentiment_ai column
  • Layer 2: a DQ rule flags uncertain cases → needs_review = true when the model returns mixed, the text is very short, or contains negations/irony
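The layer-2 rule could look roughly like this. This is a hedged sketch, not Fabric-specific API: the column names (sentiment_ai, needs_review) follow the layers above, while the length threshold and negation-word list are assumptions you would tune on your own data:

```python
# Illustrative DQ rule: flag rows whose AI-generated sentiment label
# looks uncertain and should be reviewed by a human.

NEGATION_MARKERS = {"not", "never", "no", "hardly", "barely"}  # assumption: tune per language/domain

def needs_review(text: str, sentiment_ai: str, min_length: int = 15) -> bool:
    words = text.lower().split()
    return (
        sentiment_ai == "mixed"                       # the model itself is unsure
        or len(text) < min_length                     # very short texts are ambiguous
        or any(w in NEGATION_MARKERS for w in words)  # negations often flip sentiment
    )

rows = [
    {"text": "Great product, works perfectly", "sentiment_ai": "positive"},
    {"text": "Not bad at all", "sentiment_ai": "negative"},
    {"text": "ok", "sentiment_ai": "positive"},
]
for row in rows:
    row["needs_review"] = needs_review(row["text"], row["sentiment_ai"])
```

In a Lakehouse this would run as a cheap column computation after the AI Function, so only the flagged subset goes to review.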

There is a documented pattern where, if a second judge model disagrees with the assigned label, the pipeline conditionally triggers a relabeling step. This loop is orchestrated using Fabric Pipelines, which support conditional and iterative control flow. (https://learn.microsoft.com/en-us/fabric/data-science/tutorial-text-classification)

The human factor doesn't disappear — it gets focused on flagged cases, not 100% of the dataset. That's what makes the process sustainable at scale.

 

If this response was helpful, please give it a like and mark it as a solution — it encourages me to keep contributing and helps other users find answers faster 🙌

v-veshwara-msft
Community Support

Hi @fabricuser123_ ,

Thanks for the question.

To add to @lbendlin's response, Fabric currently does not provide built-in mechanisms to explicitly validate or fine-tune the output of these AI features. Any accuracy checks or corrections would need to be implemented at the solution level, such as applying post-processing rules, handling known edge cases, or introducing review mechanisms for critical scenarios.
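A minimal example of such a solution-level post-processing rule might look like the sketch below. The valid label set and the fallback policy are assumptions for your own pipeline, not part of any Fabric API:

```python
# Hedged sketch: normalize raw model output to a known label set and
# route anything unexpected to a review queue instead of passing it on.

VALID_LABELS = {"positive", "negative", "neutral", "mixed"}  # assumption: your expected label set

def postprocess(raw_label):
    """Normalize a raw model label; return (label, is_valid)."""
    if raw_label is None:
        return ("unknown", False)
    label = raw_label.strip().lower()
    if label in VALID_LABELS:
        return (label, True)
    return ("unknown", False)  # unexpected output -> send to review

result_ok = postprocess(" Positive ")   # normalized, accepted
result_bad = postprocess("POS")         # unexpected, flagged for review
```

This catches malformed or out-of-vocabulary outputs, though it cannot detect a plausible-but-wrong label (e.g. 'positive' instead of 'negative'); that still requires a review or judge-model step.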

 

Also, there is no publicly documented roadmap at this time specifically covering enhancements for validation or optimization of these features, so it is recommended to design downstream logic with this behavior in mind and monitor the Fabric roadmap for updates.

 

Hope this helps. Please reach out if you need further assistance.
Thank you.

lbendlin
Super User

"...and we would like to make sure that the data sent to users is as accurate as possible."

Use the right tool for the task. An ML approach will yield much more reliable outcomes than an AI model (which is stochastic by design).
