Hello
Are there any ways to validate or optimise the output of the Fabric AI models (sentiment analysis, classification, etc.)? We have noticed that the models sometimes output erroneous data - e.g. labelling a text as 'positive' instead of 'negative' - and we would like to make sure that the data sent to users is as accurate as possible.
Are there any upcoming plans for this in the roadmap, or is it just expected behaviour that we have to take into consideration? If there are any workarounds we can try in the meantime, please let me know!
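One workaround while waiting on the roadmap: spot-check the AI output against a small hand-labelled sample to estimate its accuracy before the data reaches users. A minimal sketch in pandas - the column names ("text", "ai_label", "human_label") are illustrative, not anything Fabric produces by that name:

```python
import pandas as pd

# Hypothetical spot-check: compare AI-generated sentiment labels against a
# small hand-labelled sample to estimate accuracy and surface mismatches.
ai_output = pd.DataFrame({
    "text": ["Great product", "Terrible support", "Works as expected"],
    "ai_label": ["positive", "positive", "neutral"],
})
human_labels = pd.Series(["positive", "negative", "neutral"], name="human_label")

sample = ai_output.assign(human_label=human_labels)
accuracy = (sample["ai_label"] == sample["human_label"]).mean()
mismatches = sample[sample["ai_label"] != sample["human_label"]]

print(f"Spot-check accuracy: {accuracy:.0%}")
print(mismatches[["text", "ai_label", "human_label"]])
```

Mismatched rows (like the 'positive'/'negative' flip you describe) can then be routed to human review or held back from users.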
Thanks
"and we would like to make sure that the data sent to users is as accurate as possible."
Use the right tool for the task. A classical ML approach will yield much more reliable outcomes than a generative AI model, which is stochastic by design.
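To illustrate the point, here is a minimal sketch of that classical alternative using scikit-learn (this is not a Fabric API): a small supervised classifier trained on your own labelled data gives deterministic, auditable predictions, and its probabilities can be thresholded to route low-confidence cases to human review. The training texts are toy data.

```python
# Illustrative sketch (not a Fabric API): a supervised text classifier
# trained on hand-labelled examples. Unlike a stochastic generative model,
# the same input always yields the same label.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "love it, works perfectly",
    "fantastic quality, very happy",
    "awful experience, broke immediately",
    "terrible support, very disappointed",
]
train_labels = ["positive", "positive", "negative", "negative"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Deterministic label, plus a confidence score for thresholding.
label = model.predict(["very happy with the quality"])[0]
confidence = model.predict_proba(["very happy with the quality"]).max()
print(label, round(confidence, 2))
```

Whether this beats the built-in AI functions depends on having enough labelled data for your domain; with very little labelled data, the spot-check-and-review approach may be the more practical starting point.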