Hi,
I have some questions about the Fabric Capacity Metrics App:
I would like to use the Fabric Capacity Metrics App to compare the CU (s) usage of different Power Query (M) scripts for the same (import mode) semantic model.
So I created several versions of the same semantic model, each loading data with a different Power Query (M) script, and now I want to see how many CU (s) the refresh operation consumes for each version.
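(For anyone comparing versions like this: the Fabric Capacity Metrics App reports CU (s), but refresh duration can also be compared programmatically via the Power BI REST API refresh-history endpoint, GET .../datasets/{datasetId}/refreshes. A minimal sketch, using made-up sample payloads rather than a live API call:

```python
from datetime import datetime

def refresh_durations(history):
    """Return completed-refresh durations (seconds) from a refresh-history payload."""
    durations = []
    for entry in history["value"]:
        if entry.get("status") != "Completed":
            continue  # skip failed or in-progress refreshes
        start = datetime.fromisoformat(entry["startTime"].replace("Z", "+00:00"))
        end = datetime.fromisoformat(entry["endTime"].replace("Z", "+00:00"))
        durations.append((end - start).total_seconds())
    return durations

# Made-up sample payloads for two versions of the same semantic model.
version_a = {"value": [
    {"status": "Completed", "startTime": "2024-05-01T06:00:00Z", "endTime": "2024-05-01T06:04:00Z"},
    {"status": "Completed", "startTime": "2024-05-01T12:00:00Z", "endTime": "2024-05-01T12:05:00Z"},
]}
version_b = {"value": [
    {"status": "Completed", "startTime": "2024-05-01T06:00:00Z", "endTime": "2024-05-01T06:02:30Z"},
]}

for name, hist in [("version A", version_a), ("version B", version_b)]:
    d = refresh_durations(hist)
    print(name, sum(d) / len(d))  # average refresh duration in seconds
```

Duration is not the same thing as CU (s), of course, which is exactly the question below.)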
This also got me interested in trying to understand the Fabric Capacity Metrics App better.
Thank you! 😀
Thank you @v-cboorla-msft !
I will look into Phil's tool. I have tried using SQL Server Profiler, and I liked it.
I haven't used Log Analytics yet, mainly because it isn't free... But I will consider using it.
Is it fair to say that Log Analytics is similar to SQL Server Profiler, with the major difference being that Log Analytics runs unattended and preserves the log history of all semantic model trace events in the Fabric workspace?
SQL Server Profiler, on the other hand, needs human supervision (you have to point it at a semantic model and then click start trace / stop trace), and it doesn't keep the history unless you export the trace file manually.
My reason for using the Fabric Capacity Metrics App to monitor the efficiency of semantic model refresh operations is that I am mainly concerned with optimizing Fabric capacity CU (s) usage, not necessarily refresh duration.
My primary aim is to minimize CU (s) consumption, so that my Fabric capacity will not reach its capacity limit.
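(As background on how refresh CU (s) relates to the capacity limit: Fabric smooths background operations, which include semantic model refreshes, over a 24-hour window, so a refresh's CU (s) cost is spread out rather than hitting the capacity all at once. A rough back-of-the-envelope sketch, with made-up numbers:

```python
SMOOTHING_WINDOW_S = 24 * 60 * 60  # background operations are smoothed over 24 hours

def smoothed_cu_rate(refresh_cu_s, refreshes_per_day):
    """CU(s) per second that daily refreshes add to the capacity after smoothing."""
    return refresh_cu_s * refreshes_per_day / SMOOTHING_WINDOW_S

def utilization_pct(refresh_cu_s, refreshes_per_day, capacity_cu):
    """Share of total capacity these refreshes occupy (e.g. an F64 has 64 CUs)."""
    return 100 * smoothed_cu_rate(refresh_cu_s, refreshes_per_day) / capacity_cu

# e.g. a refresh costing 10,000 CU(s), run 4 times a day on an F64:
print(round(utilization_pct(10_000, 4, 64), 2))  # about 0.72 (% of the capacity)
```

The exact overload and throttling behavior on top of this is documented separately; the point is just that total CU (s) per refresh, times refresh frequency, is what accumulates against the limit.)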
I cannot find Fabric CU (s) in SQL Server Profiler or Log Analytics, right?
Is there an attribute in SQL Server Profiler or Log Analytics which corresponds to CU (s)?
E.g. I am thinking of the CPU time. Can the CPU time be "translated" into Fabric CU (s)?
I would also appreciate if you would elaborate on this "If you are trying to eliminate as many variables as possible to compare multiple models, I wouldn't suggest using the capacity metrics app."
Are there (external) variables which affect the CU (s) usage of a semantic model refresh operation?
Will the CU (s) usage of refreshing a specific semantic model vary between each time I refresh the semantic model?
(Let's assume I am refreshing an import mode semantic model several times a day, and also let's assume there hasn't been any changes to the data in the data source, so each refresh will have to import exactly the same data)
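(If the CU (s) readings do vary between runs, one way to make the comparison fair is to refresh each version several times and compare the means, checking that run-to-run variation is small relative to the difference between versions. A small sketch with made-up CU (s) readings:

```python
from statistics import mean, stdev

def summarize(cu_samples):
    """Mean CU(s) and coefficient of variation (%) over repeated refreshes."""
    m = mean(cu_samples)
    cv = 100 * stdev(cu_samples) / m  # relative spread between runs
    return m, cv

# Made-up CU(s) readings from five refreshes of the same model and data:
samples = [1210.0, 1185.0, 1240.0, 1198.0, 1222.0]
m, cv = summarize(samples)
print(f"mean={m:.1f} CU(s), cv={cv:.1f}%")
```

If the coefficient of variation is, say, a couple of percent while two model versions differ by 30%, the ranking is trustworthy despite the noise.)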
Thank you 😀
Hi @frithjof_v
Apologies for the delay in response.
The internal team has responded as follows:
Linking CU to what you can get in Azure Monitor is difficult because (a) various multipliers are used in metrics reporting and (b) CUs are consumed by more than just dataset refreshes. The entry point for the details is Log Analytics integration with Power BI and Excel - Azure Monitor | Microsoft Learn.
I hope this information helps.
Thank you.
Hi @frithjof_v
We haven't heard from you since the last response and wanted to check whether you have found a resolution yet.
If you have, please share it with the community, as it can be helpful to others.
Otherwise, we will respond with more details and try to help.
Thanks.
Hi @frithjof_v
We haven't heard from you since the last response and wanted to check whether you have found a resolution yet. If you have, please share it with the community, as it can be helpful to others.
If you have any questions relating to the current thread, please let us know and we will try our best to help you.
If you have a question about a different issue, we request that you open a new thread.
Thanks.
Hi @frithjof_v
At this time, we are reaching out to the internal team to get some help on this.
We will update you once we hear back from them.
Appreciate your patience.
Thanks.
Hi @frithjof_v
We got an update from our internal team, and they replied as follows:
All CU consumption is captured. There is a grace period for overloads that you can read up on. Latency can fluctuate, but I've never seen it go under 5 minutes. The model is not yet officially open for extensibility.
If you are trying to eliminate as many variables as possible to compare multiple models, I wouldn't suggest using the capacity metrics app. I'd recommend installing Log Analytics (the Analysis Services diagnostic logs) or using Phil's tool. He published an article on how to do the same in real time.
Hope this helps. Please let me know if you have any further questions.
Thank you.