frithjof_v
Continued Contributor

Fabric Capacity Metrics App

Hi,

 

I have some questions about the Fabric Capacity Metrics App:

 

  • Do I need to refresh the semantic model for the Fabric Capacity Metrics App?
    • Or do I just need to refresh the visuals in the Fabric Capacity Metrics App report?
  • I have a workspace with 5 semantic models. Each of the semantic models is refreshed on a schedule, 20 times each day.
    In the first couple of days, I could only find 1 of the semantic models in the Fabric Capacity Metrics App. It seems this was the 1 semantic model with the clearly highest CU usage. I am confused about why I could see only 1 of the 5 semantic models in the Fabric Capacity Metrics App.
    Is there something like a lower limit on how much CU (s) an item must use before the item will be included in the Fabric Capacity Metrics App?
  • Is the Fabric Capacity Metrics App semantic model a DirectQuery semantic model which is connected to a Kusto database as its data source?
  • Is the Fabric Capacity Metrics App using the credentials of the user who is the semantic model owner to determine which data to display in the report, or is it using the credentials of the logged-in report reader (end user)?
  • What is the latency for the Fabric Capacity Metrics App?
    E.g. after I start a refresh operation for an import mode semantic model in Fabric, how long will it take before the CU usage for this operation appears in the Kusto database (which is the DirectQuery data source of the Fabric Capacity Metrics App, if I understand correctly)? (See the polling sketch after this list.)
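For the latency question, I imagine one could poll the app's semantic model with a DAX query via the documented Power BI executeQueries REST endpoint, something like the sketch below. The endpoint and response shape are documented, but the table name in the DAX query, the item name, and the token handling are assumptions on my part; the installed app's model would need to be inspected for the real names.

```python
# Hypothetical sketch: poll the Capacity Metrics semantic model via the
# documented Power BI REST "executeQueries" endpoint to see when a refresh
# operation shows up. The DAX table name and item name below are
# assumptions; inspect the installed app's semantic model for the real ones.
import time
import requests

ACCESS_TOKEN = "<AAD token with Dataset.Read.All>"    # token acquisition not shown
METRICS_DATASET_ID = "<capacity-metrics-dataset-id>"  # from the app's workspace

DAX = """
EVALUATE
TOPN(10, 'Metrics By Item Operation Day', 'Metrics By Item Operation Day'[Date], DESC)
"""  # assumed table name -- adjust to the actual model

def query_metrics():
    resp = requests.post(
        f"https://api.powerbi.com/v1.0/myorg/datasets/{METRICS_DATASET_ID}/executeQueries",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"queries": [{"query": DAX}],
              "serializerSettings": {"includeNulls": True}},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["results"][0]["tables"][0].get("rows", [])

# Poll every 5 minutes; latency reportedly rarely goes under 5 minutes.
# "MyModel" is a placeholder for the refreshed item's name.
while not any("MyModel" in str(row) for row in query_metrics()):
    time.sleep(300)
print("Refresh operation is now visible in the metrics model.")
```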

 

I would like to use the Fabric Capacity Metrics App to compare the CU (s) usage of various possible Power Query (M) scripts for the same (import mode) semantic model.
So I created some different versions of the same semantic model, each with a different Power Query (M) script to load data into the semantic model, and now I want to look into how many CU (s) the refresh operation uses for each of the different versions.
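For reference, this is roughly how I kick off the refreshes so that each variant shows up as a separate operation in the Metrics App. It is a minimal sketch using the documented Power BI REST API refresh endpoint; the workspace and dataset IDs, the variant names, and the token acquisition are all placeholders.

```python
# A minimal sketch, assuming an already-acquired AAD token. It queues a
# refresh for each semantic-model variant via the documented Power BI REST
# refresh endpoint; IDs and variant names below are placeholders.
import requests

ACCESS_TOKEN = "<AAD token with Dataset.ReadWrite.All>"
WORKSPACE_ID = "<workspace-id>"
VARIANTS = {                      # hypothetical dataset IDs, one per M-script variant
    "baseline_M":      "<dataset-id-1>",
    "query_folding_M": "<dataset-id-2>",
    "buffered_M":      "<dataset-id-3>",
}

for name, dataset_id in VARIANTS.items():
    resp = requests.post(
        f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}"
        f"/datasets/{dataset_id}/refreshes",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"notifyOption": "NoNotification"},
        timeout=60,
    )
    resp.raise_for_status()       # 202 Accepted means the refresh was queued
    print(f"Queued refresh for variant: {name}")
```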

 

This also got me interested in trying to understand the Fabric Capacity Metrics App better.

 

Thank you! 😀

6 REPLIES
frithjof_v
Continued Contributor

Thank you @v-cboorla-msft !

 

I will look into Phil's tool. I have tried using SQL Server Profiler, and I liked it.
I haven't used Log Analytics yet, mainly because it isn't free... But I will consider using it.

 

Is it fair to say that Log Analytics is similar to SQL Server Profiler, but the major difference is that Log Analytics runs unattended + it preserves the log history of all semantic model trace events in the Fabric workspace?

 

SQL Server Profiler, on the other hand, needs human supervision (you need to point it at a semantic model and then click start trace / stop trace), and it doesn't keep the history (unless you export the trace file manually).
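If I understand the Log Analytics approach correctly, the unattended querying could look something like the sketch below. It assumes the azure-monitor-query package and that the workspace's Log Analytics integration is already enabled; the KQL table and column names follow the documented PowerBIDatasetsWorkspace schema, but they should be verified against the actual workspace.

```python
# A minimal sketch of the "unattended" side of that comparison, assuming the
# Log Analytics integration is enabled on the Fabric workspace. Uses the
# azure-monitor-query package to pull completed refresh trace events from
# the preserved log history -- no need to babysit a Profiler trace.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder

# Assumed table/column names from the documented Power BI Log Analytics
# schema; verify them in your own workspace.
KQL = """
PowerBIDatasetsWorkspace
| where OperationName == "CommandEnd" and EventText contains "refresh"
| project TimeGenerated, ArtifactName, CpuTimeMs, DurationMs
| order by TimeGenerated desc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, KQL, timespan=timedelta(days=1))

# Each row is one completed refresh command kept in the log history.
for row in response.tables[0].rows:
    print(row)
```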

 

 

My reason for using the Fabric Capacity Metrics App to monitor the efficiency of the semantic model refresh operations is that I am most concerned about optimizing Fabric capacity CU (s) usage (not necessarily refresh duration).

My primary aim is to minimize the CU (s) consumption, so that my Fabric capacity will not reach its capacity limit.

 

I cannot find Fabric CU (s) in SQL Server Profiler or Log Analytics, right?

Is there an attribute in SQL Server Profiler or Log Analytics which corresponds to CU (s)?
E.g. I am thinking of the CPU time. Can the CPU time be "translated" into Fabric CU (s)?


I would also appreciate it if you could elaborate on this: "If you are trying to eliminate as many variables as possible to compare multiple models, I wouldn't suggest using the capacity metrics app."
Are there (external) variables which affect the CU (s) usage of a semantic model refresh operation?
Will the CU (s) usage of refreshing a specific semantic model vary from one refresh to the next?
(Let's assume I am refreshing an import mode semantic model several times a day, and also that there haven't been any changes to the data in the data source, so each refresh will have to import exactly the same data.)

 

Thank you 😀

Hi @frithjof_v 

 

Apologies for the delay in response.

The internal team has responded as follows:

Linking CU to what you can get in Azure Monitor is difficult because of a) the use of various multipliers in metrics reporting and b) the fact that CUs are consumed by more than just dataset refreshes. The entry point for the details is Log Analytics integration with Power BI and Excel - Azure Monitor | Microsoft Learn.

 

I hope this information helps. 

 

Thank you.

Hi @frithjof_v 

 

We haven't heard from you since the last response and were just checking back to see if you have a resolution yet.
If you have found a resolution, please do share it with the community, as it can be helpful to others.
Otherwise, we will respond back with more details and try to help.

 

Thanks.

Hi @frithjof_v 


We haven't heard from you since the last response and were just checking back to see if you have a resolution yet. If you have found a resolution, please do share it with the community, as it can be helpful to others.
If you have any question relating to the current thread, please do let us know and we will try our best to help you.
If you have a question on a different issue, we request you to open a new thread.

 

Thanks.

v-cboorla-msft
Community Support

Hi @frithjof_v 

 

At this time, we are reaching out to the internal team to get some help on this.
We will update you once we hear back from them.
Appreciate your patience.

 

Thanks.

Hi @frithjof_v 

 

Got an update from our internal team and they replied as follows:

All CU consumption is captured. There is a grace period for overloads that you can read up on. Latency can fluctuate, but I've never seen it go under 5 minutes. The model is not yet officially open for extensibility.
If you are trying to eliminate as many variables as possible to compare multiple models, I wouldn't suggest using the capacity metrics app. I'd recommend installing Log Analytics (the Analysis Services diagnostic logs) or using Phil's tool. He published an article on how to do the same in real time.

Hope this helps. Please let me know if you have any further questions.

 

Thank you.
