Hey @v-poulamim ,
The confusion usually happens because the Capacity Metrics app does not simply use a basic “total CU consumed divided by total CU available” formula over your selected time range. The Average Utilization % shown in the app is calculated over time intervals and then averaged, not calculated as one single aggregate ratio.
In Fabric / Power BI Premium capacity, utilization is measured in short time slices (for example, per minute). For each time slice, the system calculates how much of the base capacity CU (Capacity Units) was consumed compared to what was available in that slice. Importantly, this calculation is based only on the base capacity and does not include autoscale units.
Conceptually, the calculation works like this: for each time interval, Utilization % = (Consumed CUs during that interval ÷ Base capacity CUs available in that interval) × 100. Then, the “Average utilization %” shown in the app is the average of those interval-level percentages across the selected time window. It is not calculated as (Total CU consumed in the whole period ÷ (Capacity × Total duration)) × 100 in one single step.
That is why your compact formula can produce a different number. Your formula effectively treats the whole time range as one big block and divides total consumption by total theoretical capacity. The Capacity Metrics app instead averages utilization over many smaller time buckets. If there were spikes or idle periods, the two methods will not match exactly.
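To make the aggregation difference concrete, here is a small illustrative sketch (not the app's actual implementation, and the numbers are hypothetical). It compares the two methods over intervals where the base capacity changed mid-window, for example after a SKU resize; in that situation the average of per-interval percentages and the single aggregate ratio diverge.

```python
# Hypothetical per-interval data: capacity was resized partway through
# the window, so the base capacity differs between intervals.
consumed = [30, 180]    # CUs consumed in each interval
capacity = [60, 240]    # base capacity CUs available in each interval

# Method 1 (app-style): compute utilization % per interval, then average.
per_interval = [c / cap * 100 for c, cap in zip(consumed, capacity)]
avg_of_intervals = sum(per_interval) / len(per_interval)

# Method 2 (manual-style): one aggregate ratio over the whole window.
aggregate_ratio = sum(consumed) / sum(capacity) * 100

print(avg_of_intervals)  # 62.5  -> average of 50% and 75%
print(aggregate_ratio)   # 70.0  -> 210 / 300
```

Even when the capacity is constant, excluded operations and per-interval smoothing can produce similar gaps between the two numbers.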
Also, the app may exclude certain background/system operations from specific visual calculations depending on the metric page you are viewing, which can further create slight differences when compared to manual CU totals.
In short, the Average Utilization % in the Capacity Metrics app is a time-weighted average of per-interval utilization based on base capacity units only, not a single aggregated ratio over the entire selected duration. That difference in aggregation logic is the main reason your manual calculation does not align exactly with the app’s displayed value.
If this explanation helped, please mark it as the solution so others can find it easily.
If it helped, a quick Kudos is always appreciated; it highlights useful answers for the community.
Thanks for being part of the discussion!
Thank you so much for the detailed response. This really helps in understanding how Average Utilization % is calculated in the Metrics app. Really appreciate your help here!
Hi @v-poulamim
Unfortunately, working out the average utilization can be quite complex because interactive operations and background operations are measured differently. What I would recommend is digging into the Metrics app and using the time point details to understand what is happening on your capacity. Alternatively, if you have an example use case, we could work through it and try to find the answer for you.