Hello,
I've noticed the following inconsistency in the CPU consumption of my dataflows:
In the Health metrics, the average consumption for the last 7 days is very low:
Meanwhile, in the Capacity Metrics for the same capacity, the workload is very high; the average should clearly be more than 7%:
The average duration of a dataflow refresh is 2 minutes, and there are on average 2 refreshes per dataflow.
Do you know the reason for this difference? What may be causing those spikes?
Hi @Anonymous ,
CPU usage is more likely related to your custom merge steps; you can try removing the merge steps to confirm whether the issue disappears. For reference: Creating and using dataflows in Power BI.
To improve performance, if you have a Premium license, you can follow this blog post to set the dataflow container size: https://blog.crossjoin.co.uk/2019/04/21/power-bi-dataflow-container-size/ . If you have a Pro license, you could try the following workarounds:
There are several reasons and scenarios that can cause the difference; you may refer to this article:
Monitor capacities in the Admin portal
Best Regards,
Amy
Community Support Team _ Amy
If this post helps, please consider Accepting it as the solution to help other members find it more quickly.
Hello,
Thanks for your ideas.
Please keep in mind that the dataflows execute in 2 minutes. Additionally, they are very small and already well-optimized.
All of them run on a dedicated 12 v-core capacity.
The question is: why is there a difference between the Health metrics and the Capacity Metrics app? What causes the inconsistency, and which measure should be trusted?
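One plausible explanation worth checking is the difference in averaging windows: a 7-day average dilutes short refresh spikes almost to zero, while the Capacity Metrics app reports utilization over short intervals, where the same spikes remain visible. Here is a rough back-of-envelope sketch of that effect; the refresh frequency (2 refreshes per day) and the assumption that each refresh saturates the CPU are hypothetical illustration values, not figures confirmed in the thread:

```python
# Back-of-envelope: why a 7-day average CPU% can look tiny even when
# short-interval utilization shows large spikes.
# Hypothetical assumptions: 2 refreshes per day, 2 minutes each,
# and each refresh runs at 100% CPU while active.
refreshes_per_day = 2
minutes_per_refresh = 2
busy_minutes_per_day = refreshes_per_day * minutes_per_refresh  # 4 busy minutes

# Averaged over a full day (and hence over 7 days, if the pattern repeats),
# the busy time is a tiny fraction of the 1440 minutes in a day.
avg_cpu_percent = busy_minutes_per_day / (24 * 60) * 100
print(f"Long-window average CPU: {avg_cpu_percent:.2f}%")  # well under 1%

# A metrics tool that samples a short interval containing a refresh,
# by contrast, can report utilization near 100% for that interval.
```

If this is what is happening, neither measure is wrong: they summarize the same workload over very different time windows, so the short-interval view is the one to watch for throttling risk, while the long average reflects overall headroom.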