Hey everyone,
I'm currently using the Fabric platform and have implemented Power BI embedding in our production environment. We're using the F32 SKU capacity. So far, our users aren't experiencing any issues. However, when I reviewed the capacity metrics, I noticed something unusual.
Even with just 2–3 users, the background processing CU usage remains low, but the interaction usage is quite high — frequently exceeding 100%. I'm trying to understand how this works. Is it normal for interaction usage to go beyond 100%?
I’m aware that our semantic model is complex and requires more processing during user interactions. We're actively working on optimizing it. Still, I'm curious: is it expected behavior for interaction CU to spike over 100%, even during single-user testing?
Any insights or guidance would be appreciated.
Thanks!
Hi @Milan1756
Thank you for being part of the Microsoft Fabric Community.
Yes, it’s normal for interactive CU usage to spike over 100%, even with just 1–3 users — especially if your semantic model is complex.
In Fabric (for example, on an F32 SKU), the CU metrics measure consumption against the capacity's total compute allocation. Interactive spikes above 100% mean that, for a given timepoint, smoothed interactive consumption briefly exceeded the capacity's provisioned CUs, which can happen with expensive DAX, high-cardinality columns, or many visuals rendering concurrently.
If users aren't experiencing delays and you're not seeing throttling or autoscale events, you're still within healthy limits.
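To make that concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes an F32 provides 32 CUs (the SKU number equals the CU count) and uses an illustrative 5-minute smoothing window for interactive operations; the real metrics app applies its own smoothing rules, so treat the numbers as illustrative only.

```python
# Minimal sketch: how one heavy interaction can push smoothed
# interactive utilization above 100% on an F32 capacity.
# Assumptions: F32 = 32 CUs; a 5-minute smoothing window for
# interactive operations (illustrative, not the exact metrics-app logic).

CAPACITY_CUS = 32            # F32 SKU
WINDOW_SECONDS = 5 * 60      # assumed interactive smoothing window

# CU-seconds available across the smoothing window.
window_budget = CAPACITY_CUS * WINDOW_SECONDS   # 9,600 CU-seconds

# One expensive report interaction (heavy DAX over a large model)
# costing 12,000 CU-seconds of compute.
interaction_cost = 12_000

utilization = interaction_cost / window_budget * 100
print(f"Smoothed interactive utilization: {utilization:.0f}%")   # 125%
```

Even a single user can produce such a spike; it only becomes a problem if consumption stays above the budget long enough to trigger throttling.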
Since you're optimizing, consider:
Performance Analyzer to find visual and query bottlenecks (see the sketch after this list)
VertiPaq Analyzer to inspect model size/cardinality
Reducing visuals and slicers per page
Keep monitoring; this behavior is expected unless it becomes sustained or starts to impact performance.
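As a starting point for the Performance Analyzer step, the sketch below ranks the slowest recorded events from an exported capture. PowerBIPerformanceData.json is Desktop's default export file name, but the field names used here ("events", "name", "start", "end") are assumptions about the export schema, so verify them against your own file before relying on the output.

```python
# Rough triage of a Performance Analyzer export from Power BI Desktop.
# Field names ("events", "name", "start", "end") are assumed; adjust
# them to match the actual schema of your exported JSON.
import json
from datetime import datetime

def parse_ts(ts: str) -> datetime:
    # Tolerate a trailing "Z" on ISO-8601 timestamps.
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

with open("PowerBIPerformanceData.json", encoding="utf-8") as f:
    data = json.load(f)

durations = []
for event in data.get("events", []):
    start, end = event.get("start"), event.get("end")
    if start and end:
        elapsed = (parse_ts(end) - parse_ts(start)).total_seconds()
        durations.append((elapsed, event.get("name", "<unnamed>")))

# Slowest events first: these are the visuals/queries worth optimizing.
for elapsed, name in sorted(durations, reverse=True)[:10]:
    print(f"{elapsed:8.3f}s  {name}")
```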
If the above information is helpful, please give us Kudos and mark the response as the Accepted Solution.
Best Regards,
Community Support Team _ C Srikanth.
Hey Team,
Thank you so much for the information—it was really helpful!
I have a follow-up question: the Fabric capacity metrics seem to show only the consumption from dashboards. What about dataflows and notebooks running on Fabric? How can I determine the cost or capacity usage of each individual dataflow, or of several dataflows combined?
Looking forward to your guidance.
Best regards,
Milan