Nagarani
New Member

Fabric Capacity Consumption

We have one Fabric dashboard with the stats below:

Semantic Model :

Data Sources used in Model : SQL Server Endpoint (Fabric warehouse) and SharePoint Files
Semantic Model Structure : Snowflake Schema
Connectivity Mode : Import
Total tables including calculated tables created in PBI Desktop : 17 (Fact : 1 (~95M rows); Dimensions : 8)
Relationships : 15 (M2O : 13; M2M : 2)
Transformations performed are standard Power Query transformations, which include renaming, filtering and merging tables

Dashboard :

Six main report pages:
Performance Overview : 4 visuals ; 4 slicers
Focus Areas Overview : 4 visuals ; 7 slicers
Customer Tracker : 7 visuals ; 5 slicers
Customer Tracker Deep Dive : 19 visuals ; 9 slicers
Customer Tracker Loyalty Programs : 9 visuals ; 9 slicers
Increase Market Share : 4 visuals ; 4 slicers

RLS : Dynamic RLS is implemented with simple DAX logic like mail = USERPRINCIPALNAME() on the RLS table, and the RLS table joins to the fact table through a many-to-many relationship.
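For reference, a minimal sketch of the dynamic RLS rule described above (table and column names are illustrative, not taken from the actual model):

```dax
-- Role filter defined on the RLS table: keeps only the row(s)
-- whose mail column matches the signed-in user's UPN.
-- The many-to-many relationship to the fact table then propagates
-- this filter to the 95M-row fact.
[mail] = USERPRINCIPALNAME()
```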

Capacity info : F128 

 

Issue : Single-user report performance is good; all visuals render within 2 seconds on every page. But when stress testing the dashboard by giving access to 130 users and asking them to log in and test within a given time window, the report slows down and capacity utilization climbs to 158%.

 

Questions:

1. What are the possible reasons for this capacity issue?

2. What is the best way to stress test this dashboard?

3. What benchmarks can we set for a dashboard like this, with 97M records in Import mode?

4. What are the recommendations, from both the developer side and the platform side, to optimize?

 

Appreciate any quick response on this issue, as we are stuck in UAT and need to roll out this report to a large number of audience groups.

 

Thanks in Advance

 



2 ACCEPTED SOLUTIONS
svenchio
Super User

Hi @Cookistador , I'm not a super expert on Power BI report optimization; I've seen very knowledgeable people around here and I'm confident they will advise on that, and I fully agree with the overview by @Cookistador

 

I'm going to give a few pointers on "elaborate on the below point how we can identify which operation is occupying maximum capacity", as I think that with your complex, high-concurrency environment you need a very detailed and strong monitoring framework... and yes, the obvious option would be using your monitoring workspace (read more at https://learn.microsoft.com/en-us/fabric/admin/monitoring-workspace), but I think you should invest the time and implement Fabric Unified Admin Monitoring (FUAM)! It is, as of today, the best monitoring framework for Fabric!

 

I know it seems you have bigger fish to fry (the performance issue you're facing), but in order to do a correct diagnosis, data about what is running on your workspaces, both background and interactive activities (kudos @Cookistador ), is critical. So I strongly suggest you implement FUAM... it is time & effort worth investing 😉  Read more about it here 👉 https://github.com/GT-Analytics/fuam-basic

 

Best of luck, and I hope you find this information useful; if so, a kudos would be nice... cheers


v-hashadapu
Community Support

Hi @Nagarani , Thank you for reaching out to the Microsoft Community Forum.

 

The combination of dynamic RLS + many-to-many security + a 95M-row fact table means every user generates a different DAX query plan. Power BI can’t reuse cache between users, so the engine must re-scan large parts of the fact table for every visual, for every user. On the deep-dive page alone, 19 visuals × 130 users become thousands of heavy queries hitting the capacity at the same time, which is why an F128 spikes to 158% even though a single user is fast.

 

You can see this directly in Fabric Capacity Metrics -> Interactive -> Query details. You’ll find a large number of ExecuteQueries with high CPU and very low cache reuse, especially when users open the deep-dive page or change slicers. Performance Analyzer or DAX Studio will show which visuals are expensive for one user, but Fabric metrics is what proves the concurrency + RLS problem.

 

Focus on the model, not just the capacity. Replace M2M RLS with a 1-to-many security bridge, pre-filter users to their allowed dimension keys and avoid bi-directional security paths. Pre-aggregate the 95M-row fact to the grain your visuals actually use and reduce the number of visuals and slicers on the heavy pages. Once cache can be shared again, the same F128 will handle far more users; without that, even larger capacities will keep hitting the same wall.
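A minimal sketch of the suggested one-to-many security bridge, under the assumption of hypothetical tables UserSecurity (one row per UserEmail/CustomerKey pair) and a Customer dimension; adapt the names to your model:

```dax
-- RLS rule defined on the Customer dimension (illustrative names).
-- UserSecurity maps each user to the dimension keys they may see;
-- Customer then filters the fact through the existing 1-to-many path,
-- so no many-to-many or bidirectional security relationship is needed.
Customer[CustomerKey]
    IN CALCULATETABLE (
        VALUES ( UserSecurity[CustomerKey] ),
        UserSecurity[UserEmail] = USERPRINCIPALNAME ()
    )
```

Because the security filter lands on a regular 1-to-many relationship, the engine can share its caches across users with the same key set, which is exactly what the many-to-many security path prevents.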

 


8 REPLIES
v-hashadapu
Community Support

Hi @Nagarani , Hope you're doing okay! May we know if it worked for you, or are you still experiencing difficulties? Let us know — your feedback can really help others in the same situation.

v-sgandrathi
Community Support

Hi @Nagarani,

 

Just checking in! We noticed you haven’t responded yet, and we want to make sure your question gets fully resolved.

Let us know if you need any additional support.

Thanks again for contributing to the Microsoft Fabric Community!

KevinChant
Super User

Have you checked with performance Analyzer and DAX Studio yet?

Cookistador
Super User

Hello @Nagarani 

 

If your organization has Power BI Pro licenses (included with M365 E5), you can consider moving the workspace back to a Pro workspace. Since your semantic model uses Import mode, you don't need Fabric capacity, unless your users don't have an E5 license and rely on the Fabric capacity to view reports.

 

In Fabric, there are two categories of activities:

- Background activities: refresh activity, pipeline execution, ...

- Interactive activities: using Copilot, consulting a Power BI report

 

Background activities are smoothed over 24 hours, but interactive activities are smoothed over 5 minutes; this is why you are seeing this peak during your stress tests.

 

My first recommendation is to identify the actual cost of the report:

Open the report

Refresh and check the Fabric Capacity Metrics app

Apply a few filters, navigate through several pages and identify which operations consume the most capacity

 

Based on your report structure, the Customer Tracker Deep Dive page is the most likely source of heavy load due to its high number of visuals and slicers.

Thanks for the quick response and valuable inputs!!

Can you please elaborate on the point below: how can we identify which operation is occupying maximum capacity? Can we do it using the Fabric Capacity Metrics app, or from Performance Analyzer by checking the max query time taken?

"Apply a few filters, navigate through several pages and identify which operations consume the most capacity"

Yes, you can see that in the Fabric Capacity Metrics app: just make a few modifications, then drill down on the visuals to get the details for the 30-second window.
With this approach you can identify which operation is heavy. It is pretty unusual to kill an F128 with only a report, so some operation must be very heavy somewhere.
