Hello everyone,
I’d like to get your feedback and best practices on optimizing Fabric capacity (CU) consumption in a large-scale self-service analytics context.
- 4 semantic models implemented by the IT team in a self-service approach
- Models with large data volumes and complex queries
- Hybrid mode: Import + Direct Lake → very limited CU performance gains observed with Direct Lake
- Business users rely heavily on Fabric shortcuts in their own workspaces to access data and build reports
- Strong company strategy to empower business users and enable autonomy
- Current capacity: F64, frequently saturated
  - In practice, the capacity is often saturated by only a few users (heavy report usage, complex queries, refresh or exploration patterns)
- There is no native way to cap or limit capacity usage per user
- Power BI Pro licenses for all users are not an option
- Moving to F128 is being considered, but it may also quickly become saturated
According to App Metrics:
- Significant consumption peaks
- 2 semantic models consuming most of the CU
- Multiple reports connected to the same models
- Around 500–600 reports created by users on top of these semantic models

Limited visibility and governance on:
- Which reports are actually used
- Visual design quality
- Filtering and query patterns
While classic optimizations exist (reducing visuals, restrictive filters, DAX best practices), enforcing them at enterprise scale in a self-service context is challenging
👉 What are your recommended best practices to optimize Fabric CU consumption in this kind of scenario?
In particular:
Effective ways to identify and control capacity-heavy users or reports, given the lack of per-user throttling
Thanks in advance for your insights and experience sharing 🙏
All real-world feedback is highly appreciated.
Hi @AntoineW
Thank you for reaching out to the Microsoft Fabric Forum Community.
If you look at Microsoft’s guidance for Fabric and Power BI, the first step isn’t to jump from F64 to F128. The real impact usually comes from cleaning up the foundation before scaling anything. Start by opening the Fabric Capacity Metrics app and checking what’s actually consuming your CUs. In most cases it’s not everything; it’s a couple of semantic models and a handful of reports driving the majority of the load.
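To make that triage concrete, here is a minimal Python sketch of the "Pareto check" on a CU-usage export. The rows, column names, and 80% cutoff are all illustrative assumptions, not the Capacity Metrics app's actual schema; the point is simply to show how few items usually account for most of the load.

```python
from collections import defaultdict

# Hypothetical rows standing in for a CU-usage export
# (item names and cu_seconds values are made up for illustration).
usage = [
    {"item": "Sales model",   "cu_seconds": 48_000},
    {"item": "Finance model", "cu_seconds": 31_000},
    {"item": "HR model",      "cu_seconds": 4_000},
    {"item": "Ops report",    "cu_seconds": 2_500},
    {"item": "Misc reports",  "cu_seconds": 1_500},
]

totals = defaultdict(float)
for row in usage:
    totals[row["item"]] += row["cu_seconds"]

grand_total = sum(totals.values())
ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Walk the ranking until 80% of consumption is covered:
# that short list is where optimization effort pays off first.
cumulative = 0.0
heavy_items = []
for item, cu in ranked:
    cumulative += cu / grand_total
    heavy_items.append(item)
    if cumulative >= 0.80:
        break

print(heavy_items)  # → ['Sales model', 'Finance model']
```

With these sample numbers, two of five items cover over 90% of consumption, which mirrors the pattern described above.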
Once you know where the pressure is coming from, focus on the model design. Make sure you’re using a proper star schema, remove unused columns, reduce high cardinality fields where possible, avoid bi-directional relationships, and simplify heavy DAX patterns. Even with Direct Lake, complex queries and inefficient models will still burn through capacity.
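As a starting point for the cardinality review, here is a small sketch that flags columns whose distinct-value count is high relative to the row count. The sample rows and the 50% threshold are assumptions for illustration; in practice you would run this kind of check against an export of the model's tables.

```python
# Sample rows standing in for a model table export.
rows = [
    {
        "OrderId": i,                          # unique per row: high cardinality
        "Country": ["FR", "DE", "US"][i % 3],  # few distinct values: cheap to compress
        "Timestamp": f"2024-01-01T00:00:{i:02d}",  # second-level precision: high cardinality
    }
    for i in range(60)
]

def high_cardinality_columns(rows, threshold=0.5):
    """Return columns whose distinct count exceeds threshold * row count."""
    if not rows:
        return []
    n = len(rows)
    flagged = []
    for col in rows[0]:
        distinct = len({r[col] for r in rows})
        if distinct / n > threshold:
            flagged.append(col)
    return flagged

print(high_cardinality_columns(rows))  # → ['OrderId', 'Timestamp']
```

Columns flagged this way (IDs, second-level timestamps) are the usual candidates for removal, rounding to a coarser grain, or splitting, since they compress poorly and inflate both memory and query cost.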
If you have large fact tables that are queried frequently, adding aggregation tables can make a big difference. It’s one of the most effective ways to reduce query load without changing the business experience.
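The idea behind an aggregation table can be sketched in a few lines: pre-compute the grain most visuals actually query (here daily totals, as an illustrative example) so those queries never scan the full fact table.

```python
from collections import defaultdict

# Fact rows at order grain (values are illustrative).
fact_sales = [
    {"date": "2024-01-01", "product": "A", "amount": 10.0},
    {"date": "2024-01-01", "product": "B", "amount": 5.0},
    {"date": "2024-01-02", "product": "A", "amount": 7.5},
    {"date": "2024-01-02", "product": "A", "amount": 2.5},
]

def build_daily_agg(fact):
    """Pre-aggregate to one row per day instead of one per order."""
    agg = defaultdict(float)
    for row in fact:
        agg[row["date"]] += row["amount"]
    return dict(agg)

daily = build_daily_agg(fact_sales)
print(daily)  # → {'2024-01-01': 15.0, '2024-01-02': 10.0}
```

In Power BI this is what the built-in aggregations feature does for you: date-level visuals hit the small pre-aggregated table, and only drill-downs to order grain touch the large fact table.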
At the report level, small improvements go a long way. Too many visuals on a page, unnecessary interactions, or overly complex visuals can all increase query pressure. Reviewing the most-used reports and simplifying them often delivers noticeable CU savings.
Since there’s no per-user throttling in Fabric, governance becomes important. You can isolate heavy semantic models into their own capacity or workspace, separate refresh workloads from interactive usage, stagger refresh schedules, and promote certified datasets to avoid uncontrolled duplication.
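The refresh-staggering idea can be sketched as a simple scheduling helper. The model names, overnight window, and even spacing are assumptions for illustration; the point is to keep refreshes from landing on the capacity at the same moment.

```python
from datetime import datetime, timedelta

def stagger_refreshes(models, window_start, window_minutes):
    """Assign each model an evenly spaced start time within the window."""
    step = window_minutes // max(len(models), 1)
    return {
        model: window_start + timedelta(minutes=i * step)
        for i, model in enumerate(models)
    }

schedule = stagger_refreshes(
    ["Sales model", "Finance model", "HR model"],  # hypothetical models
    window_start=datetime(2024, 1, 1, 2, 0),       # 02:00, outside business hours
    window_minutes=180,                            # 3-hour refresh window
)
for model, start in schedule.items():
    print(model, start.strftime("%H:%M"))
# → Sales model 02:00 / Finance model 03:00 / HR model 04:00
```

Applying the resulting start times in each model's refresh settings keeps refresh CU spikes separated from each other and from interactive daytime usage.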
Only after doing these steps should you consider scaling to F128. Increasing capacity without addressing design and workload structure will likely just postpone the saturation problem, not solve it.
If anything deviates from your expectations, please let us know; we are happy to address it.
Thanks.
Hi @AntoineW
Thank you for reaching out to the Microsoft Fabric Forum Community.
I hope the information provided was helpful. If you still have questions, please don't hesitate to reach out to the community.