AntoineW
Super User

Microsoft Fabric Self-Service at Scale: Best Practices for Capacity Optimization

Hello everyone,

I’d like to get your feedback and best practices on optimizing Fabric capacity (CU) consumption in a large-scale self-service analytics context.

Context

  • 4 semantic models built by the IT team to support a self-service approach

  • Models with large data volumes and complex queries

  • Hybrid mode: Import + Direct Lake
    → Very limited CU performance gains observed with Direct Lake

  • Business users rely heavily on Fabric shortcuts in their own workspaces to access data and build reports

  • Strong company strategy to empower business users and enable autonomy

 

Capacity challenges

  • Current capacity: F64, frequently saturated

  • In practice, the capacity is often saturated by only a few users (heavy report usage, complex queries, refresh or exploration patterns)

  • There is no native way to cap or limit capacity usage per user

  • Power BI Pro licenses for all users are not an option

  • Moving to F128 is being considered, but it may also quickly become saturated

  • According to the Fabric Capacity Metrics app:

    • Significant consumption peaks

    • 2 semantic models consuming most of the CU

    • Multiple reports connected to the same models

 

Business usage

  • Around 500–600 reports created by users on top of these semantic models

  • Limited visibility and governance on:

    • Which reports are actually used

    • Visual design quality

    • Filtering and query patterns

  • While classic optimizations exist (reducing visuals, restrictive filters, DAX best practices), enforcing them at enterprise scale in a self-service context is challenging


Question to the community

👉 What are your recommended best practices to optimize Fabric CU consumption in this kind of scenario?

In particular:

  • Effective ways to identify and control capacity-heavy users or reports, given the lack of per-user throttling

  • Techniques to better distribute load (capacities, workspaces, time-based usage, etc.)

Thanks in advance for your insights and experience sharing 🙏
All real-world feedback is highly appreciated.

1 ACCEPTED SOLUTION
v-priyankata
Community Support

Hi @AntoineW 

Thank you for reaching out to the Microsoft Fabric Forum Community.

 

If you look at Microsoft’s guidance for Fabric and Power BI, the first step isn’t to jump from F64 to F128. The real impact usually comes from cleaning up the foundation before scaling anything. Start by opening the Fabric Capacity Metrics app and checking what’s actually consuming your CUs. In most cases it isn’t everything: a couple of semantic models and a handful of reports drive the majority of the load.
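
If you want the same signal programmatically rather than through the app UI, one option is the Power BI Admin activity-events API. Here’s a minimal sketch in Python; it assumes you can obtain a Fabric admin token (not shown), and field names such as ItemName vary by event type, so treat it as a starting point:

```python
# Rough sketch: pull one day of audit events from the Power BI Admin API
# and count activity per user and per item as a proxy for load.
from collections import Counter
import requests

ADMIN_API = "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
TOKEN = "<admin-access-token>"  # placeholder: acquire via MSAL or a service principal

params = {
    # The API requires start/end within the same UTC day, quoted like this.
    "startDateTime": "'2026-02-25T00:00:00Z'",
    "endDateTime": "'2026-02-25T23:59:59Z'",
}
headers = {"Authorization": f"Bearer {TOKEN}"}

events, url = [], ADMIN_API
while url:
    resp = requests.get(url, params=params, headers=headers)
    resp.raise_for_status()
    body = resp.json()
    events.extend(body.get("activityEventEntities", []))
    url, params = body.get("continuationUri"), None  # next page embeds its own params

by_user = Counter(e.get("UserId") for e in events)
by_item = Counter((e.get("Activity"), e.get("ItemName")) for e in events)
print(by_user.most_common(10))
print(by_item.most_common(10))
```

The Capacity Metrics app remains the authoritative view of CU consumption; the audit log just tells you who is generating the queries and refreshes behind those peaks.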

Once you know where the pressure is coming from, focus on the model design. Make sure you’re using a proper star schema, remove unused columns, reduce high-cardinality fields where possible, avoid bi-directional relationships, and simplify heavy DAX patterns. Even with Direct Lake, complex queries and inefficient models will still burn through capacity.
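
In a Fabric notebook you can script part of that model review with semantic-link (sempy). Another rough sketch; "Sales Model" is a made-up dataset name, and the exact columns in the returned DataFrames can vary by sempy version:

```python
# Sketch: flag bi-directional relationships and list columns so you can
# hunt for unused or high-cardinality fields. Assumes a Fabric notebook.
import sempy.fabric as fabric

dataset = "Sales Model"  # hypothetical semantic model name

rels = fabric.list_relationships(dataset)
# Cross-filter direction sits in one of the returned columns; search defensively.
both_dir = rels[rels.astype(str).apply(
    lambda row: row.str.contains("Both", case=False).any(), axis=1)]
print(f"{len(both_dir)} bi-directional relationship(s) to review")

cols = fabric.list_columns(dataset)
print(cols.head())  # starting point for spotting columns to prune
```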

If you have large fact tables that are queried frequently, adding aggregation tables can make a big difference. It’s one of the most effective ways to reduce query load without changing the business experience.
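
Concretely, that can be as simple as materializing a pre-aggregated Delta table in the lakehouse and pointing the model (or Power BI aggregations) at it. The table and column names below are made up for illustration; in a Fabric notebook the `spark` session is predefined:

```python
# Sketch: build a daily summary so most queries hit a small table
# instead of the raw fact table.
from pyspark.sql import functions as F

fact = spark.read.table("lakehouse.sales_fact")  # large fact table (hypothetical)
agg = (fact.groupBy("OrderDateKey", "ProductKey")
           .agg(F.sum("SalesAmount").alias("SalesAmount"),
                F.count("*").alias("OrderCount")))
agg.write.mode("overwrite").saveAsTable("lakehouse.sales_fact_agg_daily")
```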

At the report level, small improvements go a long way. Too many visuals on a page, unnecessary interactions, or overly complex visuals can all increase query pressure. Reviewing the most-used reports and simplifying them often delivers noticeable CU savings.
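
The same audit events from the earlier snippet can also tell you which of those 500–600 reports are actually opened, so you know where review effort pays off. This reuses the `events` list from above; the "Name" column and the "ViewReport" activity string are assumptions to verify against your tenant’s data:

```python
# Sketch: cross-check the report inventory against recent ViewReport events
# to find reports nobody opens. Assumes a Fabric notebook with sempy.
import sempy.fabric as fabric

all_reports = set(fabric.list_reports()["Name"])  # column name may vary
viewed = {e.get("ItemName") for e in events if e.get("Activity") == "ViewReport"}
unused = all_reports - viewed
print(f"{len(unused)} of {len(all_reports)} reports had no views in the sample window")
```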

Since there’s no per-user throttling in Fabric, governance becomes important. You can isolate heavy semantic models into their own capacity or workspace, separate refresh workloads from interactive usage, stagger refresh schedules, and promote certified datasets to avoid uncontrolled duplication.
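
Staggering refreshes can be scripted too, so the heavy models never land in the same window. This sketch uses the Power BI REST "Update Refresh Schedule In Group" endpoint; all IDs and times are placeholders:

```python
# Sketch: spread dataset refresh schedules across the morning instead of
# letting them all fire at once.
import requests

TOKEN = "<access-token>"  # placeholder
headers = {"Authorization": f"Bearer {TOKEN}"}

schedules = {
    ("<workspace-id>", "<dataset-id-1>"): ["05:00"],
    ("<workspace-id>", "<dataset-id-2>"): ["06:30"],
}

for (group_id, dataset_id), times in schedules.items():
    url = (f"https://api.powerbi.com/v1.0/myorg/groups/{group_id}"
           f"/datasets/{dataset_id}/refreshSchedule")
    body = {"value": {"days": ["Monday", "Wednesday", "Friday"],
                      "times": times,
                      "localTimeZoneId": "UTC"}}
    requests.patch(url, json=body, headers=headers).raise_for_status()
```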

Only after doing these steps should you consider scaling to F128. Increasing capacity without addressing design and workload structure will likely just postpone the saturation problem, not solve it.

 

If anything deviates from your expectations, please let us know; we’re happy to help.

Thanks.

v-priyankata
Community Support

Hi @AntoineW 

I hope the information provided was helpful. If you still have questions, please don't hesitate to reach out to the community.

 
