Hi everyone,
I'm working on organizing data in a Microsoft Fabric Lakehouse and would appreciate some guidance on best practices.
I have:
I'm considering three options for organizing the data:
My main goals are:
Has anyone faced a similar scenario? What structure would you recommend for scalability and performance in Power BI?
Thanks in advance!
Use one fact table with multiple dimensions; it is the easiest structure to understand and follows the dimensional modelling (Kimball methodology) approach.
Each project <--> each Lakehouse <--> one fact table with multiple dimensions <--> Kimball best practices <--> star schema approach
This way we can easily divide our Power BI reports and follow best practices.
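To make the star-schema idea concrete, here is a minimal sketch in plain Python: dimension tables hold descriptive attributes keyed by a surrogate key, while the fact table holds only keys and measures. All table and column names (Fact_Calls-style facts, Dim_Agent, Dim_Date) are illustrative assumptions, not from the original post.

```python
# Dimension tables: surrogate key -> descriptive attributes.
dim_agent = {1: {"agent_name": "Alice"}, 2: {"agent_name": "Bob"}}
dim_date = {20240101: {"year": 2024, "month": 1}}

# Fact table: one row per call, holding only dimension keys and measures.
fact_calls = [
    {"agent_key": 1, "date_key": 20240101, "duration_sec": 300},
    {"agent_key": 2, "date_key": 20240101, "duration_sec": 120},
]

def enrich(fact_rows, dims):
    """Join each fact row to its dimensions, mimicking a star-schema query."""
    out = []
    for row in fact_rows:
        enriched = dict(row)
        for key_col, dim in dims.items():
            enriched.update(dim[row[key_col]])
        out.append(enriched)
    return out

rows = enrich(fact_calls, {"agent_key": dim_agent, "date_key": dim_date})
```

The point of the shape is that adding a new report only means joining the same shared dimensions to another fact table, never duplicating the dimension data.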
Hi @Alaahady ,
Thank you for reaching out to Microsoft Community.
To ensure optimal performance, scalability, and reusability in Power BI, the best practice is to organize all fact and dimension tables within a single Microsoft Fabric Lakehouse or Warehouse. This centralized structure aligns well with the star schema modeling approach, which is highly optimized for Power BI’s in-memory engine and semantic modeling capabilities.
A unified Lakehouse simplifies governance and maintenance by providing a single location to manage schemas, data pipelines, and security controls. It also enhances reusability, as dimension tables can be shared across multiple fact tables and reports without duplication. With all relationships defined in one place, it becomes easier to enforce consistent logic, implement role-playing dimensions (e.g., multiple date keys), and streamline report development.
From a performance standpoint, this approach minimizes the complexity of cross-Lakehouse joins and benefits from Fabric's storage-level query optimizations to improve response times. It also supports more advanced Power BI scenarios such as composite models and semantic layers, with fewer complications in model building or DAX measure development.
To maintain clarity and manageability within the centralized Lakehouse, follow these key best practices:
Adopt a clear star schema, placing fact tables at the center and surrounding them with related dimension tables.
Use consistent table naming conventions such as Fact_Calls, Dim_Agent, and Dim_Date to improve readability.
Create views for report-specific models to tailor the data structure for each report without duplicating core tables.
Leverage Lakehouse shortcuts if you need to reuse dimension tables across multiple Lakehouses in the future.
Implement incremental data loading via Fabric Dataflows Gen2 or Notebooks to maintain performance on large datasets.
Use Power BI composite models when necessary, allowing direct query access to large fact tables while importing dimensions for speed.
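One of the practices above, creating views for report-specific models over shared core tables, can be sketched with an in-memory SQLite database. The table, column, and view names are illustrative assumptions only; in Fabric you would define the equivalent view in the Lakehouse SQL endpoint or Warehouse.

```python
import sqlite3

# Shared core tables plus a report-specific view: each report gets a
# tailored shape without duplicating the underlying data.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Dim_Agent (agent_key INTEGER PRIMARY KEY, agent_name TEXT);
CREATE TABLE Fact_Calls (agent_key INTEGER, date_key INTEGER, duration_sec INTEGER);
INSERT INTO Dim_Agent VALUES (1, 'Alice'), (2, 'Bob');
INSERT INTO Fact_Calls VALUES (1, 20240101, 300), (2, 20240101, 120), (1, 20240102, 60);

-- View tailored for an agent-performance report; core tables stay untouched.
CREATE VIEW vw_AgentCallSummary AS
SELECT d.agent_name, COUNT(*) AS calls, SUM(f.duration_sec) AS total_sec
FROM Fact_Calls f JOIN Dim_Agent d ON f.agent_key = d.agent_key
GROUP BY d.agent_name;
""")
summary = {
    name: (calls, total)
    for name, calls, total in con.execute("SELECT * FROM vw_AgentCallSummary")
}
```

Because the view is just a saved query, changing a report's shape never requires reloading or copying the fact and dimension tables.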
Overall, this centralized design not only supports efficient data modeling and performance but also ensures simplified maintenance, scalable governance, and high reusability across Power BI reports.
Hope this helps.
Best Regards,
Chaithra E.
Hi @Alaahady ,
We’d like to follow up regarding the recent concern. Kindly confirm whether the issue has been resolved, or if further assistance is still required. We are available to support you and are committed to helping you reach a resolution.
Thank you for your patience; we look forward to hearing from you.
Best Regards,
Chaithra E.
What did you end up using for this?
A single Lakehouse/Warehouse containing all fact and dimension tables. This aligns best with your goals of scalability, performance, and governance—especially in a Microsoft Fabric environment.
Thank you for your feedback. I can still see pros and cons for each solution.
Hi, I completely agree with the best practices mentioned. I just wanted to add one scenario: let's say the granularity (grain) is different for each fact table. What would be the best choice if we don't want to lose the details?
Regards,
Amit
Hi @AmitDevkatte ,
Different grains are expected, supported, and recommended: keep the facts separate, align them with conformed dimensions, and let Power BI's semantic model handle analysis across them. This preserves detail while remaining scalable and performant.
When fact tables are large or queried frequently at higher levels, keep base fact tables at the lowest grain (for example: call-level, email-level, case-level).
Add aggregated fact tables where needed, for example:
Calls by Day / Agent
Emails by Day / Agent
Cases by Day / Status
Power BI can automatically hit the aggregate table for high-level visuals and fall back to the base fact table when users drill down, improving performance without losing detail.
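The aggregate-with-fallback pattern described above can be sketched in plain Python: a pre-aggregated "Calls by Day / Agent" table answers summary questions, and only a drill-down request touches the call-level base table. Column names and the `query` helper are hypothetical, used purely for illustration; in Power BI this routing is configured via aggregation tables rather than written by hand.

```python
# Base grain: one row per call.
fact_calls = [
    {"agent": "Alice", "day": "2024-01-01", "duration_sec": 300},
    {"agent": "Alice", "day": "2024-01-01", "duration_sec": 60},
    {"agent": "Bob", "day": "2024-01-01", "duration_sec": 120},
]

def build_calls_by_day_agent(rows):
    """Pre-aggregate: (call count, total duration) per (day, agent)."""
    agg = {}
    for r in rows:
        key = (r["day"], r["agent"])
        calls, total = agg.get(key, (0, 0))
        agg[key] = (calls + 1, total + r["duration_sec"])
    return agg

agg_calls = build_calls_by_day_agent(fact_calls)

def query(day, agent, detail=False):
    """High-level visuals hit the aggregate; drill-down falls back to base."""
    if not detail:
        return agg_calls[(day, agent)]
    return [r for r in fact_calls if r["day"] == day and r["agent"] == agent]
```

The base table is never discarded, so the aggregate is a pure performance optimization: any summary it serves can still be recomputed, or drilled into, from the call-level rows.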
Regards,
Chaithra E.
Yes. Make sense. Thank you.