Hi,
We need to set up a semantic model that joins multiple tables, similar to a universe built with our SAP BO Universe design tool.
We are looking for recommendations on a design that keeps future maintenance and performance in mind.
By design:
Some of the tables are huge.
Users can pick any fields and build their own queries.
How should we handle the size of the semantic model as it grows?
For performance, the usual recommendation for huge tables would be DirectQuery.
We also have another idea: split a huge table in two and expose both parts, one in Import mode and the other in DirectQuery, then merge them inside the model. We would like to hear suggestions and recommendations. Thanks.
Hi @KrishnakumarKS,
Thanks for reaching out to the Microsoft Fabric community forum.
I would also like to take a moment to thank @Poojara_D12 for actively participating and for the solutions you've been sharing in the community forum. Your contributions make a real difference.
Please go through the documentation links below:
Use composite models in Power BI Desktop - Power BI | Microsoft Learn: https://learn.microsoft.com/en-us/power-bi/transform-model/desktop-composite-models
https://learn.microsoft.com/en-us/power-bi/guidance/directquery-model-guidance
If this post was helpful, please give it Kudos and consider marking it as the Accepted Solution to help other members find it more easily.
Best Regards,
Menaka
Community Support Team
Hi @KrishnakumarKS,
May I ask if you have resolved this issue? If so, please mark the helpful reply and accept it as the solution. This will help other community members with similar problems find the answer faster.
Thank you.
Hi @KrishnakumarKS,
I wanted to check whether you have had a chance to review the information provided. Please feel free to contact us if you have any further questions. If my response has addressed your query, please accept it as the solution so that other community members can find it easily.
Thank you.
To summarize, the key recommendations:
Use composite models (Import + DirectQuery).
Split large tables logically.
Define a thin semantic-model layer separately from the reports.
Use aggregations for a performance boost.
Govern which fields users can consume.
Use a star schema where possible.
Keep the model size under 1 GB in Import mode for optimal performance.
Monitor with Performance Analyzer and DAX Studio; a programmatic alternative is sketched after this list.
Document the model logic, much like an SAP BO Universe layer.
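As a complement to Performance Analyzer and DAX Studio, table and model growth can also be checked programmatically. Below is a minimal sketch, assuming a Microsoft Fabric notebook where the semantic-link (sempy) Python library is available; the dataset name "Sales Model" is a hypothetical placeholder.

```python
# A minimal monitoring sketch, assuming a Microsoft Fabric notebook
# with the semantic-link (sempy) library available. "Sales Model" is
# a hypothetical dataset name; substitute your own semantic model.
import sempy.fabric as fabric

# List the semantic models visible in the current workspace.
datasets = fabric.list_datasets()
print(datasets)

# Table-level metadata for one model. extended=True requests extra
# statistics (e.g., row counts) over the XMLA endpoint, which helps
# identify the tables driving model growth over time.
tables = fabric.list_tables(dataset="Sales Model", extended=True)
print(tables)
```

Running something like this on a schedule gives an early warning before an Import-mode model drifts past the size where refreshes and queries start to slow down.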
Designing a scalable and performant semantic model in Power BI that mirrors the flexibility of an SAP BO Universe requires careful planning, especially when working with large tables and allowing users to explore data freely. To handle growing data sizes and ensure long-term maintainability, a hybrid approach that combines Import and DirectQuery storage modes can be effective, particularly using composite models.
For massive fact tables, it's wise to split them based on usage patterns (e.g., recent data in Import for performance, historical data in DirectQuery for storage efficiency), and then use a composite model with relationships or calculated tables to stitch them together; a sketch of the split logic follows below. However, merging Import and DirectQuery tables at query time can introduce latency and complexity, so careful modeling is key. Always filter DirectQuery tables down to essential columns and rows using parameters or user-defined filters to reduce the query load. Additionally, creating aggregations on imported subsets of large tables can significantly boost performance for common queries while retaining access to full detail in DirectQuery when needed.
From a maintenance perspective, it's critical to document table purposes clearly, define semantic-layer naming conventions, and use certified datasets or dataflows to centralize logic. Regular monitoring with tools like Performance Analyzer and usage metrics can help you tune the model over time. Also, consider limiting ad hoc querying in unmanaged environments, like My Workspace, and promoting self-service within governed, shared workspaces with pre-defined metrics and usage guidelines.
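To make the hot/cold split concrete, here is a minimal sketch in Python of the boundary-date logic such a design hinges on. The 24-month window, the function name, and the OrderDate column are all hypothetical; in practice the same boundary date would be applied as a row filter in the Import table's Power Query step and as the complementary filter on the DirectQuery table, so the two partitions never overlap or leave a gap.

```python
from datetime import date

# Hypothetical rolling window: how many whole months stay in Import mode.
IMPORT_WINDOW_MONTHS = 24

def import_boundary(today: date, window_months: int = IMPORT_WINDOW_MONTHS) -> date:
    """Return the first day of the oldest month kept in the Import (hot) table.

    Rows with OrderDate >= this boundary belong to the Import table;
    rows before it stay in the DirectQuery (cold) table, so the two
    partitions never overlap and never leave a gap.
    """
    # Count months since year 0, step back (window_months - 1) from the
    # first day of the current month, then convert back to a date.
    months = today.year * 12 + (today.month - 1) - (window_months - 1)
    return date(months // 12, months % 12 + 1, 1)

if __name__ == "__main__":
    boundary = import_boundary(date(2025, 6, 15))
    print(f"Import (hot) filter:       OrderDate >= {boundary}")  # 2023-07-01
    print(f"DirectQuery (cold) filter: OrderDate <  {boundary}")
```

The design point worth copying is that both partition filters derive from a single boundary definition, so adjusting the window is one change rather than two filters that can drift apart.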