Koritala
Helper V

Centralize table refresh in Semantic Model Build

Hi All,

I have 4 semantic models, and the tables used in these 4 semantic models are duplicates (the same tables are used in all 4 models).

Because of this structure, when we refresh the models, the same table is refreshed multiple times a day. That is an unnecessary burden and cost in terms of data processing, as my database is in the cloud.

To avoid this issue, could you please suggest the best design approach for creating a centralized semantic model that serves all the reports and avoids the use of duplicate tables?

I am using a PPU license in my environment. My goal: if I consume the Customer table across semantic models, the source of this table should live in one place, and we should refer to that same Customer table across all the semantic models. That way, the refresh of that table happens once, instead of multiple runs when we schedule auto refresh in the other semantic models.

Thanks,

Sri. 

 

14 REPLIES
v-pgoloju
Community Support

Hi @Koritala,

 

Just following up to see if the responses provided by community members were helpful in addressing the issue. If the issue still persists, feel free to reach out for any further clarification or assistance.

 

Best regards,
Prasanna Kumar

v-pgoloju
Community Support

Hi @Koritala,

 

Thank you for reaching out to the Microsoft Fabric Forum Community, and special thanks to @d_m_LNK, @Nabha-Ahmed, and @danextian for their prompt and helpful responses.

Just following up to see if the responses provided by community members were helpful in addressing the issue. If the issue still persists, feel free to reach out for any further clarification or assistance.

 

Best regards,
Prasanna Kumar

 

danextian
Super User

Have you thought about using dataflows as an intermediate data source rather than connecting directly to your cloud system? This approach queries the cloud source only once via the dataflow, reducing repeated egress calls. In my setup, I split the data into historical and current portions. The historical data is refreshed only when changes occur, while the current data is refreshed on a schedule. With PPU, you can chain these dataflows daily so they are combined in another dataflow.
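
As a rough sketch of that split in Power Query (M): the server, database, table, and column names below (myserver.database.windows.net, SalesDb, FactSales, OrderDate) are hypothetical placeholders, and the cutoff rule is simply "before the current year"; adapt both to your own source. Each let block would be its own query (entity) in the dataflow.

// "Sales Historical" - rows before the current year; refresh only when history actually changes
let
    Source = Sql.Database("myserver.database.windows.net", "SalesDb"),
    Sales = Source{[Schema = "dbo", Item = "FactSales"]}[Data],
    CutOff = #date(Date.Year(DateTime.LocalNow()), 1, 1),
    Historical = Table.SelectRows(Sales, each [OrderDate] < CutOff)
in
    Historical

// "Sales Current" - current-year rows; refresh on the daily schedule
let
    Source = Sql.Database("myserver.database.windows.net", "SalesDb"),
    Sales = Source{[Schema = "dbo", Item = "FactSales"]}[Data],
    CutOff = #date(Date.Year(DateTime.LocalNow()), 1, 1),
    Current = Table.SelectRows(Sales, each [OrderDate] >= CutOff)
in
    Current

With PPU, a third computed entity can then append the two, so downstream semantic models read one combined table while the expensive source query runs only where and when it is needed.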





Dane Belarmino | Microsoft MVP | Proud to be a Super User!

Did I answer your question? Mark my post as a solution!


"Tell me and I’ll forget; show me and I may remember; involve me and I’ll understand."
Need Power BI consultation, get in touch with me on LinkedIn or hire me on UpWork.
Learn with me on YouTube @DAXJutsu or follow my page on Facebook @DAXJutsuPBI.

Hi danextian,

I tried to replicate the solution with your inputs. But as my tables hold a huge volume of data and the client wants data from the past 15 years, even when I try to create dataflows for individual years, the dataflow throws an error saying the maximum memory was exceeded.

Can you please suggest how to overcome this? Could you share a sample dataflow for my better understanding? We have a PPU license in our project.

Thanks,

Sri

Hi @Koritala,

 

Dataflow memory errors occur because transformations are done entirely in memory. For 15+ years of high-volume data, push filtering and shaping down to the source, split historical vs. current data using database views, minimize transformations in the dataflows, and refresh historical data only when required. For very large fact tables, incremental refresh in the semantic model is usually more reliable than dataflows.
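
To illustrate "push filtering and shaping to the source": if the heavy logic lives in a database view, the dataflow query can stay a thin pass-through that folds back to the server instead of transforming 15 years of rows in memory. The server, database, view, and column names below are hypothetical placeholders.

// Dataflow query reading a pre-shaped view; joins and filters run in the database, not in the dataflow
let
    Source = Sql.Database("myserver.database.windows.net", "SalesDb"),
    View = Source{[Schema = "dbo", Item = "vw_FactSales_Current"]}[Data],
    // Column selection folds to the source, so only the needed columns leave the database
    Slimmed = Table.SelectColumns(View, {"OrderDate", "CustomerKey", "Amount"})
in
    Slimmed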

 

Thanks & Regards,

Prasanna Kumar

d_m_LNK
Responsive Resident

I think a solution for you would be to create a "master" dataset that you publish to your workspace. Create all needed measures and relationships in this model (ideally this model will have no visuals). Then, for your reports, create "thin" reports on top of this semantic model. To create a "thin" report in Desktop, instead of starting with "Get Data" and building the model, start by going to the OneLake catalogue and selecting "Power BI Semantic models":

[Screenshot: OneLake catalogue → Power BI semantic models]

You can then select your published data model and build your visuals on top of that model using all the same measures.  You then only have one model to refresh and multiple reports can use that one model.  This will get rid of the duplicate semantic models and copies of data.

Hi d_m_LNK,

Thanks for your quick response.

As you suggested, I tried to import all the tables into the master model. But my challenge is that I have a few very large tables, and it did not allow me to import those tables into the master model. Can you please suggest how to overcome this scenario? We have a PPU license. In Desktop, when I try to import the very large tables, it reports a memory issue and I can't use Import mode.

Thanks,

Sri.

d_m_LNK
Responsive Resident

You may be able to set up incremental refresh, depending on your large table. This will create partitions at whatever cadence you set in the refresh policy. Documentation is here: Configure incremental refresh for Power BI semantic models - Power BI | Microsoft Learn

 

This will allow you to set parameters for the data that is loaded in the model so you can at least get it published.

 

Do you need all the data in the model? Is it an option for you to write custom SQL against your large table to pull in only recent data? Not knowing the exact scenario, I am just giving options I have worked with.
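
For reference, this is the standard Power Query pattern that the incremental refresh documentation builds on: define the reserved DateTime parameters RangeStart and RangeEnd, filter the large table on them, and Power BI substitutes the partition boundaries at refresh time. The source, table, and column names below are hypothetical placeholders.

// Large fact table prepared for incremental refresh
// RangeStart and RangeEnd must exist as DateTime parameters in the same model
let
    Source = Sql.Database("myserver.database.windows.net", "SalesDb"),
    Sales = Source{[Schema = "dbo", Item = "FactSales"]}[Data],
    // One boundary inclusive, the other exclusive, so a row never lands in two partitions
    Filtered = Table.SelectRows(Sales, each [OrderDateTime] >= RangeStart and [OrderDateTime] < RangeEnd)
in
    Filtered

The refresh policy on the table (for example: archive 15 years, refresh only the last few days) then controls which partitions are actually reprocessed, which keeps the memory needed for each refresh much smaller.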

Nabha-Ahmed
Memorable Member

Hi @Koritala 

I hope you are doing good!

Here are the best-practice approaches:

1. Build a Central “Golden” Semantic Model

Create one primary semantic model that contains all shared / reusable tables (e.g., Customer, Date, Product).
This model becomes your single source of truth.

2. Use “DirectQuery for Power BI Semantic Models”

In your other semantic models, instead of importing the tables again, use: Home → Get data → Power BI semantic models → DirectQuery
This allows downstream models and reports to query the central model without re-refreshing the data.

Benefits:

Refresh happens once in the central model.

No duplicated data processing.

Tables stay consistent across all models.

Lower cost + better governance.


3. Consider Using Domains (Fabric) for Better Organization

If you're in a PPU environment, you can still follow the same principle and organize your golden model under a shared workspace for all report developers to connect to.

4. Optionally Use “Composite Models”

If some models need extra tables, they can still import only the unique tables and reuse the shared tables via DirectQuery from the centralized model.

Hi Nabha-Ahmed,

Thanks for your quick response.

As you suggested, I tried to import all the tables into the master model. But my challenge is that I have a few very large tables, and it did not allow me to import those tables into the master model. Can you please suggest how to overcome this scenario? We have a PPU license. In Desktop, when I try to import the very large tables, it reports a memory issue and I can't use Import mode.

Thanks,

Sri.

Thanks for the update.
When working with very large tables in a PPU environment, it’s normal to hit memory limits when trying to use Import mode in a centralized model.
You can still build a Master Model without importing everything.

Here’s the recommended approach to overcome the issue:


 1. Import only small/shared tables into the Master Model

Keep your dimensions or any lightweight tables in Import mode — these are usually not a problem for PPU memory.


2. Keep the very large tables in DirectQuery mode

Instead of importing the large fact tables, connect them using DirectQuery.
This avoids memory pressure and still allows the Master Model to act as your central source.


3. Build a composite (hybrid) semantic model

Your Master Model can combine:

Imported small tables

DirectQuery large tables

This is fully supported in PPU and is a common design pattern for big datasets.

 

4. Let all other semantic models reuse the Master Model

From your downstream models, connect to the Master Model using:
Get Data → Power BI semantic models → DirectQuery
This ensures the large tables refresh only once and are not duplicated across models.

 

5. If needed, apply incremental refresh on large tables

If a large table must be imported (partially), use incremental refresh to reduce memory usage during refresh.

 

Hi Nabha-Ahmad,

In the master model, if I connect the large tables using DirectQuery mode, I think we can't set up incremental refresh for those tables, since they use DirectQuery in the master model.
As you said, in the downstream models we connect to the master model, pull the required tables, and disable the Enable load option for all the tables in the child models.

Please let me know if my understanding is correct.

DirectQuery tables cannot use Incremental Refresh.
This is the same point I mentioned earlier, but I’m happy to restate it:

If a table is in DirectQuery, Incremental Refresh isn’t available.

Incremental Refresh works only when the table is in Import or Hybrid mode.


For downstream models, yes — connecting to the Master Model and disabling Enable Load on local tables is the correct approach.

Let me know if you need a quick example or diagram; happy to help.

 
