Hello.
I have a few questions about incremental updates, and I am sure this is the best place to find an answer.
In my company we have a schema that consists of a table with the current day's data and a separate historical table with the stored data. In Power BI we load both tables and then join the historical table with the current one. Once they are joined, we don't load the historical table into the dashboard, to save space. The thing is, now that we are migrating from Report Server to the cloud, we want to enable incremental updates. We created the parameters and applied them to the historical table, which is the one that contains the required datetime field.
Since we don't load the historical table into the dashboard, we couldn't enable incremental updates on it. We "solved" this by enabling load for that table. Incremental updates are working: I can see the partitions in Management Studio and the refresh times are lower, but I don't know if that's the correct approach... I guess I could apply the parameters to the final table once it is joined, but I suspect that if I did it that way, the historical table would still load its full data, so I applied them to the historical table before the join.
Am I right? Is that the correct approach? Or, if I applied the parameters to the final table, would the system be "smart" enough to refresh the other table incrementally?
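A minimal Power Query M sketch of what I mean, with the incremental refresh window applied to the historical query before the join (the server, database, table, and column names are placeholders):

```
// "Historical" query: incremental refresh is configured on this table.
// RangeStart and RangeEnd are the DateTime parameters Power BI requires for
// incremental refresh; the filter sits right after the source step so it can
// fold back to the database.
let
    Source = Sql.Database("my-server", "my-database"),                   // placeholder server/database
    Historical = Source{[Schema = "dbo", Item = "SalesHistory"]}[Data],  // placeholder table
    FilteredRows = Table.SelectRows(
        Historical,
        each [LoadDateTime] >= RangeStart and [LoadDateTime] < RangeEnd  // placeholder datetime column
    )
in
    FilteredRows
```

The final query then joins this filtered historical query with the current-day query, as described above.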
Thanks.
Thank you!!
My doubt was more about the efficiency of what we did. Because of query folding, we still couldn't apply incremental refresh to the final table, but we could apply it to one of the tables that, at the end of the process, is part of that final table. Until then we didn't load that table into the report, but we enabled its load just so we could turn on incremental refresh for it.
I wanted to be sure whether that (although it increases the size of the .pbix file) makes the process more efficient and loads only the incremental part of the data into the final table.
I assume you mean the Incremental Refresh feature.
You can certainly create an Incremental Refresh setup for a static table in order to store it in different partitions of the table in the Power BI semantic model. However, there is not much advantage to it if the original data source is small and fast. Incremental Refresh is better suited to data sources that are large and slow (and ideally immutable).
Note that there is a variation on the Incremental Refresh setup that allows you to connect to the "today" partition in DirectQuery mode and to all other partitions via the semantic model. That might be a fit for your scenario.