I have a .pbix file with around 15 tables sourced from different Dataflows, plus one table (the largest in the model) that can be sourced from either SQL Server or Oracle (not from a dataflow). I want to set up incremental refresh on that table, scheduled to run daily, with a policy of 13 months of history plus a refresh of any of those months where the last-update field has a new date, so I'm using the Detect Data Changes option (see screenshot below). I made sure the query on that table folds. With the file set up this way and published to the service, the SQL Server version times out on the initial load after 5 hours. I don't know why, because the query on that table is very simple and folds. On the Oracle version the initial refresh worked (it took almost 3.5 hours), but the subsequent (second) refresh took LONGER than the initial one: a little over 4 hours! I thought the whole point of incremental refresh was that subsequent refreshes were supposed to be fast, or at least quicker than the initial refresh. What could be causing the second refresh to perform worse than the first?
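For reference, a foldable incremental-refresh filter in Power Query typically looks like the sketch below. The server, database, table, and column names here are placeholders, not the poster's actual query; `RangeStart` and `RangeEnd` are the datetime parameters the service binds per partition at refresh time, and keeping the filter as a simple comparison lets it fold to a native WHERE clause:

```
let
    Source = Sql.Database("ServerName", "DatabaseName"),
    Orders = Source{[Schema = "dbo", Item = "Orders"]}[Data],
    // Filter on RangeStart/RangeEnd so each partition only pulls its own slice;
    // a simple comparison like this folds back to the source as a SQL predicate.
    Filtered = Table.SelectRows(
        Orders,
        each [OrderDate] >= RangeStart and [OrderDate] < RangeEnd
    )
in
    Filtered
```

You can confirm folding by right-clicking the final step in Power Query and checking that View Native Query is enabled.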
Hello @v-yingjl. It turns out that the nature of this data is such that at least one row in virtually every historical partition gets updated each day, which causes all partitions (effectively the whole table) to be refreshed each day. In addition, the field I was using for Detect Data Changes was not indexed. Those two factors amounted to a knockout combination. So I've stopped using Detect Data Changes for now, until the DBAs can add the correct update field and index it. That will at least address one of the two factors; I can't do much about so much of the history getting updated every day. If that's not enough, I'll abandon the Detect Data Changes option, and possibly incremental refresh altogether.
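The mechanism described above can be sketched as follows. This is a simplified illustration, not Power BI's actual implementation: Detect Data Changes stores, per partition, the maximum value of the chosen audit column seen at the last refresh, and any partition whose current maximum has advanced is reprocessed in full. If every historical partition had at least one row touched since the last refresh, every partition reprocesses and the "incremental" refresh degenerates into a full load:

```python
from datetime import date

def partitions_to_refresh(stored_max, current_max):
    """Return partition keys whose audit-column maximum has advanced
    since the last refresh (so the whole partition must be reloaded)."""
    return [p for p in current_max
            if p not in stored_max or current_max[p] > stored_max[p]]

# 13 monthly partitions; at least one row in each was updated today,
# so every partition's max(last_update) moved forward.
months = [f"month_{m:02d}" for m in range(1, 14)]
stored = {p: date(2024, 10, 1) for p in months}    # watermarks from last refresh
current = {p: date(2024, 10, 2) for p in months}   # audit maxima seen today

refreshed = partitions_to_refresh(stored, current)
print(len(refreshed))  # all 13 partitions reprocess -> no incremental benefit
```

On top of reloading everything, the service first has to query max(audit column) for every partition, which is where the missing index hurts: without it, each of those probes can scan the table, making the second refresh slower than the first.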
Hi @robarivas ,
If you've fixed the issue on your own, please share your solution. If the above posts helped, please mark one as the solution to help others find it more quickly. Thanks!
Best Regards,
Yingjie Li