automation
Frequent Visitor

Help needed: Detect Data Changes on Modified column with Incremental Refresh

Hi,
I’ve configured incremental refresh on a Power BI dataset.

  • The source is SQL views, and the underlying tables contain both Created and Modified date columns.

  • In Power Query, I applied the incremental range filters using the Created date (RangeStart and RangeEnd).

  • In the incremental refresh settings, I configured it to store 5 years and refresh the last 2 days, and I enabled “Detect data changes” based on the Modified date column.

    [screenshot: incremental refresh policy settings — store 5 years, refresh last 2 days, Detect data changes on Modified]

The issue:
When a record from 2021 gets its Modified date updated, the change is not picked up by the incremental refresh, and the updated data does not appear in the dataset.

Can someone explain why this is happening and how I should correctly configure incremental refresh + detect data changes so that updates to older records (e.g., from 2021/2022 etc) are captured?
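For context, the range-filter step described above usually has the following shape in Power Query. This is only a sketch: the server, database, and view names (myserver, mydb, dbo.SalesView) are hypothetical, while RangeStart and RangeEnd are the reserved datetime parameters that incremental refresh requires.

```
let
    // Hypothetical source: adjust server/database/view names to your environment
    Source = Sql.Database("myserver", "mydb"),
    SalesView = Source{[Schema = "dbo", Item = "SalesView"]}[Data],
    // Use >= on one boundary and < on the other so a row falling exactly on
    // a boundary is not loaded into two adjacent partitions
    Filtered = Table.SelectRows(SalesView, each [Created] >= RangeStart and [Created] < RangeEnd)
in
    Filtered
```

Note that one bound is inclusive and the other exclusive; filtering with two inclusive bounds can duplicate boundary rows across partitions.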

Regards,
Adeel Nazir

1 ACCEPTED SOLUTION

Hi @automation ,

After you publish the PBIX file, the first load will involve a complete refresh, meaning that all partitions will be processed. Please verify whether the size of the semantic model exceeds the capacity limits.


To work around the size issue, you can use the following approach:

Define a Boolean parameter named LoadAllData.

[screenshot: LoadAllData parameter definition]

Then add a conditional step in Power Query:

let
    ...
    SampleData = Table.FirstN(Source, 10),
    Check = if LoadAllData then Source else SampleData
in
    Check




Keep the parameter's default value as False, publish the model to the service, and run the first refresh.

The first refresh creates all the partitions in the service. Then set LoadAllData to True and process the partitions one by one using SSMS or a Fabric notebook.
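For example, from SSMS connected to the workspace's XMLA endpoint, a single partition can be processed with a TMSL refresh command along these lines. The database, table, and partition names below are placeholders; the actual partition names are generated by the incremental refresh policy and can be read from SSMS's Object Explorer.

```json
{
  "refresh": {
    "type": "full",
    "objects": [
      {
        "database": "MySemanticModel",
        "table": "FactSales",
        "partition": "2021"
      }
    ]
  }
}
```

Running one partition at a time keeps each refresh well under the capacity's memory and execution limits.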

 

 

Thanks

 

If this response was helpful in any way, I’d gladly accept a kudo.
Please mark it as the correct solution. It helps other community members find their way faster

 


10 REPLIES
v-prasare
Community Support
Community Support

Hi @automation,
We would like to confirm whether our community members' answers resolved your query, or if you need further help. If you still have any questions or need more support, please let us know. We are happy to help.

 

 

Thank you for your patience; we look forward to hearing from you.
Best Regards,
Prashanth Are
MS Fabric community support


automation
Frequent Visitor

Hi All,

I need some assistance with this issue. Can someone please help me find a resolution?
Regards,
Adeel Nazir

Zanqueta
Super User
Super User

Hi @automation,

 

When using Detect Data Changes, only the incremental refresh period is eligible for refresh. Rows outside this period will not be reprocessed even if their change detection column is updated.
 
Partition the dataset using ModifiedDate (rather than CreatedDate) so that updated records fall within the incremental range and are reprocessed automatically. 
 
Use ModifiedDate as the column defining RangeStart and RangeEnd. This is the only configuration that reliably captures updates to older data.
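Under that approach, the filter step keeps the same shape as a standard incremental refresh filter but is keyed on the Modified column instead of Created (a sketch; the SalesView table and Modified column names are assumptions):

```
Filtered = Table.SelectRows(SalesView, each [Modified] >= RangeStart and [Modified] < RangeEnd)
```

Because the partitioning column now moves when a row is updated, an updated record falls back into the recent refresh window and gets reprocessed.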
 

Reference: Configure incremental refresh and real-time data for Power BI semantic models - Power BI | Microsoft...


Natarajan_M
Solution Sage
Solution Sage

Hi, @automation 

You have configured the incremental refresh correctly: the Created datetime drives RangeStart and RangeEnd, and the Modified datetime is used to detect changes.

Can you confirm whether the Modified field is in datetime format? Also check whether the row in question is being filtered out unintentionally by one of your Power Query steps.


Additionally, I recommend configuring the refresh by months instead of years. This way, if a row is updated, the process will only refresh the data for that particular month rather than the entire year. This approach helps isolate the refresh and improves efficiency.

reference video : https://www.youtube.com/watch?v=JsJWBr1_ktQ

Thanks 

Hi @Natarajan_M ,

Thank you for your detailed response to my question; I really appreciate it.

I’ve confirmed that the Modified date is stored as a DATETIME field.

Questions:

  1. I'm currently configured to refresh the last 2 days while keeping 5 years of historical data.
    If I change the historical data range to 60 months, should I also adjust the last-2-days refresh value accordingly?

  2. As shown in the screenshots attached to my original post, does the "last 2 days refresh only" setting explain why 2021 records with updated Modified dates are not being picked up by Detect Data Changes?

Hi @automation , "Detect Data Changes" operates only within the incremental refresh window. If you configure the policy to both store 60 months of history and refresh the last 60 months, every partition falls inside that window, so the service can identify changes anywhere in the range. Only the partitions in which a change actually occurred are reprocessed; the rest are left untouched.


[screenshot: incremental refresh policy settings]


Thanks

Hi @Natarajan_M ,

I've implemented a last-60-months incremental refresh with Detect data changes on my dataset. However, with a Power BI Premium Per User (PPU) license, the dataset does not refresh completely and I'm encountering timeout errors.

Could you please help me resolve this issue?

Thanks!


 

Hi @automation,

This behavior is expected when using a large incremental refresh window with Detect data changes on a Power BI Premium Per User (PPU) license. With a last 60 months refresh policy enabled, the service must evaluate every partition within that window during each refresh to check for changes. Even though Detect data changes helps avoid unnecessary reloads, Power BI still issues polling queries per partition, and if many partitions are involved, this can quickly exceed the execution and resource limits of PPU, resulting in refresh timeouts.

It’s also important to note that Detect data changes operates at the partition level, not at the individual row level. If a single row changes within a large partition, the entire partition needs to be reprocessed, which further increases refresh cost and duration. This becomes especially noticeable when partitions span long periods such as months or years.

To mitigate this, you can try reducing the incremental refresh window (for example, from 60 months to a smaller range that aligns with how often historical data actually changes), and ensure that the column used for RangeStart/RangeEnd and Detect data changes supports full query folding back to the source. Proper partitioning and folding are critical to keeping refreshes efficient.
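As a rough illustration of the folding point: a filter that compares the raw datetime column directly against the parameters can typically fold to a SQL WHERE clause, while transforming the column inside the predicate may prevent folding. The step and column names here are hypothetical; use "View Native Query" in Power Query to verify folding in your own model.

```
// Typically folds: the column is compared directly to the parameters
FoldingFilter = Table.SelectRows(Source, each [Modified] >= RangeStart and [Modified] < RangeEnd),

// May break folding: the column is transformed inside the predicate,
// which can force Power Query to evaluate the filter locally
NonFoldingFilter = Table.SelectRows(Source, each DateTime.Date([Modified]) >= Date.From(RangeStart))
```

When a partition filter does not fold, the service pulls far more data than the partition needs, which compounds the timeout problem described above.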

 

 

Configure incremental refresh and real-time data for Power BI semantic models - Power BI | Microsoft...

Advanced Incremental Refresh and Real-Time Data With the XMLA Endpoint in Power BI - Power BI | Micr...

 

 

 

Thanks,

prashanth

