amir_mm
Helper III

Incremental refresh and refresh in Analysis Services

Hello,

I have defined an incremental refresh policy that refreshes the last 6 months and archives the full 5 years of data, in order to reduce peak memory usage during a refresh. However, I don't see much difference in the number of refresh failures or in the refresh duration. I checked the partitions in Analysis Services, and when a refresh ends successfully I can see that only the partitions for the last 6 months have been updated (which aligns with my incremental refresh policy). However, when I refreshed the 2 tables with the incremental refresh policy from SSMS, I noticed that the entire dataset had been transferred. Yet, after checking the partitions, only the last 6 months had been processed.

This gives me the impression that during a refresh only the last 6 months of partitions are updated, but the entire dataset is still transferred and processed. That could explain why I don't see much difference in memory usage and refresh time.
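For context, a refresh issued from SSMS over the XMLA endpoint is a TMSL command that can target a whole database, a table, or a single partition. Here is a minimal sketch of a partition-scoped refresh (the database, table, and partition names are placeholders, not my actual model):

{
  "refresh": {
    "type": "full",
    "objects": [
      {
        "database": "SalesModel",
        "table": "FactSales",
        "partition": "2024Q1"
      }
    ]
  }
}

Comparing the rows transferred by a refresh scoped to one recent partition against a table-level refresh is a quick way to check whether the whole dataset is being re-read.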

 

Below, you can see that almost 9.5 million and 8.5 million rows were transferred for these 2 tables, which represents the entire dataset (5 years of data):

[Screenshots: SSMS refresh statistics showing roughly 9.5 million and 8.5 million rows transferred for the two tables]

 

I would appreciate it if you could help me understand how this works.

Thanks!

 

1 ACCEPTED SOLUTION (see lbendlin's reply in the thread below)

5 REPLIES
lbendlin
Super User

Which refresh type did you specify?

Do you have table dependencies (like Auto Date/Time)? Check the refresh log to see whether other tables are processed when you ask for the processing of one particular table.
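For example, a table-scoped TMSL refresh run from SSMS makes it easy to watch the refresh log for anything else that gets processed along with it (a sketch with placeholder database and table names):

{
  "refresh": {
    "type": "automatic",
    "objects": [
      {
        "database": "SalesModel",
        "table": "FactSales"
      }
    ]
  }
}

If the log shows additional tables being processed after this command, a dependency is pulling them in.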

I did a "full" refresh, and it processed the entire dataset but updated only the partitions for the last 6 months.

I noticed something very strange. We have a Premium embedded capacity with 2 workspaces, one for production and one for development (exactly the same configuration), and I published a semantic model to both workspaces. Now I'm constantly getting memory capacity errors for the production workspace, while all refreshes complete successfully in the Dev workspace. I checked the workspace settings and I can't find any differences.

I did some research but did not find anything special.

 

Thanks.

When refreshing multiple partitions, you can influence the level of parallelism. You may want to tone that down a bit, or even go fully sequential.

Thank you! May I know how I can make it fully sequential? As far as I know, we can change the maximum parallel loading of tables in Power BI Desktop, but I'm not sure how to make the refresh fully sequential.

Processing Options and Settings (Analysis Services) | Microsoft Learn

 

Note that I am talking about partitions, not tables.
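A minimal TMSL sketch of a fully sequential refresh, run from an XMLA window in SSMS (the database and table names are placeholders): the sequence command's maxParallelism property caps how many processing operations run in parallel, and setting it to 1 forces the partitions to be processed one at a time.

{
  "sequence": {
    "maxParallelism": 1,
    "operations": [
      {
        "refresh": {
          "type": "full",
          "objects": [
            {
              "database": "SalesModel",
              "table": "FactSales"
            }
          ]
        }
      }
    ]
  }
}

This trades a longer refresh for a lower peak memory footprint, which is usually the right trade when refreshes are failing on capacity memory limits.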
