amir_mm
Helper I

Dataset Refresh Challenges with Incremental Refresh

Hello,

 

For the past two months, I've been trying to resolve the refresh failures on our dataset's refresh schedule, and I've previously asked some questions in this forum. Despite trying many solutions, I'm still seeing failures, and I'm hoping someone can provide insight into why they persist.

 

We have a Premium Embedded plan with an A2 SKU, which comes with 5 GB of RAM.

We need to refresh the dataset every 30 minutes because of a business requirement.

We have two large tables, one with 9 million rows and the other with 8 million rows.

 

What I have done: 

1- Removed old data (reduced the database from 5 years to 3 years of history).

2- Removed any unused columns and relationships.

3- Moved some complex-looking calculated columns to Power Query and SQL Server.

4- Turned off MDX for the large tables.

5- Applied incremental refresh

 

At the beginning (before these changes):

.pbix dataset size: 820 MB

Total Size in memory: 1.95 GB (based on VertiPaq Analyzer)

 

After making the changes: 

.pbix dataset size: 440 MB

Total Size in memory: 1.07 GB 

But we have 5 GB of RAM in our plan, so why should a refresh fail?

 

After all this effort, I still face a failure rate of 20-25% during data refreshes.

Regarding incremental refresh: initially, I defined the policy on the two largest tables to refresh the past 12 months and archive the entire dataset. This approach resulted in 20 failed refresh attempts within a 40-hour window.

Then I changed the policy to refresh the last 6 months of data instead of 12. Checking VertiPaq Analyzer, the total memory footprint after a refresh remained unchanged, with only a marginal reduction in refresh time (excluding the failed ones). Interestingly, during the first 18 hours after this adjustment there were no failures, but then, despite no changes to the system and minimal client activity, 10 out of 25 refreshes failed.

Basically, incremental refresh is not helping to reduce memory use during a refresh!
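For anyone who wants to reproduce the failure counts above, the refresh history (including the error payload for each failed attempt) can be pulled from the Power BI REST API. A minimal sketch, with the access token and the workspace/dataset IDs as placeholders:

```python
# Minimal sketch: list recent refresh attempts for the dataset and print the
# error details of the failed ones. Requires an Azure AD access token with
# dataset read permissions; GROUP_ID / DATASET_ID are placeholders.
import requests

ACCESS_TOKEN = "<aad-access-token>"
GROUP_ID = "<workspace-id>"
DATASET_ID = "<dataset-id>"

url = (
    f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}"
    f"/datasets/{DATASET_ID}/refreshes?$top=50"
)
resp = requests.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
resp.raise_for_status()

for r in resp.json()["value"]:
    line = f'{r["startTime"]}  {r.get("endTime", "-")}  {r["status"]}'
    if r["status"] == "Failed":
        # serviceExceptionJson typically carries the underlying error code,
        # e.g. an out-of-memory or timeout error.
        line += f'  {r.get("serviceExceptionJson", "")}'
    print(line)
```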

 

Any idea or comment is appreciated!

 

 

3 REPLIES
3CloudThomas
Super User

At this point, with all the suggestions you have tried, your best option is to open a support ticket with Microsoft and get them to investigate the issue.

After re-reading your post, you have done all you can on your side. The only other thing I can see is whether the SQL Server database is delaying the data (SQL Server blocking, a bad query plan, etc.). That could cause a timeout on the import side of the semantic model.
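A quick way to rule that out is to time the source queries for the two big tables directly against SQL Server. A rough sketch; the connection string, table names, date column, and the 6-month filter are placeholders for your actual source:

```python
# Rough check that the source queries behind the two large tables return
# quickly. All names and the connection string below are placeholders.
import time
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-sql-server;DATABASE=your-db;"
    "UID=your-user;PWD=your-password"
)

QUERIES = {
    "FactTable1 (last 6 months)":
        "SELECT COUNT(*) FROM dbo.FactTable1 "
        "WHERE OrderDate >= DATEADD(MONTH, -6, GETDATE())",
    "FactTable2 (last 6 months)":
        "SELECT COUNT(*) FROM dbo.FactTable2 "
        "WHERE OrderDate >= DATEADD(MONTH, -6, GETDATE())",
}

with pyodbc.connect(CONN_STR, timeout=30) as conn:
    cursor = conn.cursor()
    for label, sql in QUERIES.items():
        start = time.perf_counter()
        rows = cursor.execute(sql).fetchone()[0]
        elapsed = time.perf_counter() - start
        print(f"{label}: {rows} rows in {elapsed:.1f}s")
```

If those counts take more than a few seconds, the bottleneck is likely on the SQL Server side rather than in the capacity.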

3CloudThomas
Super User

Try connecting in SQL Server Management Studio and refresh one partition at a time:

[Screenshots: processing individual partitions in SQL Server Management Studio]
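The same partition-by-partition processing can also be scripted so it fits your 30-minute schedule instead of being run manually in SSMS. A minimal sketch using the enhanced refresh REST API, assuming it is available on your capacity; the IDs, table names, and partition names are placeholders you would copy from SSMS:

```python
# Minimal sketch: kick off an enhanced refresh that processes only the listed
# partitions, one at a time (maxParallelism=1) to limit peak memory.
# IDs, table names, and partition names are placeholders.
import requests

ACCESS_TOKEN = "<aad-access-token>"
GROUP_ID = "<workspace-id>"
DATASET_ID = "<dataset-id>"

body = {
    "type": "full",
    "commitMode": "transactional",
    "maxParallelism": 1,   # process one partition at a time
    "retryCount": 1,       # one automatic retry on transient failures
    "objects": [
        # The current "hot" incremental-refresh partitions of the two large
        # tables; the partition names here are examples only.
        {"table": "FactTable1", "partition": "2024Q2"},
        {"table": "FactTable2", "partition": "2024Q2"},
    ],
}

url = (
    f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}"
    f"/datasets/{DATASET_ID}/refreshes"
)
resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=body,
)
resp.raise_for_status()
# Returns 202 Accepted; the refresh runs asynchronously and its status can be
# polled later from the same /refreshes endpoint.
print(resp.status_code)
```

Splitting the refresh this way usually lowers the peak memory during processing, at the cost of a somewhat longer total refresh time.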

 



Thank you @3CloudThomas 

Yes, I have done that, and each partition (in the tables where incremental refresh is defined) refreshes successfully. But we still need to schedule a refresh every 30 minutes.
