Anonymous
Not applicable

Semantic Models - Refresh Issues

Hello Everyone,

I’m struggling to understand how refresh works with Semantic Models in Fabric.

My setup:

  • I have two identical Semantic Models of type Direct Lake (Automatic).

  • I also duplicated the same Report, so each report connects to one of these models.

  • I’m on Fabric Capacity F128, both for the workspace where the Semantic Models are stored and the workspace where the Reports live.

The only difference between the two models:

  • On one of them, I performed an on-demand refresh (successful).

  • On the other one, I did not.

What happens:

  • When I open the report connected to the model without refresh, the visuals load and data is returned normally.

  • When I open the report connected to the model with refresh, I immediately get this error:

Error fetching data for this visual
You have reached the maximum allowable memory allocation for your tier. Consider upgrading to a tier with more available memory.

My question:

  • Why does this happen?

  • My assumption was that the report without refresh is using DirectQuery to fetch data (please correct me if I’m wrong).

  • But for the refreshed model, since it’s Direct Lake, why does it fail with a memory error instead of falling back to DirectQuery?

Any guidance or explanation would be really appreciated!

Thank you,
Dimitra

2 ACCEPTED SOLUTIONS
collinq
Super User

Hi @Anonymous,

A Direct Lake model is "always" showing the latest and greatest data and does not need a refresh to update it. Direct Lake skips Import and DirectQuery modes and reads the data directly from the tables in OneLake.

But when you manually refresh a Direct Lake model, it invalidates the in-memory cached data and reloads it. This can cause a significant memory spike, particularly if you are running other items at the same time.

If you don't refresh it manually (and "Keep your Direct Lake data up to date" is enabled), the model is reframed automatically whenever the underlying data changes.

You could also monitor what else might be happening by checking your memory usage in a tool like DAX Studio.
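If you'd rather script that check, here is a minimal sketch using semantic-link (sempy, preinstalled in Fabric notebooks) to pull per-column memory stats; the dataset name "SalesModel" is a placeholder, and the extended statistics assume a reasonably recent semantic-link build:

```python
# A sketch for a Fabric notebook, where semantic-link (sempy) is preinstalled.
# "SalesModel" is a placeholder; substitute your own dataset/workspace names.
import sempy.fabric as fabric

# extended=True asks the engine for per-column stats such as in-memory size
# and whether the column is currently resident in the VertiPaq cache.
cols = fabric.list_columns(dataset="SalesModel", extended=True)

# The largest columns are the likeliest to push a transcode past the guardrail.
top = cols.sort_values("Total Size", ascending=False)
print(top[["Table Name", "Column Name", "Total Size", "Is Resident"]].head(15))
```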




Did I answer your question? Mark my post as a solution!

Proud to be a Datanaut!
Private message me for consulting or training needs.





v-agajavelly
Community Support

Hi @Anonymous ,

Thanks @tayloramy and @collinq for the detailed explanations.

@Anonymous, to recap the key points that might clarify the difference you're seeing:

  • Direct Lake doesn't actually need a manual refresh; it reads the latest Parquet/Delta files in OneLake by default. That's why the model you didn't refresh still works fine.
  • When you trigger a manual refresh on a Direct Lake model, the engine forces a reload of the required data into the in-memory VertiPaq cache. That reload can be heavy, and if the query/visuals being rendered demand more column data than your F128 tier can allocate for that operation, you'll hit the "maximum memory allocation" guardrail instead of falling back to DirectQuery; this per-query memory limit has no DirectQuery fallback.
  • The report without refresh is still serving queries in Direct Lake mode, but it only "transcodes" the minimum columns needed, on demand. The one you refreshed is trying to re-hydrate more data into memory at once, hence the spike/error.

A few things to check:

  • Check the size of the model against the memory limits for F128 in the Microsoft docs.
  • Use DAX Studio or the Fabric Capacity Metrics app to monitor memory usage during refresh and see exactly what's happening (see the sketch after this list).
  • Avoid manual refresh unless you have a specific reason; in Direct Lake it's not needed under normal circumstances.
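For the first check, a rough sketch, assuming a Fabric notebook where semantic-link (sempy) is available and using "SalesModel" as a placeholder name, that sums the extended column stats into an approximate model footprint:

```python
# A rough sizing sketch, assuming a Fabric notebook with semantic-link (sempy).
# "SalesModel" is a placeholder dataset name.
import sempy.fabric as fabric

cols = fabric.list_columns(dataset="SalesModel", extended=True)

# Summing per-column sizes approximates the model's in-memory footprint;
# compare the result with the F128 memory guardrail in the Microsoft docs.
total_gb = cols["Total Size"].sum() / 1024**3
print(f"Approximate in-memory size: {total_gb:.2f} GB")
```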

Thanks,
Akhil.


6 REPLIES
v-agajavelly
Community Support

Hi @Anonymous ,

I hope the response provided helped resolve the issue. If you still have any questions, please let us know; we're happy to help.

Thanks,
Akhil.

v-agajavelly
Community Support

Hi @Anonymous ,

Just checking back in: were you able to review the model size and memory usage during refresh? If you're still hitting the same issue, could you share those details so we can dig a bit deeper into what's causing the guardrail to trigger?

Thanks,
Akhil.

v-agajavelly
Community Support

Hi @Anonymous ,

Just following up to see if the points shared earlier helped clarify the behavior you were seeing with Direct Lake. Were you able to check the model size and memory usage during refresh, as suggested?

This usually explains why the refreshed model hit the memory guardrail while the unrefreshed one worked fine.

Thanks,
Akhil.


tayloramy
Community Champion

Hi @Anonymous, 

 

How big are your models? 

 

When you hit “You have reached the maximum allowable memory allocation for your tier”, it means the query/visual tried to load more column data into memory than your capacity allows for that operation. In Direct Lake, the engine must pull the needed columns from Delta files into VertiPaq memory (“transcoding”). If, at that moment, the memory needed exceeds the guardrails, the visual errors. See Microsoft’s explanation of Direct Lake “framing” vs. Import refresh, and how Direct Lake loads only what a query needs into memory, not the whole model: https://learn.microsoft.com/en-us/fabric/fundamentals/direct-lake-overview
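One way to see that column-by-column transcoding in action (a sketch assuming a Fabric notebook with semantic-link; "SalesModel" and Sales[Amount] are placeholder names) is to run a single narrow query and then check which columns became resident:

```python
# A sketch illustrating on-demand transcoding, assuming a Fabric notebook
# with semantic-link (sempy). "SalesModel" and Sales[Amount] are placeholders.
import sempy.fabric as fabric

dataset = "SalesModel"

# A single-measure query should pull only the columns it touches into memory.
fabric.evaluate_dax(dataset, 'EVALUATE ROW("Total", SUM(Sales[Amount]))')

# Columns flagged as resident afterwards are the ones the query transcoded.
cols = fabric.list_columns(dataset=dataset, extended=True)
resident = cols[cols["Is Resident"]]
print(resident[["Table Name", "Column Name", "Total Size"]])
```

If the resident set's total size is close to your SKU's limit, that is the spike described above.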

 

Chris Webb also documents the per-query memory guardrail behind these errors: https://blog.crossjoin.co.uk/2024/06/23/power-bi-semantic-model-memory-errors-part-4-the-query-memory-limit/


See the Microsoft docs for the amount of memory allowed for semantic models on each F SKU:
What is Power BI Premium? - Microsoft Fabric | Microsoft Learn

 

If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, mark this post as the solution.
