Hello Everyone,
I’m struggling to understand how refresh works with Semantic Models in Fabric.
My setup:
I have two identical Semantic Models of type Direct Lake (Automatic).
I also duplicated the same Report, so each report connects to one of these models.
I’m on Fabric Capacity F128, both for the workspace where the Semantic Models are stored and the workspace where the Reports live.
The only difference between the two models:
On one of them, I performed an on-demand refresh (successful).
On the other one, I did not.
What happens:
When I open the report connected to the model without refresh, the visuals load and data is returned normally.
When I open the report connected to the model with refresh, I immediately get this error:
Error fetching data for this visual
You have reached the maximum allowable memory allocation for your tier. Consider upgrading to a tier with more available memory.
My question:
Why does this happen?
My assumption was that the report without refresh is using DirectQuery to fetch data (please correct me if I’m wrong).
But for the refreshed model, since it’s Direct Lake, why does it fail with a memory error instead of falling back to DirectQuery?
Any guidance or explanation would be really appreciated!
Thank you,
Dimitra
Hi @Anonymous,
A Direct Lake model always shows the latest data and does not need a refresh to update it. Direct Lake skips Import and DirectQuery mode and reads the data directly from the Delta tables in OneLake.
However, when you manually refresh a Direct Lake model, it drops the in-memory cached data and reloads it. This can cause a significant memory spike, particularly if you are running other items at the same time.
When you don't refresh it manually (and "Keep your Direct Lake data up to date" is enabled), the model is simply reframed automatically as the underlying data changes.
You could also monitor what else might be happening by looking at your memory usage, for example in DAX Studio.
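If a Fabric notebook is handier than DAX Studio, here is a minimal sketch (not from the original reply) of how you could check which columns are currently resident in memory and how much they use. It assumes the semantic-link (sempy) package and the DAX INFO functions are available on your engine version; the dataset and workspace names are placeholders.

```python
# Minimal sketch: inspect Direct Lake column residency/memory from a Fabric notebook.
# Assumes semantic-link (sempy) and the DAX INFO functions are available;
# "Sales Model" and "BI Workspace" are placeholder names.
import sempy.fabric as fabric

segments = fabric.evaluate_dax(
    dataset="Sales Model",        # placeholder semantic model name
    workspace="BI Workspace",     # placeholder workspace name
    dax_string="EVALUATE INFO.STORAGETABLECOLUMNSEGMENTS()",
)

# Column names can come back bracketed and may vary by engine version,
# so resolve them defensively before aggregating.
def col(df, fragment):
    return next(c for c in df.columns if fragment in c.upper())

size_col = col(segments, "USED_SIZE")
table_col = col(segments, "TABLE_ID")
resident_col = col(segments, "ISRESIDENT")

resident = segments[segments[resident_col] == True]  # value may be bool or 0/1
memory_by_table = resident.groupby(table_col)[size_col].sum().sort_values(ascending=False)
print(memory_by_table.head(10))  # largest in-memory (transcoded) tables first
```

Running this before and after a manual refresh should make the cache eviction and the reload visible.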
Proud to be a Datanaut!
Private message me for consulting or training needs.
Hi @Anonymous ,
Thanks @tayloramy and @collinq for the detailed explanations.
@Anonymous, to recap the key points that might clarify the difference you're seeing, please check the things below.
Thanks,
Akhil.
Hi @Anonymous ,
I hope the responses provided helped resolve the issue. If you still have any questions, please let us know; we are happy to help.
Thanks,
Akhil.
Hi @Anonymous ,
Just checking back in: were you able to review the model size and memory usage during refresh? If you're still hitting the same issue, could you share those details so we can dig a bit deeper into what's causing the guardrail to trigger?
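For example, one possible way to pull the model and column size numbers from a Fabric notebook, assuming the open-source semantic-link-labs package is installed (the item names below are placeholders, not from this thread):

```python
# Sketch only: produce Vertipaq Analyzer-style size statistics for a semantic model.
# Assumes the open-source semantic-link-labs package is installed in the notebook
# environment (e.g. via %pip install semantic-link-labs in a Fabric notebook cell).
import sempy_labs as labs

# Returns summary tables (model, table and column sizes) that you can compare
# against the memory limits documented for your capacity SKU.
labs.vertipaq_analyzer(dataset="Sales Model", workspace="BI Workspace")
```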
Thanks,
Akhil.
Hi @Anonymous ,
Just following up to see if the points shared earlier helped clarify the behavior you were seeing with Direct Lake. Were you able to check the model size and memory usage during refresh as suggested?
This usually explains why the refreshed model hit the memory guardrail while the unrefreshed one worked fine.
Thanks,
Akhil.
Hi @Anonymous,
How big are your models?
When you hit “You have reached the maximum allowable memory allocation for your tier”, it means the query behind the visual tried to load more column data into memory than your capacity allows for that operation. In Direct Lake, the engine must pull the needed columns from the Delta files into VertiPaq memory (“transcoding”). If, at that moment, the memory needed exceeds the guardrails, the visual errors out. See Microsoft’s explanation of Direct Lake “framing” vs. Import refresh, and how Direct Lake loads only what a query needs into memory rather than the whole model: https://learn.microsoft.com/en-us/fabric/fundamentals/direct-lake-overview.
Chris Webb also documents the per-query memory guardrail behind these errors: https://blog.crossjoin.co.uk/2024/06/23/power-bi-semantic-model-memory-errors-part-4-the-query-memory-limit/
See the Microsoft docs for the amount of memory allowed for semantic models on each F SKU:
What is Power BI Premium? - Microsoft Fabric | Microsoft Learn
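As a rough, hypothetical check against those documented limits, you could compare the resident column size with the per-model memory cap for your SKU (the linked page lists about 50 GB for F128 at the time of writing; please verify the current table). A sketch using semantic-link, with placeholder names:

```python
# Sketch: compare resident column memory with the per-model cap documented for the SKU.
# Assumes semantic-link (sempy) and the DAX INFO functions; the names and the 50 GB
# figure are assumptions to verify against the linked Microsoft docs.
import sempy.fabric as fabric

F128_MODEL_MEMORY_GB = 50  # taken from the linked SKU table at time of writing; confirm

segments = fabric.evaluate_dax(
    dataset="Sales Model",        # placeholder
    workspace="BI Workspace",     # placeholder
    dax_string="EVALUATE INFO.STORAGETABLECOLUMNSEGMENTS()",
)
size_col = next(c for c in segments.columns if "USED_SIZE" in c.upper())
resident_gb = segments[size_col].sum() / 1024**3

print(f"In-memory column data: {resident_gb:.2f} GB (F128 per-model cap ~{F128_MODEL_MEMORY_GB} GB)")
```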
If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, mark this post as the solution.