We have a Direct Lake semantic model which we are trying to refresh (reframe), but we get the following error. Can anyone explain why it could run out of memory? My understanding was that a reframe simply looks up the metadata for the tables in the model.
We are running F256.
Full error:
{
  "error": {
    "code": "ASOperationExceptionError",
    "pbi.error": {
      "code": "ASOperationExceptionError",
      "parameters": {},
      "details": [
        { "code": "ModelingServiceError_Reason", "detail": { "type": 1, "value": "The operation was throttled by Power BI because of insufficient memory. Please try again later.\r\n" } },
        { "code": "ModelingServiceError_Location", "detail": { "type": 1, "value": "ModelingEngineHost" } },
        { "code": "ModelingServiceError_ExceptionType", "detail": { "type": 1, "value": "ModelingASOperationException" } },
        { "code": "ModelingServiceError_Message", "detail": { "type": 1, "value": "The operation was throttled by Power BI because of insufficient memory. Please try again later.\r\n" } },
        { "code": "ModelingServiceError_UserErrorCategory", "detail": { "type": 1, "value": "Unknown" } },
        { "code": "ModelingServiceError_AdditionalErrorCode", "detail": { "type": 1, "value": "PFE_PBIDEDICATED_THROTTLED_OUT_OF_MEMORY" } }
      ],
      "exceptionCulprit": 1
    }
  }
}
Claude Sonnet 4.5 came back with the answer that worked. It said that Fabric was likely timing out due to memory constraints during a schema discovery operation, and it suggested I use Tabular Editor to manually remove the column that the original refresh was complaining about. I did this, found that the developers had removed several more columns, removed those as well, and that seems to have resolved the original issue.
Hi v-sgandrathi,
As I said in my post on Sunday, Claude Sonnet 4.5 came back with the solution, so no further assistance needed.
Thanks,
Liam
Hi @Liam_McCauley,
I wanted to follow up on our previous suggestions regarding the issue. We would love to hear back from you to ensure we can assist you further.
Thank you.
Hi @Liam_McCauley,
In Direct Lake semantic models, actions like Edit tables - Confirm cause a full schema discovery and validation by the Modeling Engine. If the model still includes columns that were deleted from the underlying Delta tables, Fabric will try to resolve these mismatches, which can use a lot of memory and result in PFE_PBIDEDICATED_THROTTLED_OUT_OF_MEMORY errors, even if overall capacity appears normal.
While running OPTIMIZE and VACUUM can help reduce metadata overhead, the main problem here was outdated column references in the semantic model. Removing these missing columns manually with Tabular Editor limited schema validation and allowed the process to complete.
This highlights that for Direct Lake models, after schema changes like column deletions, it's usually more effective to update the semantic model directly (for example, using Tabular Editor) instead of relying only on the Edit Tables or Confirm options.
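The stale-column check described above can be sketched in a few lines. This is a minimal illustration, assuming you have exported the semantic model's column list (for example from Tabular Editor) and the current Delta table schema; all column names below are hypothetical examples, not from the actual model.

```python
# A minimal sketch of the stale-column check. Column names are
# hypothetical examples; substitute your own model and table schemas.

def find_stale_columns(model_columns, delta_columns):
    """Return model columns that no longer exist in the Delta table."""
    delta_set = {c.lower() for c in delta_columns}
    return [c for c in model_columns if c.lower() not in delta_set]

# Columns the semantic model still references:
model_columns = ["SaleId", "Amount", "LegacyFlag", "OldRegionCode"]

# Columns actually present in the underlying Delta table:
delta_columns = ["SaleId", "Amount", "Region"]

# Any columns returned here are candidates for deletion in Tabular Editor.
stale = find_stale_columns(model_columns, delta_columns)
print(stale)
```

Running this against each table in the model tells you which columns to delete before retrying "Edit tables" / "Confirm".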
Thank you.
I've just realised that I mis-reported what I was doing when I got the error: instead of a refresh, I was actually doing "Edit tables" then "Confirm", so that the model would recognise a change to one of the referenced tables (a column was removed, which naturally was causing the refresh to fail).
I have kept an eye on the capacity usage, and it has stayed around 55%, and certainly never neared 80%+.
I ran OPTIMIZE and VACUUM overnight (it took about 10 hours).
I tried to edit the Semantic model this morning ("Edit tables", "Confirm"), but get exactly the same error.
As mentioned by @Anusha66, the problem may relate to Fabric capacity usage. Check the Fabric Capacity Metrics app to see how close you are to exceeding the limit.
Thanks Anusha.
The capacity utilisation is showing around 55%, so that looks fine.
I'll check out the files, though, and perform maintenance on them.
Thanks,
Liam
A Direct Lake refresh (reframe) doesn't load data, but it still scans all Parquet file metadata.
If the Delta table has thousands of files or a large _delta_log, the Modeling Engine can hit memory limits. Run OPTIMIZE and VACUUM, or otherwise reduce the number of small files, to fix it.
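In a Fabric notebook, that maintenance can be done with Delta Lake's OPTIMIZE and VACUUM SQL commands via Spark. The sketch below only builds the statements; the table name "Sales" is a hypothetical example, and the commented-out `spark.sql` calls assume a notebook with an active Spark session.

```python
# A minimal sketch of Delta table maintenance for tables behind a
# Direct Lake model. "Sales" is a hypothetical table name; list the
# tables your model actually references.

tables = ["Sales"]

for table in tables:
    # Compact many small Parquet files into fewer large ones,
    # which shrinks the file list the Modeling Engine must scan.
    optimize_sql = f"OPTIMIZE {table}"

    # Remove data files no longer referenced by the Delta log
    # (168 hours matches the default 7-day retention period).
    vacuum_sql = f"VACUUM {table} RETAIN 168 HOURS"

    print(optimize_sql)
    print(vacuum_sql)
    # In a Fabric notebook you would execute these with:
    # spark.sql(optimize_sql)
    # spark.sql(vacuum_sql)
```

Note that VACUUM with a retention shorter than the default requires relaxing Delta's retention-duration safety check, so the default retention is used here.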
You can also check what other activities are running at the same time. I'd suggest setting alerts on your capacity usage (e.g. at 80% CPU usage), so you get advance warning.