Liam_McCauley
Advocate I

Can't refresh (reframe) semantic model because of error: PFE_PBIDEDICATED_THROTTLED_OUT_OF_MEMORY

We have a Direct Lake semantic model which we are trying to refresh (reframe), but we get the following error. Can anyone explain why it could run out of memory? My understanding was that a reframe simply looks up the metadata for the tables in the model.

We are running on an F256 capacity.

 

Full error:

 

{"error":{"code":"ASOperationExceptionError","pbi.error":{"code":"ASOperationExceptionError","parameters":{},"details":[{"code":"ModelingServiceError_Reason","detail":{"type":1,"value":"The operation was throttled by Power BI because of insufficient memory. Please try again later.\r\n"}},{"code":"ModelingServiceError_Location","detail":{"type":1,"value":"ModelingEngineHost"}},{"code":"ModelingServiceError_ExceptionType","detail":{"type":1,"value":"ModelingASOperationException"}},{"code":"ModelingServiceError_Message","detail":{"type":1,"value":"The operation was throttled by Power BI because of insufficient memory. Please try again later.\r\n"}},{"code":"ModelingServiceError_UserErrorCategory","detail":{"type":1,"value":"Unknown"}},{"code":"ModelingServiceError_AdditionalErrorCode","detail":{"type":1,"value":"PFE_PBIDEDICATED_THROTTLED_OUT_OF_MEMORY"}}],"exceptionCulprit":1}}}

 

1 ACCEPTED SOLUTION
Liam_McCauley
Advocate I

Claude Sonnet 4.5 came back with the answer that worked. It said that Fabric was likely timing out due to memory constraints during a schema discovery operation, and it suggested I use Tabular Editor to manually remove the column that the original refresh was complaining about. I did this, found that the developers had removed several more columns, removed those from the model as well, and that seems to have resolved the original issue.


8 REPLIES
Liam_McCauley
Advocate I

Hi v-sgandrathi,

 

As I said in my post on Sunday, Claude Sonnet 4.5 came back with the solution, so no further assistance is needed.

 

Thanks,

Liam

v-sgandrathi
Community Support

Hi @Liam_McCauley,

 

I wanted to follow up on our previous suggestions regarding the issue. We would love to hear back from you to ensure we can assist you further.

 

Thank you.

v-sgandrathi
Community Support

Hi @Liam_McCauley,

 

In Direct Lake semantic models, actions like Edit tables - Confirm cause a full schema discovery and validation by the Modeling Engine. If the model still includes columns that were deleted from the underlying Delta tables, Fabric will try to resolve these mismatches, which can use a lot of memory and result in PFE_PBIDEDICATED_THROTTLED_OUT_OF_MEMORY errors, even if overall capacity appears normal.

While running OPTIMIZE and VACUUM can help reduce metadata overhead, the main problem here was outdated column references in the semantic model. Removing these missing columns manually with Tabular Editor reduced the schema validation work and allowed the process to complete.

This highlights that for Direct Lake models, after schema changes like column deletions, it's usually more effective to update the semantic model directly (for example, using Tabular Editor) instead of relying only on the Edit Tables or Confirm options.
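
As a rough illustration only (not an official procedure), you could compare the column metadata the model holds with the current Delta table schema from a Fabric notebook using semantic-link. The dataset and table names below are placeholders, and the exact column labels returned by list_columns should be checked against your sempy version.

import sempy.fabric as fabric

dataset_name = "My Direct Lake Model"   # placeholder
table_name = "FactSales"                # placeholder

# Columns as the semantic model currently defines them
# (the "Table Name" / "Column Name" labels are assumptions about sempy's output)
cols = fabric.list_columns(dataset=dataset_name)
model_columns = set(cols.loc[cols["Table Name"] == table_name, "Column Name"])

# Columns as they exist right now in the underlying Delta table
# (assumes the Lakehouse is attached to the notebook, where `spark` is available)
delta_columns = set(spark.table(table_name).columns)

stale = model_columns - delta_columns
print("In the model but missing from the Delta table:", stale)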

 

Thank you.

Liam_McCauley
Advocate I

Claude Sonnet 4.5 came back with the answer that worked. It said that Fabric was likely timing out due to memory constraints during a schema discovery operation, and it suggested I use Tabular Editor to manually remove the column that the original refresh was complaining about. I did this, found that the developers had removed several more columns, removed those from the model as well, and that seems to have resolved the original issue.
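
For anyone who would rather do the clean-up from a notebook than in Tabular Editor, something along these lines should also work with semantic-link-labs. I haven't run this exact code, so treat the connect_semantic_model usage and the names below as assumptions to verify against the library's documentation.

from sempy_labs.tom import connect_semantic_model

dataset_name = "My Direct Lake Model"   # placeholder
table_name = "FactSales"                # placeholder
column_name = "OldColumn"               # placeholder: a column already deleted from the Delta table

# Assumption: connect_semantic_model opens a writable TOM connection and saves changes on exit
with connect_semantic_model(dataset=dataset_name, readonly=False) as tom:
    table = tom.model.Tables[table_name]
    stale_column = table.Columns[column_name]
    table.Columns.Remove(stale_column)   # drop the stale column reference from the model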

Liam_McCauley
Advocate I

I've just realised that I mis-reported what I was doing when I got the error: instead of a refresh, I was actually doing "Edit tables", "Confirm", so that the model would recognise a change to one of the referenced tables (a column was removed, which naturally was causing the refresh to fail).

 

I have kept an eye on the capacity usage; it has stayed around 55% and certainly never neared 80%+.

 

I ran OPTIMIZE and VACUUM overnight (it took about 10 hours).

 

I tried to edit the semantic model this morning ("Edit tables", "Confirm"), but got exactly the same error.

Kejmil
Regular Visitor

As mentioned by @Anusha66, the problem is usually related to Fabric capacity usage. Check the Fabric Capacity Metrics app to see how close you are to exceeding the limit.

Liam_McCauley
Advocate I

Thanks Anusha.

 

The capacity utilisation is showing around 55%, so that looks fine.

 

I'll check out the files, though, and perform maintenance on them.

 

Thanks,

Liam

Anusha66
Advocate III

Direct Lake refresh doesn't load data, but it still scans all the Parquet metadata.
If the Delta table has thousands of files or a large _delta_log, the Modeling Engine can hit memory limits. Running OPTIMIZE and VACUUM, or otherwise reducing the number of small files, should fix it.
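
For example, a minimal sketch of that maintenance from a Fabric notebook attached to the Lakehouse (the table name is a placeholder and the retention period should match your own time-travel requirements):

# OPTIMIZE compacts small Parquet files; VACUUM removes files no longer referenced by the Delta log
spark.sql("OPTIMIZE FactSales")
spark.sql("VACUUM FactSales RETAIN 168 HOURS")   # 168 hours = 7 days, the default retention threshold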


You can also check what other activities are running at the same time. I would also suggest setting up alerts on your capacity usage (for example, at 80% CPU usage), so you get advance warning.
