gban
Frequent Visitor

Refresh fails with out of memory randomly

Hi, lately my scheduled dataset refreshes have started to fail intermittently with this error:

The M evaluation exceeded the memory limit. 
To address the issue consider optimizing the M expressions,
reducing the concurrency of operations that are memory intensive or upgrading to increase the available memory.
Container exited unexpectedly with code 0x0000DEAD. PID: 10040.
Used features: (none).;
Container exited unexpectedly with code 0x0000DEAD. PID: 10040.
Used features: (none).
Container exited unexpectedly with code 0x0000DEAD. PID: 10040.
Used features: (none).
Container exited unexpectedly with code 0x0000DEAD. PID: 10040.. Table: xxx.

Now when I look at the memory usage in my Premium embedded capacity (A2), it peaks at about half of the available memory (an A2 should have 5 GB available):

[screenshot: A2 capacity memory usage over time, peaking at about half of the available 5 GB]

I cannot reproduce the issue when refreshing in Power BI Desktop. The refresh involves < 3M rows (slowly increasing; there's no fluctuation in row count), and the final dataset size according to DAX Studio is a joke: ~20 MB.

 

Any ideas on how to troubleshoot this, or which capacity settings might need tuning?

 

P.S. Kudos to the Power BI developers who picked a funny container exit code 😄

1 ACCEPTED SOLUTION
lbendlin
Super User

Sorry to hear about your troubles.  I am certain you'll find the issue soon.

 

Using up all the memory in a capacity is not a good idea - it would block other dataset refreshes and prevent other datasets from loading into memory.

Here's an interesting fact: During refresh a dataset occupies twice the amount of memory in the capacity. The "old"  copy of the dataset serves the users while the "new" copy of the dataset is mashed up. Once the refresh is successful, the new copy replaces the old copy and the memory is released.

That means not only is it a bad idea to use up all the memory, it is also a bad idea to use up more than half the memory...
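As a rough worked example (using the 5 GB A2 figure from the question, and ignoring mashup-engine overhead): a 2 GB dataset needs about 2 × 2 GB = 4 GB at refresh peak, leaving only ~1 GB for everything else on the capacity - so on an A2 each dataset should stay comfortably under ~2.5 GB.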

 

Anyway - I would recommend you do your investigation in baby steps.  Take your Power Query, start it over and do diagnostics every time you add a step. Publish to the workspace/app, run a refresh, and see if you get the funny error message again. Rinse and repeat until you have identified the culprit.  Most likely a cartesian product somewhere between two large tables.
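To see why, here is a minimal M sketch (hypothetical two-row tables, not from the report in question) of how a merge on a key that is not unique on either side multiplies rows:

    let
        // Hypothetical tiny tables just for illustration.
        Left  = Table.FromRecords({[Key = 1, A = "x"], [Key = 1, A = "y"]}),
        Right = Table.FromRecords({[Key = 1, B = "p"], [Key = 1, B = "q"]}),
        // Key = 1 appears twice on BOTH sides, so the inner join yields
        // 2 x 2 = 4 rows - a partial cartesian product. With millions of
        // rows and a low-cardinality key this explodes during refresh.
        Joined = Table.NestedJoin(Left, {"Key"}, Right, {"Key"}, "R", JoinKind.Inner),
        Merged = Table.ExpandTableColumn(Joined, "R", {"B"})
    in
        Merged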


4 REPLIES

Is there any solution to this issue? I am running into a similar issue. Currently we barely have any data; much of it is blank. In Desktop it hardly takes a minute to refresh, but in the Service it runs for a long time and then throws this error.


gban
Frequent Visitor

I've used Power BI diagnostic recording for the whole refresh, but either I did it the wrong way or it did not provide anything useful.

 

It produced a diagnostics counters table with memory consumption, but the captured timeframe was shorter than the whole refresh (not sure why), and it fluctuated a lot across several attempts.

I've disabled background data loading and parallel table loading, but that did not help.

Basically I've spent a lot of time without getting anything that would point me straight at the problem.

 

When diagnosing a single step, does it process the whole dataset or just a sample? In other words, would I see memory usage similar to the usage during a full refresh, or only for some sample data, so that I'd have to extrapolate it somehow?

 

I have merge operations, and even if it were possible to avoid them (probably by doing the joins on SQL Server with heavy PARTITION BY ... OVER queries), that would make the logic more complicated and create a lot of coupling to the database (one in-query alternative is sketched below).

I don't have append operations. 
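For reference, one commonly suggested in-M mitigation for memory-heavy merges is the sort-merge join hint. A minimal sketch, using hypothetical stand-in tables, and assuming both sides really do arrive sorted by the join key (SortMerge gives wrong results otherwise):

    let
        // Hypothetical stand-ins for the real source tables. Both sides
        // must already be sorted ascending by Key for SortMerge to be valid.
        Left  = Table.Sort(Table.FromRecords({[Key = 2, A = "y"], [Key = 1, A = "x"]}), {{"Key", Order.Ascending}}),
        Right = Table.Sort(Table.FromRecords({[Key = 1, B = "p"], [Key = 2, B = "q"]}), {{"Key", Order.Ascending}}),
        // JoinAlgorithm.SortMerge streams both inputs in key order instead
        // of buffering one side in memory, which can lower the refresh peak.
        Merged = Table.Join(Left, "Key", Right, "Key", JoinKind.Inner, JoinAlgorithm.SortMerge)
    in
        Merged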

 

What I would like to understand is why it does not use all of the available RAM in the capacity.

lbendlin
Super User

Use the Power Query step diagnostics options to see which steps take up the most memory.  Do you have merge/append operations?
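Besides the UI-driven diagnostics (Power Query's Tools ribbon has Diagnose Step and Start/Stop Diagnostics), you can also instrument a single step yourself with Diagnostics.Trace. A minimal sketch, with a hypothetical stand-in source:

    let
        // Hypothetical tiny table standing in for a real source step.
        Source = Table.FromRecords({[Key = 1, A = "x"]}),
        // Diagnostics.Trace writes a message to the mashup trace log when
        // the wrapped value is evaluated. With delayed = true the third
        // argument is a function, so the trace fires at actual evaluation
        // time (M is lazy), not when the query text is parsed.
        RowCount = Diagnostics.Trace(
            TraceLevel.Information,
            "Evaluating Source row count",
            () => Table.RowCount(Source),
            true
        )
    in
        RowCount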
