spencer_sa
Super User

Spark executor runs out of memory

We're getting the following error from one of our Spark notebook jobs.
It's handling a fair amount of data and is currently running on the starter pool (Medium size, 1-10 nodes, 1 driver).
I'm going to try some experimenting with custom pools (increasing the node size, reducing the node count), but if anyone has had similar experiences or can point to some definitive guides on node memory, I'd appreciate the pointers (my Google-fu is pulling up a lot of chaff).

[Attached screenshot: spencer_sa_0-1737463910445.png]
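For reference, you can check what memory the current session's executors are actually getting from inside the notebook. This is a minimal sketch using standard Spark properties; the values returned depend on your pool:

# Inspect the effective executor settings for the current Spark session.
print(spark.conf.get("spark.executor.memory", "not set"))
print(spark.conf.get("spark.executor.memoryOverhead", "not set"))
print(spark.conf.get("spark.executor.cores", "not set"))
print(spark.conf.get("spark.dynamicAllocation.enabled", "not set"))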

 

1 ACCEPTED SOLUTION
nilendraFabric
Super User

Hi @spencer_sa 

 

To address the Spark executor out-of-memory issue, you can try and test the following options:

 

Increase the executor memory using the %%configure magic command in your notebook:

 

%%configure -f
{
    "executorMemory": "56g"
}

Set the executor memory overhead to account for non-heap memory usage:

%%configure -f
{
    "conf": {
        "spark.executor.memoryOverhead": "8g"
    }
}

Enable dynamic allocation to automatically adjust the number of executors:

%%configure -f
{
    "conf": {
        "spark.dynamicAllocation.enabled": "true"
    }
}
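If you want to apply these together, the settings can be combined into a single %%configure cell. This is just a sketch using the values above; the min/max executor counts are placeholder assumptions for a 1-10 node pool and should be tuned to your workload:

%%configure -f
{
    "executorMemory": "56g",
    "conf": {
        "spark.executor.memoryOverhead": "8g",
        "spark.dynamicAllocation.enabled": "true",
        "spark.dynamicAllocation.minExecutors": "1",
        "spark.dynamicAllocation.maxExecutors": "9"
    }
}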

 

Use Fabric's Monitoring Hub and the Spark History Server to identify performance bottlenecks and optimize your Spark jobs.

 

Remember to restart your Spark session after making configuration changes. These optimizations should help mitigate executor out-of-memory errors and improve overall Spark job performance in Microsoft Fabric.

 

Custom pool creation could be an option as well.

 

 

Please see if this resolves your performance issue.

If this helps, please give kudos and accept the solution.

Thanks

 


6 REPLIES
Wilfredkihara
Frequent Visitor

I am also experiencing the same problem, especially when saving into a Lakehouse Delta table.

v-pagayam-msft
Community Support

Hi @spencer_sa ,

We haven't heard back from you regarding our last response and wanted to check whether your issue has been resolved. If our response addressed your query, please mark it as Accept as Solution and click Yes if you found it helpful. If you have any further questions, feel free to reach out.

Thank you for being a part of the Microsoft Fabric Community Forum!

v-pagayam-msft
Community Support

Hi @spencer_sa ,
As @nilendraFabric suggested, I wanted to check in on your situation regarding the issue. I hope it has been resolved. If so, please consider marking the reply that helped you as Accept as Solution and give it a Kudos so other members can find it easily.
Thank you for being a part of the Microsoft Fabric Community Forum!

Regards,
Pallavi.

uselessai_in
Helper II

One of the reasons could be a sudden spike in data, which causes more memory usage. Try checking whether your table has any unusual history over the last few days that created duplicates or inserted irrelevant data.
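A quick way to check for this in a Fabric notebook is to look at the Delta table's recent history and run a duplicate count; the table name and key column below are placeholders for illustration:

from delta.tables import DeltaTable

# Placeholder table name; replace with your Lakehouse table.
tbl = DeltaTable.forName(spark, "my_lakehouse_table")

# Recent write operations and their row counts; look for unusually large inserts or merges.
tbl.history(10).select("version", "timestamp", "operation", "operationMetrics").show(truncate=False)

# Duplicate check on an assumed business key column "id".
df = spark.table("my_lakehouse_table")
df.groupBy("id").count().filter("count > 1").show()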

v-pagayam-msft
Community Support

Hi @spencer_sa ,
I just wanted to kindly follow up and see if you had a chance to review the previous responses provided by community members. I hope they were helpful. If so, please accept the answer so that others can find it quickly.
If you still require assistance, feel free to reach out.
Thank you for being a part of Microsoft Fabric Community Forum!

