spencer_sa
Super User

Spark executor runs out of memory

We're getting the following error from one of our Spark notebook jobs.
It's handling a fair amount of data and it's currently running on the starter pool (Medium size, 1-10 nodes, 1 driver).
I'm going to experiment with custom pools (increasing the node size, reducing the node count), but if anyone has had similar experiences or can point to a definitive guide on node memory, I'd appreciate some pointers (my Google-fu is pulling up a lot of chaff).

[attached screenshot of the error: spencer_sa_0-1737463910445.png]

 

1 ACCEPTED SOLUTION
nilendraFabric
Super User

Hi @spencer_sa 

 

To address the Spark executor out-of-memory issue, you can try and test the following options:

 

Increase the executor memory using the %%configure magic command in your notebook:

 

%%configure -f
{
  "executorMemory": "56g"
}

Set the executor memory overhead to account for non-heap memory usage:

%%configure -f
{
  "conf": {
    "spark.executor.memoryOverhead": "8g"
  }
}

Enable dynamic allocation to automatically adjust the number of executors:

%%configure -f
{
  "conf": {
    "spark.dynamicAllocation.enabled": "true"
  }
}
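
If you prefer, all three settings can go in a single %%configure cell. A minimal sketch; the memory values and the min/max executor bounds below are illustrative assumptions for a medium pool, not tested recommendations, so tune them against your workload:

%%configure -f
{
  "executorMemory": "56g",
  "conf": {
    "spark.executor.memoryOverhead": "8g",
    "spark.dynamicAllocation.enabled": "true",
    "spark.dynamicAllocation.minExecutors": "1",
    "spark.dynamicAllocation.maxExecutors": "9"
  }
}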

 

Utilize Fabric’s Monitoring Hub and the Spark History Server to identify performance bottlenecks and optimize your Spark jobs.

 

Remember to restart your Spark session after making configuration changes. These optimizations should help mitigate executor out-of-memory errors and improve overall Spark job performance in Microsoft Fabric.
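
After the session restarts, a quick sanity check is to read the effective settings back from the live session. A minimal PySpark sketch, using the spark object Fabric notebooks provide:

# Read the effective settings back from the live session; keys that were
# never set fall back to the "<not set>" default instead of raising.
for key in (
    "spark.executor.memory",
    "spark.executor.memoryOverhead",
    "spark.dynamicAllocation.enabled",
):
    print(key, "=", spark.conf.get(key, "<not set>"))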

 

Custom pool creation could be an option as well.

 

 

Please see if this resolves your performance issue.

 

If this helps, please give kudos and accept the solution.

 

thanks

 


6 REPLIES
Wilfredkihara
Frequent Visitor

I am also experiencing the same problem, especially when saving into a Lakehouse Delta table.
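
A common mitigation when a Delta write blows executor memory is to break the work into more, smaller partitions before writing, so each task handles a smaller slice of data. A minimal PySpark sketch, assuming a DataFrame named df and a hypothetical table name my_table:

# Repartition before the write so each task processes less data at once.
(
    df.repartition(200)  # 200 is an illustrative count - tune to your data volume
      .write
      .format("delta")
      .mode("append")
      .saveAsTable("my_table")  # hypothetical table name
)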

Anonymous
Not applicable

Hi @spencer_sa ,

We haven't heard back from you regarding our last response and wanted to check if your issue has been resolved. If our response addressed your query, please mark it as Accept as Solution and click Yes if you found it helpful. If you have any further questions, feel free to reach out.

Thank you for being a part of the Microsoft Fabric Community Forum!

Anonymous
Not applicable

Hi @spencer_sa ,
As @nilendraFabric suggested, I wanted to check in on your situation regarding the issue. I hope the issue has been resolved. If yes, please consider marking the reply that helped you as Accept as Solution and give a 'Kudos' so other members can find it easily.
Thank you for being a part of the Microsoft Fabric Community Forum!

Regards,
Pallavi.

uselessai_in
Helper II

One possible reason is a sudden spike in data volume, which drives up memory usage. Check whether your table has any unusual history over the last few days, such as duplicates being created within the data or irrelevant data being inserted.
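
One way to check for that is the Delta table history API. A minimal PySpark sketch, where my_table is a placeholder for your actual table name:

from delta.tables import DeltaTable

# List the most recent operations (writes, merges, deletes) with their
# timestamps and metrics, to spot unexpected spikes in activity.
history = DeltaTable.forName(spark, "my_table").history(10)
history.select("version", "timestamp", "operation", "operationMetrics").show(truncate=False)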

Anonymous
Not applicable

Hi @spencer_sa ,
I just wanted to kindly follow up to see if you had a chance to review the previous responses provided by community members. I hope they were helpful. If yes, please Accept the answer so that others can find it quickly.
If you still require assistance, feel free to reach out.
Thank you for being a part of Microsoft Fabric Community Forum!

