I have two pipelines that simply run notebooks in my medallion architecture and refresh the semantic model. They ran fine until yesterday. Below is the error:
Notebook execution failed at Notebook service with http status code - '200', please check the Run logs on Notebook, additional details - 'Error name - Exception, Error value - Failed to create Livy session for executing notebook. Error: [SparkSettingsMergeValidationError] Settings operation failed due to a validation error: Code = SparkSettingsComputeExceedsPoolLimit, Message = 'The cores or memory you claimed exceeds the limitation of the selected pool, claimed cores: 48, claimed memory: 336, cores limit: 16, memory limit: 112. Please decrease the num of executors or driver/executor size.' . [Root activity id: 0ce68bee-4f39-49fe-ba28-f18d63bb62ca] HTTP status code: 400.'
When I tried to run one of the notebooks manually to see the error, I got the following (I am seeing this error for the first time):
My notebook’s Spark configuration requested more cores or memory than the selected Lakehouse/Spark pool allows.
I am also attaching an image of the environment I am using.
Sounds like another job was running at the same time using the available CUs. Check the capacity metrics report. Glad to hear it's working now.
Neither the capacity nor the SKU was changed, and the notebook used to run fine before. I only got this error on that one day; it's working fine again now.
Hi @IAMCS ,
The failure is happening because the notebook is requesting more Spark resources than the pool supports.
Your job requested 48 cores / 336 GB RAM, while the selected pool only allows 16 cores / 112 GB RAM (exactly three times the pool's limit), so Fabric cannot create a Livy session and the pipeline fails before execution.
Reduce the Spark session settings (Run → Configure session): lower the driver size, executor size, or executor count; see the sketch below for one way to do this from the notebook itself.
Or use a larger Spark pool if the workload genuinely needs more compute.
Also confirm that the pipeline's Notebook activity isn't overriding the notebook's Spark settings.
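For example, something like the following as the first cell of the notebook should pin the session below a 16-core / 112 GB cap (a minimal sketch using the Fabric notebook %%configure magic; the sizes here are illustrative, so adjust them to your own pool):

%%configure -f
{
    "driverMemory": "28g",
    "driverCores": 4,
    "executorMemory": "28g",
    "executorCores": 4,
    "numExecutors": 2
}

With these values, the driver (4 cores / 28 GB) plus two executors (8 cores / 56 GB) totals 12 cores / 84 GB, safely under the 16-core / 112 GB limit shown in your error message.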
If this explanation helps, please consider giving a kudos 👍 and marking it as the Accepted Solution ✅ so it can help others as well.
Regards,
Shashi Paul
Hi @IAMCS
Has the capacity been changed at all? If the SKU has been reduced, or if several processes are running at the same time, that can cause this error. It can also happen if the workspace has been moved to a new capacity.
Try creating a new Spark environment from the dropdown at the top of the notebook.
This will ensure the environment is in the correct capacity.
In the environment's setup, go to Compute to give yourself more Spark driver cores if you want to improve the speed of the notebook.
Make sure to check your Capacity Metrics app to see whether it's just a standard capacity usage issue.
Also, check with colleagues; they may be running an intensive process that you're not aware of. To see what your session is actually asking for, you can print its settings from a notebook cell (see the sketch below).
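For instance, a quick cell like this (a sketch using standard Spark configuration keys and the spark session object that Fabric notebooks provide) prints what the session actually requested, falling back to 'not set' for keys that were never configured:

# Print the resource settings the current Spark session requested
for key in (
    "spark.driver.cores",
    "spark.driver.memory",
    "spark.executor.cores",
    "spark.executor.memory",
    "spark.executor.instances",
):
    print(key, "=", spark.conf.get(key, "not set"))

Comparing those values with the pool's cores and memory limits shows right away whether the session, or something overriding it, is asking for more than the pool allows.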
--------------------------------
I hope this helps, please give kudos and mark as solved if it does!