Anonymous
Not applicable

In Fabric Failed to create session for executing notebook

In Fabric, a user has set up a pipeline containing a notebook, scheduled to run every 15 minutes. The pipeline runs 96 times a day, and 7 of those runs failed with the following error:

{"ename": "Exception", "evalue": "Failed to create session for executing notebook.", "traceback": ["Exception: Failed to create session for executing notebook. SparkCoreError/Other: Livy session has failed. Error code: SparkCoreError/Other. SessionInfo.State from SparkCore is Error: Session acquisition failed due to session post personalization failure. Error code: LM_LibraryManagementPersonalizationStatement_Error. Source: System."], "message": "Notebook execution is in Failed state."}


Please note: the pipeline consists of only one activity, a notebook. In most cases the pipeline runs successfully, but the user needs to understand the reasons behind the 7 failed runs, as this impacts the actual production environment. The 7 failures out of 96 pipeline runs occurred after the user modified the following configurations.

The old configuration: 

Richardzhu_0-1736747311950.png

The new configuration: 

Richardzhu_1-1736747333789.png

Ask:

1. What is the reason for the 7 pipeline run failures out of the 96 runs in Fabric?

2. Can the likelihood of pipeline run errors be reduced by modifying settings in the Fabric interface? If so, what specific modifications should be made?

3. Could these settings cause the pipeline runs to fail? If so, could you please explain the reasons for the errors?

Here are some screenshot details:

Richardzhu_2-1736749974793.png

Richardzhu_3-1736750079664.png


1 ACCEPTED SOLUTION
v-achippa
Community Support

Hi @Anonymous,

Thank you for reaching out to Microsoft Fabric Community.

 

The error message indicates that the pipeline failed because a Spark session could not be created. Please try the following:

  • In the new configuration, the minimum number of executor instances was increased. This higher demand may exceed the cluster's available resources during peak load, leading to session-initialization failures. Adjust the number of executors to a more manageable range (similar to the old configuration) to reduce the likelihood of resource exhaustion.
  • The higher number of executors, each with 16 cores and 112 GB of memory, can exhaust cluster capacity, especially if other workloads run concurrently. In addition, driver memory was reduced drastically from 224 GB to 56 GB while executor memory remains high at 112 GB; this imbalance can make the Spark application inefficient. Increase driver memory to match executor memory more closely.
  • Running the notebook every 15 minutes (96 times per day) puts significant stress on the cluster. Increase the interval between pipeline runs (to 20–30 minutes) to reduce cluster load.
  • Ensure all libraries and dependencies required by the notebook are pre-installed and verified, to avoid personalization errors (the LM_LibraryManagementPersonalizationStatement_Error code points to library management during session personalization).
  • Add a retry mechanism to the pipeline to handle transient failures automatically.
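As a sketch of the retry idea: the simplest option is the notebook activity's built-in Retry and Retry interval settings in the pipeline, but if the work is orchestrated from a driver notebook (e.g. via mssparkutils.notebook.run), a backoff wrapper like the one below could absorb transient session failures. All names and values here are illustrative, not taken from the original post.

```python
import time

def run_with_retry(run_notebook, max_attempts=3, base_delay=60):
    """Call run_notebook(), retrying on failure with exponential backoff.

    run_notebook is any zero-argument callable, e.g. (illustrative)
    lambda: mssparkutils.notebook.run("my_notebook").
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return run_notebook()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: surface the failure to the pipeline
            # Back off 60 s, 120 s, 240 s, ... before the next try
            time.sleep(base_delay * 2 ** (attempt - 1))
```

Keep in mind that retrying only masks transient failures; if the session errors are caused by sustained resource pressure, the sizing changes above are still needed.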

Implement these changes to stabilize the pipeline and reduce the likelihood of failures.
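If the session sizing needs to be set per notebook rather than at the environment level, Fabric notebooks support a %%configure magic in the first cell to request session resources. A sketch with illustrative values (the exact keys supported depend on the runtime, and the numbers must be adjusted to your capacity):

```
%%configure -f
{
    "driverMemory": "112g",
    "driverCores": 16,
    "executorMemory": "112g",
    "executorCores": 16,
    "numExecutors": 2
}
```

This mirrors the advice above: driver memory matched to executor memory, and a modest executor count to avoid exhausting the capacity.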

 

If this post helps, please consider accepting it as the solution to help other members find it more quickly, and don't forget to give a "Kudos" – I’d truly appreciate it!

 

Thanks and regards,

Anjan Kumar Chippa


4 REPLIES 4
v-achippa
Community Support

Hi @Anonymous,

 

Thank you for reaching out to Microsoft Fabric Community.

 

As we haven’t heard back from you, we wanted to kindly follow up to check whether the solution provided resolved the issue. Please let us know if you need any further assistance.
If my response addressed your issue, please mark it as the accepted solution and click Yes if you found it helpful.

 

Regards,

Anjan Kumar Chippa

nilendraFabric
Community Champion

Hi @Anonymous, the error message indicates that the failures occurred because a Spark session could not be created for executing the notebook. The root cause is likely resource constraints or session-management issues within the Spark environment.

Could you please confirm whether any other jobs were running in the Fabric environment during those 7 failures?

Anonymous
Not applicable

Hello @nilendraFabric, in the Fabric environment there were no other pipeline runs; only a single pipeline containing a notebook activity was running.
