Riktastic
Frequent Visitor

Livy session has failed. Session state: Dead,

Hi Fabricators!

 

Our daily run is sometimes hit by the following error while running mssparkutils.notebook.runMultiple():

 

Notebook execution failed at Notebook service with http status code - '200', please check the Run logs on Notebook, additional details - 'Error name - InvalidHttpRequestToLivy, Error value - Submission failed due to error content =["requirement failed: Session isn't active."] HTTP status code: 400. Trace ID: 321c0c00-5823-4047-afb2-b9990fea8b923.' :

(PS: there is no additional logging).

 

Sometimes we run into:

Spark_User_AutoClassification_attempt_Diagnostics: Livy session has failed. Session state: Dead, Error code: Spark_User_AutoClassification_attempt_Diagnostics. Job failed during run time with state=[dead]. Source: User.

 

Our current setup: a data pipeline invokes a parent notebook (using the notebook activity), and that notebook runs multiple other notebooks.

As a workaround, we have modified our data pipeline to rerun the notebook if it fails; the second attempt always succeeds.
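The same retry-on-failure workaround can also be expressed in code. Below is a minimal sketch of a retry wrapper; the `run_notebook` callable is a stand-in for the real `mssparkutils.notebook.run` call (which only exists inside a Fabric notebook), so names and defaults here are assumptions:

```python
import time

def run_with_retry(run_notebook, name, retries=1, delay_seconds=30):
    """Retry a notebook run if the first attempt fails (e.g. a dead Livy session).

    `run_notebook` is a stand-in for mssparkutils.notebook.run; inside Fabric
    you would pass that function (or a small lambda wrapping it) directly.
    """
    last_error = None
    for attempt in range(retries + 1):
        try:
            return run_notebook(name)
        except Exception as exc:  # e.g. "Session isn't active." from Livy
            last_error = exc
            if attempt < retries:
                # Give a fresh Spark session time to spin up before retrying.
                time.sleep(delay_seconds)
    raise last_error
```

Doing the retry inside the parent notebook (rather than rerunning the whole pipeline activity) may save capacity, since only the failed child is retried.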

We have checked our notebooks; none of them contain code that creates or stops a SparkSession. Googling turned up some Spark-related solutions, but none of them seem to apply to Microsoft Fabric.

We are currently on runtime 1.2 and have also tried 1.3, but sadly that didn't fix the issue.

 

Is anyone else experiencing this issue, or has anyone dealt with a similar situation before?

 

PS: It's a real bummer, as it costs us a lot of capacity.

1 ACCEPTED SOLUTION
v-pagayam-msft
Community Support

Hi @Riktastic ,
This error usually means the Spark session was not fully active when mssparkutils.notebook.runMultiple() was triggered. This can be due to idle timeouts, especially when the pipeline runs after a period of inactivity. To reduce failures, you may try adding a small command such as spark.range(1) at the start of the parent notebook to initialize the session, and if you are launching several notebooks at once, consider staggering them slightly. Also, reviewing your Spark pool's min/max node settings may help reduce session startup delays.
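The warm-up-and-stagger advice above can be sketched as a small helper. Since `spark` and `mssparkutils` only exist inside a Fabric notebook, they are passed in as callables here; the function and parameter names are illustrative assumptions, not a Fabric API:

```python
import time

def warm_up_and_run(spark_action, run_notebook, names, stagger_seconds=5):
    """Warm the Spark session, then launch child notebooks with a small stagger.

    `spark_action` stands in for something like spark.range(1).count(), and
    `run_notebook` stands in for mssparkutils.notebook.run; both are only
    available inside a Fabric notebook, so they are injected for illustration.
    """
    # Touch the session first so Livy has an active session before any child runs.
    spark_action()
    results = {}
    for name in names:
        results[name] = run_notebook(name)
        # Small pause between launches to avoid hammering the session at once.
        time.sleep(stagger_seconds)
    return results
```

Inside Fabric, you would call it roughly as `warm_up_and_run(lambda: spark.range(1).count(), mssparkutils.notebook.run, ["ChildA", "ChildB"])`.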
Please refer to the links below for detailed information:
https://learn.microsoft.com/en-us/fabric/data-engineering/microsoft-spark-utilities
https://learn.microsoft.com/en-us/fabric/data-engineering/get-started-api-livy-session 


I hope this resolves your query. If so, give us kudos and consider accepting it as the solution.

Regards,
Pallavi.


4 REPLIES
v-pagayam-msft
Community Support

Hi @Riktastic ,
I wanted to check in on your situation regarding the issue. Have you resolved it? If you have, please consider marking the reply that helped you or sharing your solution. It would be greatly appreciated by others in the community who may have the same question.
Thank you


v-pagayam-msft
Community Support

Hi @Riktastic ,
Following up to check whether you got a chance to review the suggestion given. If it helps, consider accepting it as the solution; that will help other members of the community with similar problems solve them faster. Glad to help.
Thank you.

v-pagayam-msft
Community Support

Hi @Riktastic ,
I wanted to check whether you had a chance to review our previous message. Please let me know if everything is sorted or if you need any further assistance. If the suggestion helps, consider accepting it as the solution.
Thank you.

