Riktastic
Frequent Visitor

Livy session has failed. Session state: Dead,

Hi Fabricators!

 

Our daily run sometimes hits the following error while trying to run mssparkutils.notebook.runMultiple():

 

Notebook execution failed at Notebook service with http status code - '200', please check the Run logs on Notebook, additional details - 'Error name - InvalidHttpRequestToLivy, Error value - Submission failed due to error content =["requirement failed: Session isn't active."] HTTP status code: 400. Trace ID: 321c0c00-5823-4047-afb2-b9990fea8b923.' :

(PS: there is no additional logging).

 

Sometimes we run into:

Spark_User_AutoClassification_attempt_Diagnostics: Livy session has failed. Session state: Dead, Error code: Spark_User_AutoClassification_attempt_Diagnostics. Job failed during run time with state=[dead]. Source: User.

 

Our current setup: a data pipeline invokes a notebook (using the notebook activity), and that notebook runs multiple other notebooks.
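For context, a minimal sketch of what the parent notebook does (notebook names are placeholders, and the DAG shape follows the documented runMultiple format, so treat the details as an assumption):

```python
# Sketch of the DAG passed to mssparkutils.notebook.runMultiple()
# in the parent notebook. Names like "Load_Sales" are placeholders.
dag = {
    "activities": [
        {"name": "Load_Sales", "path": "Load_Sales", "dependencies": []},
        {"name": "Load_Customers", "path": "Load_Customers", "dependencies": []},
        {
            "name": "Transform",
            "path": "Transform",
            # Runs only after both load notebooks finish.
            "dependencies": ["Load_Sales", "Load_Customers"],
        },
    ],
    "timeoutInSeconds": 3600,  # overall timeout for the whole DAG
    "concurrency": 2,          # at most two children in parallel
}

# Inside the Fabric notebook this would then be submitted as:
# mssparkutils.notebook.runMultiple(dag)
```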

We have modified our data pipeline to rerun the notebook if it fails; the second attempt works perfectly.
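The retry behaviour can also be expressed as a small generic helper (plain Python; `task` here merely stands in for the notebook activity, which is our assumption, not a Fabric API):

```python
import time

def run_with_retry(task, attempts=2, delay_seconds=30):
    """Call `task` (a zero-argument callable); retry it on failure.

    Mirrors the pipeline-level workaround: if the first attempt dies
    with a Livy/session error, a later attempt usually gets a fresh
    session and succeeds.
    """
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            return task()
        except Exception as exc:  # in practice: the session failure
            last_error = exc
            if attempt < attempts:
                time.sleep(delay_seconds)  # let the pool recover
    raise last_error
```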

We have checked our notebooks; none of them contain code to create or stop a SparkSession. While googling we found some Spark-related solutions, but none of them seem to apply to MS Fabric.

We are currently on runtime 1.2 and have also tried 1.3, but sadly that didn't fix the issue.

 

Has anyone else experienced this issue, or dealt with a similar situation before?

 

PS: It is really a bummer as it costs us a lot of capacity.

1 ACCEPTED SOLUTION
Anonymous
Not applicable

Hi @Riktastic ,
This error usually means the Spark session was not fully active when mssparkutils.notebook.runMultiple() was triggered. This can be due to idle timeouts, especially when the pipeline runs after a period of inactivity. To reduce failures, you can try adding a small command like spark.range(1) at the start of the parent notebook to initialize the session, and if you are launching several notebooks at once, consider staggering them slightly. Also, reviewing your Spark pool's min/max node settings may help reduce session startup delays.
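The staggering idea can be sketched as a small helper (plain Python; inside Fabric each `task` would wrap a call such as mssparkutils.notebook.run("<child>"), which is an assumption for illustration):

```python
import time

def launch_staggered(tasks, gap_seconds=5):
    """Run zero-argument callables in order, pausing between launches.

    Spacing the launches out avoids hitting a not-yet-active Spark
    session with several submissions at once.
    """
    results = []
    for i, task in enumerate(tasks):
        if i > 0:
            time.sleep(gap_seconds)  # small gap between launches
        results.append(task())
    return results
```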
Please refer to the links below for detailed information:
https://learn.microsoft.com/en-us/fabric/data-engineering/microsoft-spark-utilities
https://learn.microsoft.com/en-us/fabric/data-engineering/get-started-api-livy-session 


I hope this resolves your query. If so, give us kudos and consider accepting it as the solution.

Regards,
Pallavi.


4 REPLIES
Anonymous
Not applicable

Hi @Riktastic ,
I wanted to check in on your situation regarding the issue. Have you resolved it? If so, please consider marking the reply that helped you or sharing your solution; it would be greatly appreciated by others in the community who may have the same question.
Thank you


Anonymous
Not applicable

Hi @Riktastic ,
Following up to check whether you have had a chance to review the suggestion given. If it helps, consider accepting it as the solution; that will help other members of the community with similar problems solve them faster. Glad to help.
Thank you.

Anonymous
Not applicable

Hi @Riktastic ,
I wanted to check whether you had a chance to review our previous message. Please let us know if everything is sorted or if you need any further assistance. If the suggestion helps, consider accepting it as the solution.
Thank you.

