Hi @um4ndr ,
Thanks for reaching out to the Microsoft Fabric community forum.
This error can have a few possible causes:
Session or Statement Expiry:
Livy sessions and their statements do not persist forever. If your notebook is idle or takes too long to execute a cell, the statement might expire or be cleaned up before it is accessed again.
Network Latency or Disruption:
A brief network glitch between your environment and the Livy server might cause this.
Race Condition or Resource Unavailability:
If many users or notebooks share a Spark pool or compute cluster, Livy might not process requests reliably under load.
Cold Start:
If your notebook is connecting to a Spark pool that was stopped or in a cold start, the first requests might fail before the backend is ready.
Workaround:
As you already noticed, waiting and retrying later worked, which supports the idea of a temporary availability or latency issue.
Suggested fixes:
Increase Livy timeout settings (if configurable):
Look for options like livy.server.session.timeout and livy.server.statement.timeout (in Azure or Fabric, these might be set via notebook/session config; see the sketch after this list).
Ensure retries are not too fast:
If your retry mechanism is immediate, add a delay (e.g., exponential backoff) between retries; a retry sketch follows the warm-up snippet below.
Use a small, quick Spark job as the first cell:
Add a small warm-up job (e.g., spark.range(10).count()) at the beginning to keep the session active and ensure cluster readiness.
Cluster autoscaling or pool warm-up:
If you are using Azure Synapse or Fabric, keep your Spark pool warm if needed, or pre-allocate resources to reduce cold-start delays.
Check backend service health (Livy logs / Fabric logs):
If this is a recurring issue, look into the cluster logs, or raise a support ticket with Microsoft if you're using Fabric.
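A hedged sketch for the timeout settings above: in Synapse and Fabric notebooks, session-level settings are typically passed through the %%configure magic in the first cell. Note that the Livy properties shown are often server-side settings, so whether a session can actually override them depends on the platform; the key name and value below are illustrative only, not a confirmed Fabric API.

%%configure -f
{
    "conf": {
        "livy.server.session.timeout": "2h"
    }
}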
Run a lightweight job to keep the session active:
spark.range(1, 10).count()
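For the retry advice above, here is a minimal sketch (not from the original post) that wraps the warm-up job in an exponential-backoff retry loop. The helper name run_with_backoff and the retry parameters are illustrative assumptions; in practice you would catch the specific Livy/HTTP exception you are seeing rather than a bare Exception, and spark is the notebook's built-in SparkSession.

import time

def run_with_backoff(action, max_retries=4, base_delay=5):
    """Run a Spark action, retrying with exponential backoff on failure."""
    for attempt in range(max_retries):
        try:
            return action()
        except Exception as exc:  # ideally, catch only the Livy/HTTP error you see
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt)  # 5s, 10s, 20s, ...
            print(f"Attempt {attempt + 1} failed ({exc}); retrying in {delay}s")
            time.sleep(delay)

# Warm-up job: a trivial action that confirms the session is usable.
run_with_backoff(lambda: spark.range(1, 10).count())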
Related documentation:
Handle Livy Errors on Apache Spark in Synapse - Azure Synapse Analytics | Microsoft Learn
Spark Livy Session Timeout Issue - Microsoft Q&A
Submit Spark session jobs using the Livy API - Microsoft Fabric | Microsoft Learn
If this post helped resolve your issue, please consider giving it Kudos and marking it as the Accepted Solution. This not only acknowledges the support provided but also helps other community members find relevant solutions more easily.
We appreciate your engagement and thank you for being an active part of the community.
Best regards,
LakshmiNarayana.
Hello burakkaragoz,
After reviewing the proposed solutions, I chose "Increase Livy timeout settings". Since I'm not using the default environment, I'll need to make the necessary changes there, but I don't fully understand which of the settings suits me best.
By the way, this problem has not returned yet 🙂
Hi @um4ndr ,
If your issue has been resolved, please consider marking the most helpful reply as the accepted solution. This helps other community members who may encounter the same issue to find answers more efficiently.
If you're still facing challenges, feel free to let us know—we’ll be glad to assist you further.
Looking forward to your response.
Best regards,
LakshmiNarayana.
Hi @um4ndr ,
If your issue has been resolved, please mark the most helpful reply as the Accepted Solution to close the thread. This helps ensure the discussion remains useful for other community members.
Thank you for your attention, and we look forward to your confirmation.
Best regards,
LakshmiNarayana
Hi @um4ndr ,
As we haven't heard back from you, we are closing this thread. If you are still experiencing the same issue, please create a new thread, and we'll be happy to assist you further.
Thank you for your patience and support.
If our response was helpful, please mark it as the Accepted Solution.
Feel free to reach out if you need any further assistance.
Best Regards,
Lakshmi Narayana
Thanks for the quick reply. I'll try.
Hey @um4ndr
Sounds good – give it a shot and let me know how it goes!
If it still acts up, feel free to drop the exact error or behavior here and I’ll help you troubleshoot further.
We’ll get it working one way or another 💪
Hi @um4ndr ,
This error usually happens when the Livy session goes stale or is reset while your notebook is still trying to send a command. That's why you're seeing:
InvalidHttpRequestToLivy: Submission failed due to error content =["Statement not found"]
You can check whether the underlying Spark context has already stopped by running:
spark.sparkContext._jsc.sc().isStopped()
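If that check returns True, the context behind the Livy session is gone, and retrying statements in the same session will keep failing. A minimal sketch of how you might use it (illustrative, not from the original post):

# If the underlying Spark context has stopped, the Livy session is dead;
# restart the notebook session (or start a new one) before retrying.
if spark.sparkContext._jsc.sc().isStopped():
    print("Spark context stopped - restart the session before submitting new statements.")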
If my response resolved your query, kindly mark it as the Accepted Solution to assist others. Additionally, I would be grateful for a 'Kudos' if you found my response helpful.