I have a pipeline that runs 4 notebook tasks using the High concurrency feature for running multiple notebooks. I set the task timeout to "0.02:00:00" (2 hours).
However, since August 1, 2025, I have encountered around 4–5 cases where the Spark session gets stuck in a running state (e.g., still running after 39 hours) and is not force-stopped after the 2-hour timeout as expected.
Has anyone experienced the same issue, or is there a known workaround/fix?
Thank you in advance for your support and guidance!
Hi @nguyenhieubis , hope you are doing great. May we know if your issue is solved or if you are still experiencing difficulties? Please share the details, as it will help the community, especially others with similar issues.
Hi @v-hashadapu , I'm still monitoring it. The issue hasn't occurred again, but I'm not sure whether it will return, so it's still worth watching. I know Fabric is still being developed and improved, so I hope this issue gets fixed to make the platform more stable.
Hi @nguyenhieubis , Thanks for sharing the information here. I hope the issue is cleared permanently for you. If you have any other queries, please feel free to raise a new post in the community. We are always happy to help.
@nguyenhieubis : It looks like a known issue in Fabric, especially with the high concurrency runtime in Spark. The task timeout is enforced only at the orchestration layer, where the pipeline waits for a notebook to complete; it does not forcefully terminate the Spark session. You can try setting an explicit timeout inside the notebook itself, as in the sketch below.
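There are two common ways to enforce the limit from inside the notebook. If child notebooks are orchestrated from a parent notebook, mssparkutils.notebook.run accepts a per-run timeout in seconds. For a standalone notebook task, a watchdog can stop the Spark session after a chosen limit. Below is a minimal watchdog sketch, assuming mssparkutils is available in the Fabric notebook runtime; MAX_RUNTIME_SECONDS and the workload placeholder are illustrative, not part of the original post.

```python
# Minimal watchdog sketch for a Fabric notebook (assumption: mssparkutils
# is available in the runtime; MAX_RUNTIME_SECONDS is a value you choose).
import threading
from notebookutils import mssparkutils

MAX_RUNTIME_SECONDS = 2 * 60 * 60  # mirror the 2-hour pipeline timeout

def _force_stop_session():
    # Abruptly ends the Spark session so the notebook task cannot hang
    # past the intended limit; any in-flight work is lost.
    print(f"Watchdog: exceeded {MAX_RUNTIME_SECONDS}s, stopping Spark session.")
    mssparkutils.session.stop()

# Daemon timer thread: fires once after MAX_RUNTIME_SECONDS unless cancelled.
watchdog = threading.Timer(MAX_RUNTIME_SECONDS, _force_stop_session)
watchdog.daemon = True
watchdog.start()

try:
    # ... the notebook's actual workload goes here ...
    pass
finally:
    # Cancel the watchdog if the work finishes within the limit.
    watchdog.cancel()
```

Note that stopping the session this way is abrupt, so make sure any writes are checkpointed or idempotent before relying on it; the pipeline will still report the task as failed or cancelled rather than succeeded.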
Thanks!