I have a pipeline that runs 4 notebook tasks, using the high-concurrency feature to run multiple notebooks in one Spark session. I set the task timeout to "0.02:00:00" (2 hours).
However, since August 1, 2025, I have encountered around 4–5 cases where the Spark session gets stuck in a running state (e.g., for 39 hours) and is not force-stopped after the 2-hour timeout as expected.
Has anyone experienced the same issue, or is there a known workaround/fix?
Thank you in advance for your support and guidance!
@nguyenhieubis: This looks like a known issue in Fabric, particularly with the high-concurrency Spark runtime. The task timeout is enforced only at the orchestration layer: the pipeline stops waiting for the notebook to complete, but it does not forcefully terminate the underlying Spark session. As a workaround, you can set an explicit timeout inside the notebook itself.
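For illustration, here is a minimal sketch of such an in-notebook watchdog, assuming a Fabric notebook where the built-in `mssparkutils` object is available (newer runtimes expose the same call via `notebookutils.session.stop()`). The timeout value and the `run_etl` function are placeholders, not part of the original post; the 2-hour limit simply mirrors the pipeline's task timeout.

```python
import threading

# Assumption: mirror the pipeline's 0.02:00:00 task timeout (2 hours).
TIMEOUT_SECONDS = 2 * 60 * 60

def _kill_session():
    # Force-stop the Spark session so it cannot linger in a running state
    # long after the work should have finished.
    mssparkutils.session.stop()

# Arm a daemon timer before starting the real work; a daemon thread will
# not keep the notebook process alive on its own.
watchdog = threading.Timer(TIMEOUT_SECONDS, _kill_session)
watchdog.daemon = True
watchdog.start()

try:
    run_etl()  # hypothetical placeholder for the notebook's actual work
finally:
    # Work finished (or failed) in time: disarm the watchdog so the
    # session is not stopped out from under a healthy run.
    watchdog.cancel()
```

One caveat with this approach: in high-concurrency mode the session is shared, so stopping it from one notebook can affect the other notebooks attached to the same session. Test whether that trade-off is acceptable before relying on it.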
Thanks!