I found a query in the Spark UI that has been running for 20 hours, but I could not find a way to kill it. The query is not showing any Running Job IDs, and the previous job associated with the query has a FAILED status. Here is the query that I saw in the Spark UI:
This is the job associated with the last Sub Execution IDs:
Has anybody experienced a similar situation before? Is there a way to kill a query that is still running in Spark?
Hello @Ali_Cruz,
I wanted to follow up and see if you had a chance to review the information shared. If you have any further questions or need additional assistance, feel free to reach out.
Thank you.
Hello @Ali_Cruz,
Just checking in: have you been able to resolve this issue? If so, it would be greatly appreciated if you could mark the most helpful reply accordingly. This helps other community members quickly find relevant solutions.
Please don’t forget to “Accept as Solution” and Give “Kudos” if the response was helpful.
Thank you.
When I checked the Monitoring hub, there was nothing currently running.
I assume the query was left behind when the process was cancelled due to a timeout issue. Is there a way to restart the Spark instance in Fabric?
Hello @Ali_Cruz,
Thank you for reaching out to the Microsoft Fabric Community and confirming that nothing is listed as running in the Monitoring hub. You’re exactly right: this typically happens when a process times out or is force-cancelled, leaving a “ghost” entry behind in the Spark UI.
Regarding your question about restarting the Spark instance in Fabric:
In Microsoft Fabric, you don’t directly restart Spark pools the way you might in Synapse Spark pools.
Instead:
1. Spark compute in Fabric is session-based: each notebook or Spark job run gets its own session, and the compute is released when that session stops or times out.
2. Stopping the stuck session (from the notebook’s session status indicator, or programmatically, as in the sketch below) gives you a fresh Spark instance on the next run, which is the closest equivalent to a restart.
If the issue persists or you see resource constraints, I’d recommend:
1. Checking capacity utilization in the Fabric Capacity Metrics app to confirm the ghost entry isn’t actually consuming compute.
2. Raising a Microsoft support ticket if the stale entry never clears, since there is no user-facing way to force-kill it.
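A minimal sketch of the programmatic route, assuming the stuck session is still reachable from a notebook you own: `mssparkutils.session.stop()` ends the interactive session and releases its compute, and the next run starts a fresh session.

```python
# Minimal sketch: stop the current interactive Spark session from inside
# a Fabric notebook; mssparkutils is built into the Fabric runtime.
from notebookutils import mssparkutils  # also available pre-imported

mssparkutils.session.stop()
```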
Thank you, @nilendraFabric, for sharing these valuable insights.
If this information is helpful, please “Accept as solution” and give a "kudos" to assist other community members in resolving similar issues more efficiently.
Thank you.
Hello @Ali_Cruz,
I hope the information provided has been useful. Please let me know if you need further clarification or would like to continue the discussion.
If your question has been answered, please “Accept as Solution” and Give “Kudos” so others with similar issues can easily find the resolution.
Thank you.
Hi @nilendraFabric
I checked the Monitoring hub, but there is nothing in progress, so I assume the query is something left behind when the job was cancelled because of a timeout. Is there any way to restart Spark in Fabric?
Hi @nilendraFabric,
Yes, I checked the Monitoring hub, but there is nothing currently running. I just saw the query in the Spark UI, so I assume it was left behind when the job was terminated due to a timeout.
Hi @Ali_Cruz
Did you check in the Monitoring hub?
1. Navigate to the Monitoring hub in your Fabric workspace.
2. Locate the activity that’s been running for an extended period (e.g., your 20-hour query).
3. Click “Cancel” next to the activity and confirm the action.
4. Wait 2–3 minutes for the system to terminate the process.
No visible Job IDs? This indicates a zombie state. Focus on the Monitoring hub method above.
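If the session is still reachable from a notebook, a hedged alternative is Spark’s own cancellation API. These are standard PySpark SparkContext calls, not Fabric-specific, and the job-group id below is purely hypothetical:

```python
# Sketch: cancel stuck work from inside the same Spark session.
# cancelAllJobs() and cancelJobGroup() are standard SparkContext APIs.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext

# Cancel every active job in this session (blunt, but clears zombies).
sc.cancelAllJobs()

# Finer-grained variant: if the work was started under a job group,
# cancel only that group. "long-etl" is a hypothetical group id.
# sc.setJobGroup("long-etl", "nightly load")   # set before starting work
# sc.cancelJobGroup("long-etl")
```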