Ali_Cruz
Regular Visitor

How can I kill a Spark query that has been running for a long time in Fabric?

I found a query in the Spark UI that has been running for 20 hours, but I could not find a way to kill it. The query is not showing any Running Job IDs, and the previous job associated with the query has FAILED status. Here is the query that I saw in the Spark UI:

Ali_Cruz_0-1750800236405.png
This is the job associated with the last Sub Execution ID:

Ali_Cruz_1-1750800331026.png

Has anybody experienced a similar situation before? Is there a way to kill a query that is still running in Spark?

9 REPLIES
v-ssriganesh
Community Support

Hello @Ali_Cruz,
I wanted to follow up and see if you had a chance to review the information shared. If you have any further questions or need additional assistance, feel free to reach out.
Thank you.

v-ssriganesh
Community Support

Hello @Ali_Cruz,

Just checking in: have you been able to resolve this issue? If so, it would be greatly appreciated if you could mark the most helpful reply accordingly. This helps other community members quickly find relevant solutions.
Please don’t forget to “Accept as Solution” and Give “Kudos” if the response was helpful.
Thank you.

Ali_Cruz
Regular Visitor

Hi @nilendraFabric 

 

When I checked the Monitoring hub, there was nothing currently running.

Ali_Cruz_0-1751031554159.png

I assumed the query was left behind when the process was cancelled due to a timeout issue. Is there a way to restart the Spark instance in Fabric?

 

Hello @Ali_Cruz,
Thank you for reaching out to the Microsoft Fabric Community and confirming that nothing is listed as running in the Monitoring hub. You’re exactly right: this typically happens when a process times out or is force-cancelled, leaving a “ghost” entry behind in the Spark UI.

Regarding your question about restarting the Spark instance in Fabric:

In Microsoft Fabric, you don’t restart Spark pools directly the way you might with Synapse Spark pools.
Instead:

  • Spark compute resources are managed automatically by Fabric’s runtime.
  • When you submit a new job or notebook command, Fabric provisions a fresh session.
  • Any “stuck” UI entries don’t impact new sessions; you can safely ignore them.
  • Usually, the Spark UI metadata will eventually expire and disappear automatically. If you want to be sure, close any notebooks or interactive sessions you had open, then start a fresh notebook or submit a new job; this will spin up a clean new session.
  • You do not need to do anything special to restart Spark manually in Fabric.

If the issue persists or you see resource constraints, I’d recommend:

  • Stopping all interactive sessions in the workspace (closing notebooks).
  • Waiting a few minutes to allow Fabric to recycle resources.
  • Launching a new notebook or Spark job.
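If you prefer to cancel the underlying job instance programmatically, the Fabric REST API also exposes a cancel endpoint for item job instances. The sketch below is hedged: it only builds the request URL, the workspace, item, and job-instance IDs are placeholders, and the exact endpoint shape should be verified against the current Fabric REST API reference before use.

```python
# Base URL of the Microsoft Fabric REST API (v1).
FABRIC_API = "https://api.fabric.microsoft.com/v1"

def cancel_job_instance_url(workspace_id: str, item_id: str, job_instance_id: str) -> str:
    """Build the URL for Fabric's 'cancel item job instance' endpoint.

    All three IDs are placeholders you would read from the Monitoring hub
    or from a prior API call; this helper name is our own, not a library API.
    """
    return (
        f"{FABRIC_API}/workspaces/{workspace_id}"
        f"/items/{item_id}/jobs/instances/{job_instance_id}/cancel"
    )

# With an AAD bearer token you would POST to this URL, for example:
#   import requests
#   requests.post(cancel_job_instance_url(ws_id, item_id, ji_id),
#                 headers={"Authorization": f"Bearer {token}"})
```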

Thank you, @nilendraFabric for sharing valuable insights.

If this information is helpful, please “Accept as solution” and give a "kudos" to assist other community members in resolving similar issues more efficiently.
Thank you.

Hello @Ali_Cruz,
I hope the information provided has been useful. Please let me know if you need further clarification or would like to continue the discussion.
If your question has been answered, please “Accept as Solution” and Give “Kudos” so others with similar issues can easily find the resolution.
Thank you.

Ali_Cruz
Regular Visitor

Hi @nilendraFabric 

I checked the Monitoring hub, but there is nothing in progress.

Ali_Cruz_0-1751030983190.png

So I assume the query is something left behind when the job was cancelled because of a timeout. Is there any way to restart Spark in Fabric?

 

 

Ali_Cruz
Regular Visitor

Hi @nilendraFabric

Yes, I checked in the Monitoring hub, but there is nothing currently running. I just saw the query running in the Spark UI; I assume that query was left behind when the job was terminated due to a timeout.

nilendraFabric
Super User

Hi @Ali_Cruz 

 

Did you check in the Monitoring hub?

 

1. Navigate to the Monitoring hub in your Fabric workspace.
2. Locate the activity that’s been running for an extended period (e.g., your 20-hour query).
3. Click “Cancel” next to the activity and confirm the action.
4. Wait 2–3 minutes for the system to terminate the process.

 

No visible Job IDs? This indicates a zombie state. Focus on the Monitoring hub method above.
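If the zombie query still belongs to an interactive session you can reach from a notebook, Spark's own cancellation API is another option. This is a minimal sketch, assuming you run it inside the same Fabric notebook session that owns the query (`spark` is the SparkSession Fabric injects into notebooks); `cancel_stuck_jobs` is a hypothetical helper name, not a Fabric API.

```python
def cancel_stuck_jobs(sc):
    """Ask Spark to cancel every active job on this context.

    `sc` is a SparkContext (or any object exposing cancelAllJobs).
    cancelAllJobs() is a standard Spark API and is a no-op when
    nothing is actually running, so calling it is safe.
    """
    sc.cancelAllJobs()
    return "cancel requested"

# Inside a Fabric notebook you would call:
#     cancel_stuck_jobs(spark.sparkContext)
```

Note this only works from the session that owns the jobs; it cannot reach a session that has already timed out, which is why the Monitoring hub remains the primary tool.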

 

