um4ndr
Advocate I

notebook duration time issue

Hi,

Can anyone explain why the snapshot duration of the process doesn't match the running duration? My process is very simple and doesn't involve any complicated tasks. Most importantly, this does not happen every time.

 

[Screenshot: um4ndr_0-1750074982518.png]

[Screenshot: um4ndr_0-1750075705029.png]

1 ACCEPTED SOLUTION
v-lgarikapat
Community Support

Hi @um4ndr ,

Thanks for reaching out to the Microsoft Fabric Community forum.

 

Thanks for sharing the detailed screenshots. Based on what they show, I've identified a few possible causes and included relevant resources to help you troubleshoot the issue.

 

1. Running Duration (4m 36s)
This is as expected: your code itself executed quickly.
2. Snapshot Duration (1h 4m 53s)
This includes idle session time, which happens when:
- The notebook cells were executed, but the session was left open.
- The Spark session was not explicitly stopped (spark.stop()).
- The notebook was left idle, and the Fabric backend eventually auto-closed the session (session timeout).
3. Total Duration (6m 12s)
This is within expectations (queued time + run time).
4. Pipeline Duration (1h 5m 24s)
This matches the snapshot duration because the pipeline waits for the notebook to finish, including the idle time.

If your Spark session or notebook finishes and disconnects cleanly, the durations are short.
If the notebook is left open, or a timeout is triggered (as with your timeout_minutes = 60), the session runs until the timeout, which explains the long duration.
The Spark session may also hold on to idle resources for longer, depending on cluster load or driver behaviour.
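A note on that timeout: timeout_minutes = 60 looks like a parameter of your own pipeline or notebook, so how it is applied isn't visible in the screenshots. If it ultimately drives a child-notebook call through mssparkutils.notebook.run, keep in mind that that API takes its timeout in seconds. A minimal sketch, with a hypothetical notebook name and parameters:

from notebookutils import mssparkutils

# Hypothetical child-notebook call. The second argument is the
# timeout in seconds; a value near the expected runtime (about
# 10 minutes here) keeps a stuck run from holding the session
# for a full hour.
result = mssparkutils.notebook.run(
    "ChildNotebook",             # hypothetical notebook name
    10 * 60,                     # timeout in seconds
    {"run_date": "2025-06-16"},  # hypothetical parameters
)
print(result)  # whatever the child notebook passed to notebook.exit()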

 

 

Below are a few troubleshooting steps:

1. Explicitly stop the Spark session
Add this at the end of your notebook/script:
spark.stop()
This ends the session right after execution, avoiding long idle periods. (A combined sketch of steps 1 and 3 follows this list.)
2. Reduce the timeout parameter
You're passing timeout_minutes = 60. Lower this to reduce idle wait times:
timeout_minutes = 10 (or a value closer to the expected runtime)
3. Log the notebook exit
Use logging and a notebook exit call such as:
from notebookutils import mssparkutils
mssparkutils.notebook.exit("Completed")
4. Check the pipeline configuration
In Fabric pipelines:
- Ensure "Wait for notebook to complete" is configured.
- Review activity timeout settings; they may inherit session timeouts.
5. Cluster or Fabric load
On some occasions, backend cluster load or resource-allocation delays can extend session lifetimes. Monitor this from the Spark History Server for deeper analysis.
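
Putting steps 1 and 3 together, the tail end of the notebook could look like the minimal sketch below. It assumes the spark session object that Fabric notebooks expose by default, and the exit string "Completed" is just an example value for the calling pipeline to read.

from notebookutils import mssparkutils

# ... actual workload above this point ...

# Step 1: stop the Spark session explicitly so it does not sit
# idle until the session timeout fires.
spark.stop()

# Step 3: exit the notebook with an explicit status; the string is
# surfaced to the calling pipeline or notebook as the exit value.
mssparkutils.notebook.exit("Completed")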

 

I've included previously resolved threads and learning documents below; they may help you resolve the issue:

Solved: Re: Stopping Spark Session inside/outside ForEach - Microsoft Fabric Community

Solved: Re: Notebook Status - Stopped (Session Timed Out) - Microsoft Fabric Community

Workspace administration settings in Microsoft Fabric - Microsoft Fabric | Microsoft Learn

Develop, execute, and manage notebooks - Microsoft Fabric | Microsoft Learn

Notebook activity - Microsoft Fabric | Microsoft Learn

 

If this post helped resolve your issue, please consider marking it as the Accepted Solution. This not only acknowledges the support provided but also helps other community members find relevant solutions more easily.

We appreciate your engagement and thank you for being an active part of the community.

Best regards,
LakshmiNarayana


5 REPLIES
um4ndr
Advocate I

I have another question. Here is a visual overview of my pipeline.

[Screenshot: um4ndr_1-1750146205239.png]

 

My questions are:
What is the best practice for using a session mechanism with this structure?
Is session start time included in the CU calculation?
What is the right way to configure high concurrency for a pipeline running multiple notebooks?

Thanks a lot, @um4ndr ! That was a very informative and helpful post. I appreciate you sharing the best practices; they definitely gave me better clarity on the Fabric notebook session mechanism.

 

Best Regards,

Lakshmi Narayana


Thank you very much. Very helpful content.
