Hi,
Can anyone explain why the snapshot duration of my process doesn't match its running duration? The process is very simple and doesn't involve any complicated tasks. Most importantly, this doesn't happen every time.
Hi @um4ndr ,
Thanks for reaching out to the Microsoft Fabric community forum, and for sharing the detailed screenshots. Based on them, I've identified a few possible causes and included relevant resources to help you troubleshoot the issue.
1. Running Duration (4m 36s)
This is correct: your code executed quickly.
2. Snapshot Duration (1h 4m 53s)
This includes idle session time, which happens when:
- The notebook cell was executed, but the session was left open.
- The Spark session was not explicitly stopped (no spark.stop()).
- The notebook was left idle, and the Fabric backend eventually auto-closed the session (session timeout).
3. Total Duration (6m 12s)
This is within expectations (queued time + run time).
4. Pipeline Duration (1h 5m 24s)
This matches the snapshot duration because the pipeline waits for the notebook activity to finish, including the idle time.
In short:
- If your Spark session or notebook finishes and disconnects cleanly, durations are short.
- If the notebook is left open, or a timeout is triggered (as with your timeout_minutes = 60), the session runs until it times out, which explains the long duration.
- The Spark session might also hold or wait on idle resources longer, depending on cluster load or driver behaviour.
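The breakdown above can be sanity-checked with simple arithmetic: subtracting the running duration from the snapshot duration leaves roughly an hour of idle time, which lines up with the timeout_minutes = 60 setting. A minimal Python check, using the figures from your screenshots:

```python
from datetime import timedelta

# Durations reported in the Fabric monitoring view (from the screenshots)
running = timedelta(minutes=4, seconds=36)             # actual code execution
snapshot = timedelta(hours=1, minutes=4, seconds=53)   # includes idle session time

# The gap between the two is the idle session time
idle = snapshot - running
print(f"Idle session time: {idle}")                    # ~1 hour
print(f"Idle minutes: {idle.total_seconds() / 60:.1f}")
```

The idle gap comes out to just over 60 minutes, consistent with the session sitting open until the 60-minute timeout closed it.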
Below are a few troubleshooting steps:
1. Explicitly Stop the Spark Session
Add this at the end of your notebook/script:
spark.stop()
This ends the session right after execution, avoiding long idle periods.
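A defensive variant of this step is to wrap the notebook's work in try/finally so the session is released even when a cell raises an error. Below is a minimal sketch of the pattern; the stub class is only there so it runs outside Fabric, where a real notebook would instead use the preinitialized `spark` SparkSession:

```python
# Sketch of the stop-on-exit pattern. In a real Fabric notebook, `spark`
# already exists as the preinitialized SparkSession; the stub below only
# stands in so the pattern is runnable anywhere.
class _StubSession:
    def __init__(self):
        self.stopped = False

    def stop(self):
        self.stopped = True

spark = _StubSession()  # in Fabric: use the built-in `spark` instead

try:
    # ... your transformations / writes go here ...
    result = "Completed"
finally:
    spark.stop()  # always release the session, even on failure

print(spark.stopped)  # True
```

With this shape, an exception in the work section still triggers spark.stop(), so the session cannot linger until the timeout.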
2. Reduce the Timeout Parameter
You're passing timeout_minutes = 60. Lower this to reduce idle wait time, for example timeout_minutes = 10 or another value closer to the expected runtime.
3. Log the Notebook Exit
Use logging and a notebook exit call, for example:
from notebookutils import mssparkutils
mssparkutils.notebook.exit("Completed")
4. Check the Pipeline Configuration
In Fabric pipelines:
Ensure "Wait for notebook to complete" is configured.
Review activity timeout settings; they may inherit session timeouts.
5. Cluster or Fabric Load
Occasionally, backend cluster load or resource-allocation delays can extend a session's lifetime. Monitor this from the Spark History Server for deeper analysis.
I have included previously resolved threads and learning documents below; they may help you resolve the issue:
Solved: Re: Stopping Spark Session inside/outside ForEach - Microsoft Fabric Community
Solved: Re: Notebook Status - Stopped (Session Timed Out) - Microsoft Fabric Community
Workspace administration settings in Microsoft Fabric - Microsoft Fabric | Microsoft Learn
Develop, execute, and manage notebooks - Microsoft Fabric | Microsoft Learn
Notebook activity - Microsoft Fabric | Microsoft Learn
If this post helped resolve your issue, please consider marking it as the Accepted Solution. This not only acknowledges the support provided but also helps other community members find relevant solutions more easily.
We appreciate your engagement and thank you for being an active part of the community.
Best regards,
LakshmiNarayana
I have another question. This is my pipeline's visual guide.
My questions are: what is the best practice for the session mechanism with this structure? Is session start time included in the CU calculation? And what is the right way to configure high concurrency for a pipeline running multiple notebooks?
Below is an informative answer on the topic
Solved: Fabric notebook session mechanism - best practices - Microsoft Fabric Community
Thanks a lot, @um4ndr ! That was a very informative and helpful post. I appreciate you sharing the best practices; it definitely gave me better clarity on the Fabric notebook session mechanism.
Best Regards,
Lakshmi Narayana
Thank you very much. Very helpful content.