um4ndr
Advocate I

Fabric notebook session mechanism - best practices

Here is a visual guide to my pipeline.

 

[Pipeline diagram: um4ndr_0-1750230645546.png]

 

My questions: What is the best practice for using the session mechanism with this structure?
Is session start time included in the CU calculation?
What is the right way to configure high concurrency for a pipeline running multiple notebooks?

 

1 ACCEPTED SOLUTION
burakkaragoz
Community Champion

Hi @um4ndr ,

 

Great pipeline structure! Here are some best practices and clarifications for session management and concurrency in Microsoft Fabric notebooks:

  1. Best Practices for Session Mechanism:
  • Use session reuse: Where possible, reuse the same notebook session for dependent steps within your pipeline. This reduces overhead and speeds up execution, especially if you have multiple steps accessing the same Spark context or data.
  • Avoid unnecessary session holds: Only keep sessions alive for as long as needed. Close sessions after critical tasks to free up resources.
  • Semaphore/flag usage: As in your diagram, use semaphore logic to manage notebook states and dependencies, so that parallel execution does not overload your capacity or cause race conditions.
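The semaphore pattern from the diagram can be sketched in plain Python. This is an illustrative sketch only: `run_notebook` is a hypothetical stand-in for whatever call actually triggers a notebook run in your pipeline, and the semaphore caps how many runs are in flight at once.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Cap concurrent notebook runs so parallel steps cannot exhaust capacity.
MAX_PARALLEL = 3
slots = threading.Semaphore(MAX_PARALLEL)

def run_notebook(name: str) -> str:
    # Hypothetical stand-in for the real notebook-run call.
    return f"{name}: done"

def run_with_semaphore(name: str) -> str:
    with slots:  # blocks until a slot is free; released on exit
        return run_notebook(name)

notebooks = [f"nb_{i}" for i in range(6)]
with ThreadPoolExecutor(max_workers=len(notebooks)) as pool:
    results = list(pool.map(run_with_semaphore, notebooks))
```

Even though six workers are started, at most three ever execute `run_notebook` simultaneously; the rest block on the semaphore until a slot frees up.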
  2. Is Session Start Time Included in CU (Capacity Unit) Calculation?
  • Yes, session start time is included in CU billing. The entire lifespan of the session—from when it starts until it is explicitly closed or times out—counts towards your Capacity Unit (CU) consumption. This includes any idle/wait time if the session is held open.
  • For cost efficiency, always close notebook sessions as soon as their work is done.
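As a rough illustration of why this matters (this is not official billing logic, just the proportionality described above), the cost of a session scales with its full lifespan, idle time included:

```python
def session_cu_seconds(active_s: float, idle_s: float, cu_rate: float) -> float:
    """Approximate CU-seconds for a session: the whole lifespan is billed,
    so idle time while the session is held open costs the same rate."""
    return (active_s + idle_s) * cu_rate

# A 10-minute job whose session then idles for 20 minutes before closing
# consumes three times the CU of the job alone.
busy = session_cu_seconds(600, 0, 2)
held = session_cu_seconds(600, 1200, 2)
```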
  3. Configuring High Concurrency for Multiple Notebooks:
  • Adjust concurrency settings: In the pipeline or notebook activity settings, set the maximum concurrency level based on your workspace’s CU limits and the expected workload.
  • Monitor resource usage: Use the Fabric monitoring tools to check for bottlenecks or CU saturation. If you hit resource limits, consider staggering notebook runs or increasing your workspace capacity.
  • Optimize notebook code: Ensure notebooks are optimized for parallel execution—avoid global state, minimize data shuffling, and use partitioning where appropriate.
  • Use dataflows where possible: For independent data transformations, utilize dataflows to offload some work from notebooks and increase overall throughput.
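Staggering runs under a fixed concurrency ceiling can be sketched with a bounded worker pool; here `max_workers` plays the role of the activity-level concurrency setting, and `transform` is a hypothetical per-partition task kept free of shared global state so tasks can complete in any order:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def transform(partition: int) -> int:
    # Hypothetical per-partition work; no shared global state, so
    # tasks may finish in any order without interfering.
    return partition * partition

partitions = range(8)
# At most 4 tasks in flight at once; the rest queue until a worker frees up.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(transform, p) for p in partitions]
    results = sorted(f.result() for f in as_completed(futures))
```

Raising `max_workers` trades faster wall-clock time for higher peak CU pressure, which is exactly the trade-off to monitor when tuning pipeline concurrency.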

Summary:

  • Reuse sessions where it makes sense, but close them promptly.
  • Session start and hold time are included in CU calculations.
  • Tune concurrency based on your CU capacity, and monitor actual usage to avoid throttling or failures.

If my response resolved your query, kindly mark it as the Accepted Solution to assist others. Additionally, I would be grateful for a 'Kudos' if you found my response helpful.


2 REPLIES
um4ndr
Advocate I
Advocate I

Thank you for your quick and informative response!

