Vikash_Gohil007
New Member

High Concurrency Sessions in Fabric Pipeline

Hello,

I have a Fabric pipeline that contains a ForEach loop activity.

Inside this ForEach loop there is a notebook execution. The notebook takes its input from the ForEach item on each iteration.

The notebook reads a Parquet file, processes the data, and updates Delta tables.

I have also enabled High Concurrency mode for notebooks and pipelines in our workspace.

I am using a custom environment for my notebook execution.

I have also added a warm-up notebook step to start a new session at the beginning of the pipeline.

I have also set session tags for each notebook activity.

When I run the pipeline, the HC session gets created, which takes around 8-10 minutes.

However, session sharing between notebook executions is not consistent. Sometimes 7-8 notebooks use the shared session and execute in less than 1 minute.

Other times, only 1 notebook gets executed in the shared session and a new session gets created for the next notebook execution.

Why is this random behaviour shown in pipeline execution?

Has anybody else faced this issue?

10 REPLIES
v-veshwara-msft
Community Support

Hi @Vikash_Gohil007 ,
Just wanted to check if the responses provided were helpful. If further assistance is needed, please reach out.
Thank you.

v-veshwara-msft
Community Support

Hi @Vikash_Gohil007 ,
Thanks for posting in Microsoft Fabric Community and for sharing your observations.

The pattern you’re seeing aligns with the current design of High Concurrency (HC) mode in Fabric. A single HC Spark session can have up to five concurrently attached notebooks. Once that limit is reached, Fabric spins up a new HC session even if the earlier one is still active. This can look random in pipelines that use ForEach, since each iteration attaches and detaches notebooks quickly - and depending on timing, some iterations reuse the existing session while others trigger a new one.
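The five-slot limit described above can be sketched with a toy scheduler. This is plain Python, not Fabric's actual implementation; the slot count and the spill-over rule are assumptions taken from this thread, used only to show why attach/detach timing makes reuse look random:

```python
# Toy model of High Concurrency (HC) session slots in a pipeline ForEach loop.
# Assumption: one HC session holds at most 5 concurrently attached notebooks;
# when every slot is busy, a new session is created.

MAX_ATTACHED = 5

class HcScheduler:
    def __init__(self):
        self.sessions = []  # each session = set of currently attached notebook ids

    def attach(self, notebook_id):
        """Attach to the first session with a free slot, else create a new one."""
        for idx, attached in enumerate(self.sessions):
            if len(attached) < MAX_ATTACHED:
                attached.add(notebook_id)
                return idx
        self.sessions.append({notebook_id})
        return len(self.sessions) - 1

    def detach(self, notebook_id):
        for attached in self.sessions:
            attached.discard(notebook_id)

# Sequential iterations that detach promptly reuse session 0 every time.
sched = HcScheduler()
for i in range(8):
    session = sched.attach(f"nb-{i}")
    assert session == 0
    sched.detach(f"nb-{i}")

# If earlier notebooks have not detached yet (e.g. the activity is still
# reporting status), the 6th concurrent attach spills into a new session.
sched2 = HcScheduler()
sessions_used = {sched2.attach(f"nb-{i}") for i in range(6)}  # no detach in between
print(sessions_used)  # {0, 1}: the sixth notebook forced a second session
```

Whether a given iteration lands in the warm session or forces a new one therefore depends on how quickly earlier iterations detach, which is why the behaviour appears random from the pipeline's point of view.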

 

If your goal is to keep using the same Spark session across multiple sequential notebook executions, one tested approach is to detach each notebook instead of fully stopping the session at the end of execution. You can do this by adding the following command at the end of your notebook:

notebookutils.session.stop(detach=True)

This call keeps the Spark session alive while freeing the notebook slot, allowing later iterations to reuse the same session.
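One way to make the detach reliable is to put it in a `finally` block so it runs even when the processing cell fails. A minimal sketch: `notebookutils` exists only inside a Fabric notebook, so it is guarded here to keep the snippet self-contained, and `process_item` is a hypothetical placeholder for your Parquet-to-Delta logic:

```python
try:
    import notebookutils  # provided by the Fabric notebook runtime
except ImportError:
    notebookutils = None  # running outside Fabric (e.g. a local test)

def process_item(path: str) -> str:
    # Hypothetical placeholder for the real work: read the Parquet file
    # at `path`, transform it, and merge it into the Delta table.
    return f"processed {path}"

try:
    result = process_item("Files/input/part-0001.parquet")
    print(result)
finally:
    # Detach from the HC session instead of stopping it, so the next
    # ForEach iteration can reuse the same warm session.
    if notebookutils is not None:
        notebookutils.session.stop(detach=True)
```

Without the `finally`, a failed iteration may leave the notebook attached (or tear the session down), and the next iteration then pays the full session start-up cost again.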

A few key points to remember:

  • HC mode allows up to five notebooks attached to one session simultaneously.

  • Use the same session tag across all activities in the pipeline.

  • Detaching notebooks helps maintain session continuity in sequential or loop-based executions.

This workaround has helped in scenarios where pipelines execute multiple notebook iterations sequentially and where session reuse was inconsistent despite identical settings.

 

For more details, please visit this blog: Fabric Notebook Performance Hack: Reuse Spark Sessions

 

References: Configure high concurrency mode for notebooks - Microsoft Fabric | Microsoft Learn

High concurrency mode in Apache Spark compute for Fabric - Microsoft Fabric | Microsoft Learn

 

Hope this helps. Please reach out for further assistance.

Thank you.

Also many thanks to @tayloramy for continued and valuable guidance.

 

Hello, I tried the option of notebookutils.session.stop(detach=True), but I still see the same issue and the notebook execution remains random.

Hi @Vikash_Gohil007 ,

Thanks for confirming and for testing that approach.

Since you’ve already aligned all reuse parameters (same session tag, environment, user identity, and minimal delay between executions) and also tried the detach method, this behavior likely indicates an internal session lifecycle issue rather than a configuration gap.

 

At this point, please collect the session IDs and pipeline run IDs where the reuse pattern breaks. With those details, I’d recommend raising a support ticket so the backend team can review the Spark session logs and confirm why Fabric is intermittently spinning up new sessions despite consistent settings.

 

Thank you.

Hi @Vikash_Gohil007 ,
We wanted to kindly follow up regarding your query. If you need any further assistance, please reach out.

Additionally, could you please confirm whether the issue has been addressed through the support ticket with Microsoft?

If the issue is now resolved, we’d greatly appreciate it if you could share any key insights or the resolution here for the benefit of the wider community.


Thank you.

Vikash_Gohil007
New Member

Hello, thanks for the reply.

There are only 2 notebooks in total in my pipeline: the 1st to warm up a Spark session and the 2nd inside the ForEach loop.

Both the 1st and 2nd notebook activities have the same session tags.

The 1st notebook takes about 9-10 minutes and the 2nd notebook takes less than 1 minute for 8-9 iterations; after that, the earlier session stops and a new session is created by the 2nd notebook, which again takes 9-10 minutes.

However, this new session sometimes executes 3-4 iterations and sometimes just 1 iteration before the session stops.

So the session behaviour is absolutely random.

The same notebook is executed sequentially in the ForEach loop by passing parameters.

So each time the notebook gets executed, the setup is the same: session tag, environment, execution user.

Hi @Vikash_Gohil007

 

Is there an extended period of time between notebook runs? A session will time out and shut itself down after some inactivity. 

 

If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, mark this post as the solution. 

Hello,

Nope, the ForEach loop only sets some variables before the notebook activity executes, so it is hardly a matter of a few seconds between notebook executions.

Hi @Vikash_Gohil007

 

Can you confirm that the notebooks both: 

  • Are run by the same user.
  • Have the same default lakehouse. Notebooks without a default lakehouse can share sessions with other notebooks that don't have a default lakehouse.
  • Have the same Spark compute configurations.
  • Have the same library packages. You can have different inline library installations as part of notebook cells and still share the session with notebooks having different library dependencies.

 If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, mark this post as the solution. 

tayloramy
Community Champion

Hi @Vikash_Gohil007,

 

Fabric will only reuse a High Concurrency (HC) Spark session when all reuse conditions line up. If any condition isn't met (even briefly), the next notebook gets a new session, so sometimes many notebooks share a session while other times only one does.

 

  1. Verify the official reuse conditions for pipelines. All notebooks must be run by the same user identity, live in the same workspace, use the same default Lakehouse, the same Spark compute settings, and compatible libraries, plus share the same session tag in the Notebook activity. See Microsoft's docs: Configure HC for notebooks in pipelines and the HC overview: HC overview.
  2. Pin one session tag everywhere. Use an identical, non-changing value in your warm-up activity and in every downstream Notebook activity (Advanced settings > Session tag). Doc: Notebook activity.
  3. Keep the execution identity consistent. Session sharing is single-user only. If one run uses your identity and another uses a service principal or a different user, Fabric won't reuse the session. Docs: HC mode (single-user boundary).
  4. Avoid idle gaps longer than the session timeout. The default Spark session expiry is ~20 minutes unless you change it in Workspace > Data Engineering > Spark settings. If the session expires between ForEach waves, Fabric will start a new one. See: Workspace admin settings and Billing & session expiration.
  5. Match compute and environment exactly. HC reuse requires matching Spark pool settings and environment image. If one notebook requests different node sizes, autoscale limits, or a different custom environment, Fabric will spin up a new session. Source: Session sharing conditions.
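The checklist above can be turned into a quick diagnostic. A minimal sketch in plain Python (the condition names mirror the checklist, but the field names and the two example configs are hypothetical; in practice you would fill them in from each notebook activity's settings):

```python
# Compare the session-reuse conditions of two notebook activities and report
# which ones differ. Any non-empty result is enough to block HC session reuse.

REUSE_CONDITIONS = [
    "user_identity",
    "workspace",
    "default_lakehouse",
    "spark_compute",
    "environment",
    "session_tag",
]

def reuse_mismatches(config_a: dict, config_b: dict) -> list:
    """Return the reuse conditions on which the two activities differ."""
    return [c for c in REUSE_CONDITIONS if config_a.get(c) != config_b.get(c)]

# Illustrative values only, not read from Fabric.
warmup = {
    "user_identity": "pipeline-owner",
    "workspace": "analytics-ws",
    "default_lakehouse": "lh_main",
    "spark_compute": "medium-pool",
    "environment": "custom-env-v2",
    "session_tag": "hc-shared",
}
# Identical except for the environment: that single difference blocks reuse.
foreach_nb = {**warmup, "environment": "custom-env-v3"}

print(reuse_mismatches(warmup, foreach_nb))  # ['environment']
```

Running this over the warm-up activity and the ForEach activity makes it obvious when a single silently different setting, rather than random behaviour, is what forces the new session.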

If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, mark this post as the solution.
