Hi there,
We have several pipelines where we execute multiple notebooks in series. We have activated high concurrency mode for the notebooks in the pipeline and specified the same session_tag for all of them. However, when we execute the pipeline, it seems that this configuration is not working as intended. The first and third notebooks are calling the same code with different parameter values, as they are only generating logs of the execution (a couple of insert statements), while the Process notebook is responsible for data manipulation.
When we check the output of each step, we can see in their SparkMonitoringURL that the time consumed for the three executed notebooks consists of both queued duration and running duration. We expected that the second and third notebooks would start immediately after the previous one concluded, and that the logging processes in the third notebook would take less time since they involve very basic SQL statements and the environment was already up.
We may be missing or misunderstanding something. What could we do to improve the performance of our pipelines?
Some notes:
Notebooks 1 and 2 create the records in Lakehouse_Log.
Notebook 3 creates the records in Lakehouse_Data.
Workspace licence: Fabric capacity.
Hi @djbc1986
We are following up once again regarding your query. Could you confirm whether your query has been resolved? If so, kindly mark the helpful response and accept it as the solution to assist other community members in resolving similar issues more efficiently.
Thank You.
Hi @djbc1986
I wanted to check if you had the opportunity to review the information provided. Please feel free to contact us if you have any further questions. If my response has addressed your query, please accept it as a solution and give a 'Kudos' so other members can easily find it.
Thank you.
Hi @djbc1986 ,
Thanks for sharing the detailed explanation and screenshots—they’re very helpful for diagnosing the issue.
What you’re experiencing is a common challenge when using high concurrency mode in notebook pipelines, especially in Fabric environments. Based on your screenshots, a significant share of each notebook’s total duration is spent queued rather than running, even though the activities share a session tag.
A few suggestions to improve this:
Session Reuse:
Double-check that all notebooks in the pipeline are using the exact same session_tag and that session sharing is supported in your environment. In some cases, session reuse is only possible when the notebooks are set up in a very specific way, and minor differences in configuration (such as different libraries or environment settings) can prevent session reuse.
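As an illustration only, each notebook activity in the pipeline should carry the identical tag. The sketch below mimics an exported pipeline definition; treat the property names (`sessionTag`, the `TridentNotebook` type, the placeholder GUIDs) as assumptions rather than a verified Fabric schema:

```json
{
  "name": "Log_Start",
  "type": "TridentNotebook",
  "typeProperties": {
    "notebookId": "<notebook-guid>",
    "workspaceId": "<workspace-guid>",
    "sessionTag": "shared-etl-session"
  }
}
```

If even one activity uses a different tag (or omits it), that notebook will request its own session and pay the full cold-start cost.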
Cluster Startup and Allocation:
The queue times are often related to cluster allocation or startup delays. If your workspace is running at or near Fabric capacity limits, there may not be enough resources available to start all notebooks at once—even with high concurrency. Check your Fabric capacity metrics and consider scaling up if you see frequent queuing.
Pipeline Structure:
If your logging notebooks (1 and 3) are lightweight and don’t need to wait for the main processing, you could potentially restructure the pipeline so that these steps run in parallel rather than strictly in sequence, reducing overall wait time.
Resource Release:
Ensure that notebooks are releasing resources properly at the end of each run (e.g., closing Spark sessions if not needed), so queued steps aren’t waiting for resource cleanup.
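On the resource-release point, here is a minimal sketch of explicitly stopping the shared session from the final notebook in the chain so it does not linger and hold capacity. The helper name is ours; it assumes the Fabric runtime exposes `mssparkutils` via the `notebookutils` package. Note that stopping the session any earlier would defeat session reuse for the remaining notebooks, so this belongs only at the very end:

```python
def stop_session_if_available() -> bool:
    """Stop the shared Spark session when running inside a Fabric notebook.

    Returns True if a stop was issued, and False when the Fabric utilities
    are not importable (e.g. when running locally), which makes the helper
    safe to call outside the platform.
    """
    try:
        # Fabric/Synapse notebook runtimes ship this module; plain Python does not.
        from notebookutils import mssparkutils
    except ImportError:
        return False
    mssparkutils.session.stop()  # release the session so capacity frees up
    return True
```

Calling this as the last cell of the final notebook means queued downstream work (in other pipelines) is not waiting on idle-session cleanup.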
Microsoft Docs & Support:
There are some known quirks with session_tag/session reuse in Fabric and Synapse environments. I recommend checking the latest Microsoft documentation and possibly raising a support ticket if you suspect a platform limitation.
Summary:
The main bottleneck here seems to be session or resource allocation rather than actual processing time. Focus on verifying session_tag consistency, monitoring Fabric capacity, and considering pipeline restructuring for lightweight steps.
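One way to sanity-check whether session reuse is actually kicking in is to compare queued versus running time per activity: with a warm shared session, the second and third notebooks should show a queued share near zero. A small hypothetical helper (the function name, dictionary layout, and the example durations are all invented for illustration, not a Fabric API):

```python
def queued_share(queued_seconds: float, running_seconds: float) -> float:
    """Fraction of an activity's total duration spent waiting in the queue."""
    total = queued_seconds + running_seconds
    return queued_seconds / total if total else 0.0

# Durations read manually from each activity's Spark monitoring page.
activities = {
    "Log_Start": (95.0, 12.0),   # (queued, running) in seconds
    "Process":   (90.0, 240.0),
    "Log_End":   (88.0, 10.0),
}
for name, (q, r) in activities.items():
    print(f"{name}: {queued_share(q, r):.0%} queued")
```

If the queued share stays high for the later notebooks, session reuse is not happening and the configuration (tag, pool, reuse option) is the place to look.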
Let us know if adjusting these settings helps, or if you have any follow-up findings!
Good luck!
Hi @djbc1986
Welcome to the Microsoft Fabric Community Forum.
To address performance issues in Microsoft Fabric pipelines, ensure all prerequisites for effective Spark session reuse are met:
- Confirm that all notebooks in the pipeline use the same Spark pool, as session reuse is only possible within a shared pool context.
- Each notebook activity must explicitly enable session reuse by setting the same session_tag and selecting the option to reuse an existing Spark session in the activity's advanced settings.
- Run the notebooks sequentially, with pipeline dependencies enforcing the order, to avoid session conflicts.
- Minimize idle time between executions to prevent session termination.
- Optionally, add an initialization notebook at the start of the pipeline to pre-warm the Spark environment.
- Monitor Fabric capacity utilization to ensure sufficient resources are available, reducing queue times and execution delays.
These practices will improve Spark efficiency and reduce delays.
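To enforce the sequential order mentioned above, each downstream notebook activity declares a dependency on the previous one. The `dependsOn` shape below follows the common Data Factory-style pipeline JSON; treat it as an illustrative sketch of the exported definition (the activity names and the `TridentNotebook` type are assumptions), not a verified Fabric schema:

```json
{
  "name": "Process",
  "type": "TridentNotebook",
  "dependsOn": [
    {
      "activity": "Log_Start",
      "dependencyConditions": [ "Succeeded" ]
    }
  ]
}
```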
For reference, please go through the Microsoft official documents below:
Configure high concurrency mode for notebooks in pipelines - Microsoft Fabric | Microsoft Learn
Configure high concurrency mode for notebooks - Microsoft Fabric | Microsoft Learn
Concurrency limits and queueing in Apache Spark for Fabric - Microsoft Fabric | Microsoft Learn
If this response resolves your query, kindly mark it as Accepted Solution to help other community members. A Kudos is also appreciated if you found the response helpful.
Thank you for being part of Fabric Community Forum.
Regards,
Karpurapu D,
Microsoft Fabric Community Support Team.