Hi, can someone help me understand the notebook run series, or refer me to more detailed documentation? (I have already read the MS Fabric documentation: https://learn.microsoft.com/en-us/fabric/data-engineering/apache-spark-monitor-run-series)
Hi @Jeanxyz , Thank you for reaching out to the Microsoft Community Forum.
@suparnababu8 is correct, the anomaly flag isn’t based on duration alone. The Run Series uses a multivariate detector that looks at patterns across duration, shuffle volume, bytes read/written, executor usage, idle time and task-level skew. A run can look visually normal on the duration chart and still be marked anomalous if one of those other signals diverges from the 30-day baseline enough to pass the detector’s threshold.
The Run Series UI only shows the label; the underlying reason is only visible indirectly. To understand it, you need to open that specific run in the Spark History Server and compare it against a neighbouring run. Look at stage timelines, skewed tasks, sudden spikes in shuffle size, higher executor idle % or unusually uneven task distribution. Those internal differences are the ones that typically produce the anomaly flag, even when the top-level bar looks identical.
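To make the idea concrete, here is a toy sketch of a multivariate check. The real Fabric detector is not public, so this z-score test against a baseline is only a stand-in that shows why a run with a perfectly normal duration can still be flagged: a single divergent metric (here, shuffle volume) is enough. All numbers are made up.

```python
from statistics import mean, stdev

# Metrics tracked per run; names are illustrative, not Fabric's internal schema.
METRICS = ["duration_s", "shuffle_mb", "read_mb", "executor_idle_pct"]

def is_anomalous(run, baseline, threshold=3.0):
    """Flag a run if ANY metric deviates strongly from the baseline runs."""
    for m in METRICS:
        history = [r[m] for r in baseline]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(run[m] - mu) / sigma > threshold:
            return True
    return False

# Hypothetical 30-day baseline: stable duration and shuffle volume.
baseline = [
    {"duration_s": 300 + i % 5, "shuffle_mb": 100 + i % 3,
     "read_mb": 500, "executor_idle_pct": 10 + i % 2}
    for i in range(30)
]

# Duration looks normal on the chart, but shuffle volume has exploded.
suspect = {"duration_s": 301, "shuffle_mb": 900,
           "read_mb": 500, "executor_idle_pct": 11}
print(is_anomalous(suspect, baseline))  # True: flagged despite normal duration
```

The point of the sketch: the duration bar alone would never reveal this anomaly, which is why the answer above sends you to the Spark History Server to inspect the other signals.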
The second part of your question also needs clarification. The Spark configuration panel on the notebook page shows only the workspace’s allocation limits; it does not reflect what your run actually consumed. Actual executor count, actual vCores used, executor lifespan and CPU/idle percentages are only visible in the Spark History Server. Dynamic allocation means your run may have used fewer executors than the upper limit, depending on load. If you want the capacity view (CU spikes by day or by item), that comes from the Fabric Capacity Metrics app, but that app does not expose executor-level detail.
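The limits-vs-actuals distinction can be seen in the executor data the Spark History Server exposes. Open-source Spark serves this via the REST endpoint `GET /api/v1/applications/{appId}/executors`; the sketch below summarises a hypothetical response (the sample payload is made up, not real Fabric output) to show how a run capped at 9 executors may have been granted only 3 under dynamic allocation.

```python
# Summarise what a run ACTUALLY used from history-server executor records.
def summarize_executors(executors):
    """Return actual executor count and total cores, excluding the driver."""
    workers = [e for e in executors if e["id"] != "driver"]
    return {
        "executors_used": len(workers),
        "vcores_used": sum(e["totalCores"] for e in workers),
    }

# Hypothetical response: the panel allowed 1-9 executors, only 3 were granted.
sample = [
    {"id": "driver", "totalCores": 0},
    {"id": "1", "totalCores": 4},
    {"id": "2", "totalCores": 4},
    {"id": "3", "totalCores": 4},
]
print(summarize_executors(sample))  # {'executors_used': 3, 'vcores_used': 12}
```

The same numbers appear in the Executors tab of the history-server UI; the REST route is just a scriptable way to read them.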
Debug apps with the extended Apache Spark history server - Microsoft Fabric | Microsoft Learn
Monitor Apache Spark run series - Microsoft Fabric | Microsoft Learn
What is the Microsoft Fabric Capacity Metrics app? - Microsoft Fabric | Microsoft Learn
Thank you @suparnababu8 for your valuable response.
Hi @Jeanxyz , hope you are doing great. May we know if your issue is solved or if you are still experiencing difficulties. Please share the details as it will help the community, especially others with similar issues.
Hello @Jeanxyz
Anomalies in the Spark run series are flagged based on multiple metrics, not just duration. Anomalies are detected based on:
- Deviation from historical patterns in duration, data size and resource utilization.
- Shifts in execution distribution, such as unusually high idle time.
I would recommend opening the Spark History Server for the anomalous run and comparing stage-level metrics, task distribution and data shuffle patterns across runs. Please go through this thread: What is Spark run series analysis? - Microsoft Fabric | Microsoft Learn
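The run-to-run comparison suggested above can be sketched mechanically: line up per-stage shuffle-read bytes from the anomalous run against a neighbouring normal run and report stages that grew sharply. The stage numbers below are made-up samples, not real history-server output.

```python
def stage_shuffle_diff(normal, anomalous, ratio=2.0):
    """Return stage ids whose shuffle read grew by more than `ratio`x."""
    suspects = []
    for stage_id, new_bytes in anomalous.items():
        old_bytes = normal.get(stage_id, 0)
        if old_bytes and new_bytes / old_bytes > ratio:
            suspects.append(stage_id)
    return suspects

# Hypothetical per-stage shuffle-read bytes for two neighbouring runs.
normal_run    = {0: 50_000_000, 1: 120_000_000, 2: 10_000_000}
anomalous_run = {0: 52_000_000, 1: 610_000_000, 2: 11_000_000}
print(stage_shuffle_diff(normal_run, anomalous_run))  # [1]: stage 1 shuffled ~5x more
```

In practice you would read these numbers from the Stages view of the Spark History Server; the diff just narrows down which stage to inspect first.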
Next, coming to your 2nd question: the Spark configuration panel shows a range like 1-9 executors, but actual usage depends on dynamic allocation and job demand. So please go to the Spark History Server, where you can view the executive summary, stage details, the Environment tab and the Monitoring tab. You can also use the Fabric Capacity Metrics app to view the CUs consumed and which notebooks are consuming them.
Please go through this: Install the Microsoft Fabric capacity metrics app - Microsoft Fabric | Microsoft Learn
Hope this helps you
Thank you!!