Hello Community,
We have migrated almost 80% of our workloads from on-prem and Synapse Analytics to Fabric. However, one pain point is monitoring pipeline runs through notifications. In Synapse Analytics, we could monitor using Azure Monitor without needing to add logging at each pipeline level; that capability is not available in Fabric. Is there a plan to launch an alerting and monitoring dashboard? Log Analytics monitoring at the workspace level does not work; it is not capturing any events for the pipelines in the workspace. A manual eventstream for Fabric jobs is at an item level, so it is not feasible to maintain streams for hundreds of pipelines. This is becoming a big challenge. We are not looking to modify each pipeline to add an Outlook notification activity or logging.
Hi @Ayush05-gateway ,
Thank you for the update.
To enable pipeline failure notifications without modifying each pipeline, I would suggest the following workaround pattern:
Main pipelines remain unchanged; no internal logging or notification logic is added.
Create wrapper pipelines whose sole purpose is to invoke the main pipelines using the Invoke Pipeline activity.
Configure the "On fail" path of the Invoke Pipeline activity to trigger an Outlook email notification or log failure details to a central store (e.g., a Lakehouse or KQL database).
Deploy the wrapper as a reusable pattern/template to uniformly monitor existing pipelines without altering their internal structure.
Hope this helps,
Warm Regards,
Chaithra E.
Hi @Ayush05-gateway ,
Thank you for the update.
The What's New? - Microsoft Fabric | Microsoft Learn page mentions a feature called "Workspace monitoring (Preview)", which helps collect logs/metrics from Fabric items. Your feedback can help prioritize this feature in future updates.
As this feature is important to you, please consider voting for an existing idea or submitting a new one at Fabric Ideas - Microsoft Fabric Community
For any new questions or topics, feel free to start a new thread in the Microsoft Fabric Community Forum - we’re here to help.
Thank you for being part of the Microsoft Fabric Community.
@gpalsson Thanks for sharing. I had already explored the API earlier, but since we had developed our own wrappers, I didn’t see much value in creating another workaround. Hopefully, observability will be prioritized soon.
It has been many months with no solution in sight for this. Can you give a rough time estimate on a real solution to this issue? I would like to be able to get a failure overview (well, ideally all runs, not just failures) with the ability to see parameters at the workspace level, and even the tenant level, since we have many, many workspaces and hundreds of pipelines. Our organization is just about ready to move to Databricks because it's impossible to get a proper overview of what is happening like we had in ADF. It cannot be the solution from MS that each pipeline should go through a main pipeline in case of failure. It's very far from ideal, especially when we have many workspaces. Even in the Real-Time hub we have to choose each and every pipeline and create streams and activators for each one. It's an extremely bad solution. At the moment we are forced to query the API for runs and statuses to get any kind of comprehensive idea of runtimes and failures across workspaces. But hilariously (not really), the API is rate limited, so even that has a limitation.
Hi @gpalsson
I agree that it’s been several months without a resolution, and the lack of Azure Monitor integration continues to be a gap — it could have been quite useful for tracking failures.
In our case, since we had to move forward with the migration and needed monitoring, we built custom wrappers around the pipelines. However, this approach isn’t scalable and requires considerable effort to maintain.
I also haven’t seen any progress on enabling these executions to emit monitoring events that could be routed to an event stream — not an ideal solution, but it would at least provide a more complete view of executions, usage trends, and runtimes over time. I’m hopeful the Fabric Platform engineering team will introduce a more robust monitoring solution soon.
@v-echaithra Could you provide some insight into the progress of this feature? Thanks.
You could look into the Fabric API (like we did) and build a tool that fetches all the job IDs from the API, checks their status, and performs some action if a run failed.
https://learn.microsoft.com/en-us/fabric/data-factory/pipeline-rest-api-capabilities
Pretty terrible way of doing it (because it really shouldn't be needed), but it's the only somewhat scalable way we found to monitor our pipelines.
If you know Python, you can build it into a notebook and keep everything inside Fabric.
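A minimal sketch of that notebook approach might look like the following. The job-instances endpoint is from the Fabric pipeline REST API docs linked above, but the workspace/item IDs, the token retrieval, the response field names (`status`, `failureReason`), and the alert action are all assumptions to verify against the documentation, not a tested implementation:

```python
import json
import urllib.request

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def fetch_job_instances(workspace_id: str, item_id: str, token: str) -> list[dict]:
    """Fetch job instances for one pipeline item via the Fabric REST API.

    Endpoint path assumed from the pipeline REST API capabilities docs;
    in a real notebook you would obtain the bearer token via Entra ID.
    """
    url = f"{FABRIC_API}/workspaces/{workspace_id}/items/{item_id}/jobs/instances"
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("value", [])

def failed_runs(instances: list[dict]) -> list[dict]:
    """Keep only failed job instances (status value assumed to be 'Failed')."""
    return [run for run in instances if run.get("status") == "Failed"]

# Offline demo with a hand-made payload shaped like the assumed API response:
sample = [
    {"id": "a1", "status": "Completed"},
    {"id": "b2", "status": "Failed", "failureReason": {"message": "timeout"}},
]
for run in failed_runs(sample):
    # Here you would send an email/Teams alert or write to a central log table.
    print(f"ALERT: job {run['id']} failed: {run['failureReason']['message']}")
```

Looping `fetch_job_instances` over every pipeline in every workspace is what runs into the rate limiting mentioned above, so in practice you would batch the calls and cache results between polls.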
Thanks Chaithra
Hi @Ayush05-gateway ,
Thank you for reaching out to Microsoft Community.
As of now, Microsoft Fabric does not provide a comprehensive monitoring and alerting solution for pipelines that is comparable to Azure Monitor in Synapse Analytics or Azure Data Factory. However, you can try the following workaround.
To enable failure notifications without modifying every pipeline:
Create your main pipeline with no notification or error handling logic.
Create a second wrapper pipeline, whose only purpose is to invoke the main pipeline using the Invoke Pipeline activity.
In the wrapper pipeline, configure the “On fail” output path of the Invoke Pipeline activity to send a notification using an Outlook email activity or other notification mechanism.
Deploy this pattern as a reusable template to wrap existing pipelines, enabling alerts without modifying their internal structure.
This approach helps you receive failure alerts without embedding logging or email steps inside every pipeline.
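As a rough illustration of the "other notification mechanism" option, the wrapper's "On fail" path could call a small notebook that assembles a failure record and appends it to a central log. This is only a sketch: the function names and fields are hypothetical, and the JSON-lines file stands in for what would really be a write to a Lakehouse Delta table or KQL database, with the values coming from the Invoke Pipeline activity's output:

```python
import json
from datetime import datetime, timezone

def build_failure_record(pipeline_name: str, run_id: str, error_message: str) -> dict:
    """Assemble one failure row for a central log (hypothetical schema)."""
    return {
        "pipeline_name": pipeline_name,
        "run_id": run_id,
        "error_message": error_message,
        "logged_at_utc": datetime.now(timezone.utc).isoformat(),
    }

def append_failure(record: dict, log_path: str = "pipeline_failures.jsonl") -> None:
    """Append the record as one JSON line; in Fabric this would instead be
    an append to a Lakehouse table that a dashboard or alert queries."""
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example values; in the wrapper these would be passed in as notebook
# parameters from the Invoke Pipeline activity's error output.
rec = build_failure_record("IngestSales", "run-123", "Copy activity timed out")
append_failure(rec)
```

Because every wrapper writes to the same store, one table ends up covering all pipelines, which is the whole point of keeping the logic out of the individual pipelines.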
Hope this helps.
Best Regards,
Chaithra E.
Hi @v-echaithra
Thank you for your feedback and suggestion. I am doing exactly the same: adding a wrapper pipeline that invokes the main pipeline, configuring a notification on failure, and capturing the failure reason and status from the output. Not an ideal solution, though. I hope Microsoft releases a comprehensive monitoring solution, or provides a way to extract pipeline run data, ideally through an eventstream that can be processed using KQL. Such information could then be used not just for a monitoring dashboard but also to understand run times, performance bottlenecks, etc. Capacity Metrics does provide some of it, but it is more focused on utilization of the capacity. I had thought that the workspace-level setting to enable monitoring, i.e. "Add a monitoring Eventhouse", would capture this information, but it does nothing.
Thanks,
Ayush