We want to receive email notifications for any failed scheduled items across our workspace. These items include:
Pipeline-level notifications
Activators for failed pipeline items
Workspace-level Eventstream + Activator
A universal, scalable solution that:
Thank you!
Hi @kkoc3,
We would like to confirm whether the answer from our community members resolves your query or if you need further help. If you still have any questions or need more support, please feel free to let us know. We are happy to help you.
Thank you for your patience; we look forward to hearing from you.
Best Regards,
Prashanth Are
MS Fabric community support
Hi @kkoc3,
Thanks for actively participating in the Fabric community.
@MJParikh, thanks for your prompt response here.
Please refer to the community resource below; hopefully it helps you get started with your requirements. Let me know if it helps: https://community.fabric.microsoft.com/t5/Webinars-and-Video-Gallery/Fabric-Monday-63-Execution-Aler...
Thanks,
Prashanth
In my opinion, to receive email alerts for any failed scheduled item (pipelines, notebooks, Dataflows Gen2) across a workspace in a universal, low-maintenance, and robust way, here are the best approaches and insights from current community experience and technical sources:
Use Data Activator for Workspace-Level Failure Alerts
Microsoft Fabric's Data Activator can capture failure events from pipelines, notebooks, and dataflows.
Set up workspace-level alerts by creating rules that detect run failures across all scheduled items.
Alerts can be triggered centrally and sent via email or Microsoft Teams.
This offers a centralized, scalable monitoring and notification system without needing individual pipeline configurations.
Example: Configure a Data Activator rule that filters for failed activities in the entire workspace, then sends notifications accordingly.
(Reference: Fabric community forum discussions and tutorials on Data Activator usage for execution alerts)
Use Workspace Event Stream (if reliable)
Capturing failure events from the workspace event stream is theoretically possible.
Requires correct configuration to listen to all failure events across pipelines, notebooks, and dataflows.
Can be wired to an Azure Function or Power Automate flow that sends email notifications (a filtering sketch follows this list).
Note: This approach reportedly may have unstable behavior and might need troubleshooting or feature updates.
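For the Azure Function / custom consumer route above, the core of the work is simply filtering incoming workspace item events down to failed runs before alerting. Here is a minimal sketch of that filtering logic; the payload field names used below (data, jobStatus, itemName, itemKind, jobEndTimeUtc) are assumptions for illustration only, so inspect a real event from your Eventstream and adjust them.

```python
# Minimal filtering sketch for a custom consumer of workspace item events.
# NOTE: all payload field names here are assumptions -- verify against an
# actual event delivered by your Eventstream before relying on them.
import json

def is_failed_run(raw_event: str) -> bool:
    """Return True when an incoming event looks like a failed scheduled run."""
    event = json.loads(raw_event)
    payload = event.get("data", event)            # details are often nested under "data"
    status = payload.get("jobStatus") or payload.get("status") or ""
    return status.lower() == "failed"

def failure_summary(raw_event: str) -> str:
    """Build one human-readable line per failure for the email or Teams body."""
    payload = json.loads(raw_event).get("data", {})
    return (f"{payload.get('itemKind', 'item')} '{payload.get('itemName', 'unknown')}' "
            f"failed at {payload.get('jobEndTimeUtc', 'unknown time')}")
```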
Central Logging + Notification Pipeline
Create a dedicated logging pipeline that all scheduled items report status into (status logs, success/failure).
This pipeline checks logs periodically and triggers email alerts if any failure is logged.
Can be automated with a notebook or script within the pipeline environment that queries run statuses via the REST API (see the sketch after this list).
This is more robust but slightly more complex to implement initially, as it needs the logging system setup.
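To make the "notebook or script querying run statuses via API" idea concrete, below is a minimal sketch for a Fabric notebook. It assumes the semantic-link (sempy) package is available in the notebook environment, that the Job Scheduler REST endpoints shown still match the current API version, and that the response fields (value, status, endTimeUtc) are as documented; the workspace ID is a placeholder, so verify everything against the official REST docs before using it.

```python
# Minimal monitoring sketch for a Fabric notebook (not a definitive implementation).
# Assumes semantic-link (sempy) is installed and the endpoint paths / field names
# below still match the Fabric REST API -- check the current documentation.
import sempy.fabric as fabric

WORKSPACE_ID = "<your-workspace-id>"          # placeholder
client = fabric.FabricRestClient()

# 1. List every item in the workspace.
items = client.get(f"/v1/workspaces/{WORKSPACE_ID}/items").json().get("value", [])

# 2. Collect recent job instances that ended in failure.
failures = []
for item in items:
    resp = client.get(f"/v1/workspaces/{WORKSPACE_ID}/items/{item['id']}/jobs/instances")
    if resp.status_code != 200:               # some item types have no job history
        continue
    for run in resp.json().get("value", []):
        if run.get("status") == "Failed":
            failures.append(f"{item.get('displayName')} ({item.get('type')}): "
                            f"failed at {run.get('endTimeUtc')}")

# 3. Hand the list to whatever alerting step you use (Outlook activity in a
#    pipeline, a Teams webhook, or a row appended to a log table).
for line in failures:
    print(line)
```

You could schedule this notebook itself and have it raise an exception when failures are found, so that a single alert on this one item effectively covers the whole workspace.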
Power Automate / Logic Apps Integration
Use Power Automate flows or Azure Logic Apps to listen to run status change events or API query results.
On failure, send emails or notifications (a minimal notification sketch follows this list).
This is flexible and integrates well into Microsoft ecosystems but still requires some setup and API knowledge.
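Power Automate and Logic Apps are built in their visual designers rather than in code, but if you end up sending the notification from a notebook or a small function instead, the delivery step can be as simple as posting to a Teams incoming webhook. A minimal sketch, assuming you have created an incoming webhook for the target channel (the URL below is a placeholder):

```python
# Notification step only: post a consolidated failure summary to a Teams
# incoming webhook. The webhook URL is a placeholder -- create one for the
# target channel and paste it here.
import requests

TEAMS_WEBHOOK_URL = "https://example.webhook.office.com/..."   # placeholder

def notify_failures(failures: list[str]) -> None:
    """Send one message listing all failed runs; do nothing if there are none."""
    if not failures:
        return
    payload = {"text": "Failed scheduled items:\n" + "\n".join(f"- {f}" for f in failures)}
    resp = requests.post(TEAMS_WEBHOOK_URL, json=payload, timeout=30)
    resp.raise_for_status()
```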
Avoid configuring failure alerts individually per pipeline, especially if there are many scheduled items.
Activators are good for pipelines but may not cover notebooks or dataflows.
Relying solely on manual scripting or checking statuses isn't scalable.
Explore setting up Data Activator rules in your workspace for failure events on all scheduled items.
If Data Activator is insufficient, combine it with Power Automate or Logic Apps for custom notification workflows.
Consider building a central logging pipeline or notebook that collects error details and triggers alerts.
Monitor community updates for improved workspace-level native alerting features.
Hi
Thanks for your comment.
Can you show me some posts, guides, etc. for the option below? I am unable to find anything, and as I outlined in my initial post, it did not work for me. It did not work because I was not getting any failure results at the workspace level even though things failed.
Use Data Activator for Workspace-Level Failure Alerts
Hi @kkoc3,
Good question, and I understand the frustration. Right now, there isn’t a single, fully plug-and-play way to capture all failed runs at the workspace level. Data Activator does support workspace item events, but there are a few nuances that often trip people up.
Data Activator needs to be wired to the Fabric Workspace Item Events feed.
That feed emits run events (success, failure, in-progress) for pipelines, notebooks, and dataflows.
If no failures appeared, usually it’s because the reflex wasn’t filtering on the right event property or wasn’t subscribed to the correct workspace scope. Sometimes the event stream feels incomplete if not configured precisely.
Go to Get Data → Real-Time Hub → Fabric Workspace Item Events and connect your workspace.
Build a Reflex on that stream in Data Activator.
Create a rule such as:
Condition: Status = Failed (or the equivalent field for run state).
Scope: your target workspace.
Attach an action (email, Teams) so that when any item fails, you get notified.
A useful resource: Monitoring Fabric Workspace Item Events with Data Activator – a step-by-step blog with screenshots.
If you still see gaps, pair Data Activator with a central logging pipeline:
Each scheduled item writes its status into a log table.
A simple monitoring pipeline or Power Automate flow checks that table and sends an alert if a failure is logged (sketched below).
This ensures nothing is missed, even if the event stream skips something.
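For illustration, here is a minimal sketch of that monitoring check as a Fabric notebook cell. It assumes each scheduled item appends a row to a Lakehouse table named run_log with item_name, status, and end_time columns; those names are assumptions for this example, not anything Fabric creates for you.

```python
# Minimal log-table check, meant to run on a schedule in a Fabric notebook.
# Assumes a Lakehouse table "run_log" with columns item_name, status, end_time
# that your own pipelines/notebooks append to (these names are assumptions).
from datetime import datetime, timedelta, timezone
from pyspark.sql import functions as F

# `spark` is the session the Fabric notebook runtime already provides.
lookback = datetime.now(timezone.utc) - timedelta(hours=1)   # match your schedule cadence

recent_failures = (
    spark.read.table("run_log")
         .where((F.col("end_time") >= F.lit(lookback)) & (F.col("status") == "Failed"))
)

if recent_failures.count() > 0:
    details = "\n".join(
        f"{row['item_name']} failed at {row['end_time']}" for row in recent_failures.collect()
    )
    # Failing this notebook makes its own alert (Activator rule or Outlook
    # activity in the wrapping pipeline) fire with the details; alternatively,
    # call a Teams webhook or Power Automate flow here instead of raising.
    raise RuntimeError("Failed scheduled items detected:\n" + details)
```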