Build Event-Driven Data Pipelines in Microsoft Fabric

Today’s organizations demand real-time responsiveness from their analytics platforms. When data processing relies on scheduled job runs, insights and actions are delayed, and decisions are based on stale data. Whether your data lands in Azure Blob Storage or OneLake, it should be processed the moment it arrives to ensure timely decisions and continuous data freshness. Fabric events and Azure events make that possible by enabling event-driven data workflows that react in real time to new data, without manual triggers or schedules.

 

In this blog, you’ll learn how to configure an event-driven data pipeline that is triggered automatically when a new file lands in OneLake or Azure Blob Storage, and then ingests and transforms that file.

 

Why Event-Driven Workflows?

Fabric jobs, like data pipelines and notebooks, can be scheduled to run at fixed intervals, but data doesn’t always arrive on a predictable schedule. This mismatch can lead to stale data and delayed insights.

Fabric events and Azure events solve this problem by emitting events when a file is created, updated, or deleted in OneLake or Azure Blob Storage. These events can be consumed by Activator, which can then trigger Fabric items (e.g., data pipelines or notebooks) or Power Automate workflows.

This event-driven workflow enables:

  • Faster time-to-insight with real-time data processing
  • Reduced costs by eliminating unnecessary job (i.e., pipeline or notebook) runs
  • Greater automation and responsiveness in your data workflows
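
To make the trigger concrete, here is a rough sketch of the kind of payload such an event carries. The field names and nesting below are illustrative assumptions, loosely modeled on the Event Grid/CloudEvents style, not the exact OneLake event schema; refer to the Fabric events documentation for the authoritative shape.

# Illustrative only: an approximate shape for a OneLake file-created event.
# Field names and nesting are assumptions, not the documented schema.
sample_event = {
    "eventType": "Microsoft.Fabric.OneLake.FileCreated",
    # Hypothetical subject path: workspace / item / folder / file.
    "subject": "MyWorkspace/DemoLakehouse.Lakehouse/Files/Source/sales_2024_06.csv",
    "eventTime": "2024-06-01T12:00:00Z",
    "data": {
        "fileName": "sales_2024_06.csv",   # assumed field
        "contentLength": 1024,             # assumed field
    },
}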

Tutorial: Automatically Ingest and Process Files with Event-Driven Pipelines

In this tutorial, you will:

  1. Monitor a folder in OneLake for new CSV files
  2. Trigger a Fabric pipeline when a file is created
  3. Process and load the data into a Lakehouse table, with no manual intervention and no schedule


Step 1: Create a Lakehouse

First, let’s create a Lakehouse where we will upload the CSV files and where the resulting table will be created.

  1. In your Fabric workspace, select New item > Lakehouse.
  2. Name it DemoLakehouse and select Create.
  3. Right-click on the Files folder, then select New subfolder.
  4. Name the subfolder Source and select Create.
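
If you prefer to script this part, OneLake exposes an ADLS Gen2-compatible endpoint, so the Source subfolder can also be created with the azure-storage-file-datalake package. This is a hedged sketch, not a required step: the workspace name MyWorkspace is a placeholder, and it assumes azure-identity is set up for authentication.

# Sketch: create Files/Source in DemoLakehouse via OneLake's ADLS Gen2-compatible endpoint.
# "MyWorkspace" is a placeholder; replace it with your Fabric workspace name.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://onelake.dfs.fabric.microsoft.com",
    credential=DefaultAzureCredential(),
)

# In OneLake, the "file system" is the workspace and items are top-level folders.
workspace = service.get_file_system_client("MyWorkspace")
workspace.get_directory_client("DemoLakehouse.Lakehouse/Files/Source").create_directory()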

Step 2: Build Your Data Pipeline

Next, configure a data pipeline to ingest, transform, and deliver the data in your Lakehouse.

  1. In your workspace, select New item > Data pipeline.
  2. Name it DemoPipeline and select Create.
  3. In the pipeline, select Pipeline activity > Copy data and configure it with these properties:
    1. In the General tab:
      • Name: CSVtoTable
    2. In the Source tab:
      • Connection: DemoLakehouse
      • Root folder: Files
      • File path: enter Source as the Directory
      • File format: DelimitedText
    3. In the Destination tab:
      • Connection: DemoLakehouse
      • Root folder: Tables
      • Table: SalesTable
    4. In the Mapping tab, add two mappings:
      • date > Date
      • total > SalesValue
  4. Save the pipeline.
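
For context, the Copy activity above does roughly what the following notebook cell would do in a Fabric notebook with DemoLakehouse attached as its default lakehouse. This is a hedged sketch, not part of the tutorial, and it assumes the CSVs have a header row with date and total columns.

# Rough notebook-equivalent of the Copy activity (sketch only).
# Assumes a Fabric notebook where `spark` is the preconfigured SparkSession and
# DemoLakehouse is the default lakehouse, so "Files/Source" resolves to its Files area.
df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("Files/Source")
)

sales = (
    df.withColumnRenamed("date", "Date")
      .withColumnRenamed("total", "SalesValue")
)

# Append into the managed table the pipeline writes to.
sales.write.mode("append").saveAsTable("SalesTable")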

[Screenshot: Build Your Data Pipeline]

 

 

Step 3: Set Up a Trigger

  1. Go to the Real-Time hub > Fabric events page.
  2. Hover over OneLake events and select Set Alert to start configuring your trigger.
  3. In the alert configuration:
    1. For the Source, select events with the following properties:
      • Event type(s):
        • Microsoft.Fabric.OneLake.FileCreated
        • Microsoft.Fabric.OneLake.FileDeleted
      • Source:
        • Select DemoLakehouse
        • Select the Source folder
      • Select Next, then Save.
    2. For the Action, select Run a Fabric item with the following properties:
      • Workspace: your workspace
      • Item: DemoPipeline
    3. For the Save location:
      • Workspace: your workspace
      • Item: Create a new item
      • New item name: My Activator
    4. Select Create.

This setup ensures your pipeline runs instantly whenever a new file appears in the source folder.
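
Conceptually, the alert you just configured behaves like the following check. This is a plain-Python illustration of the rule, not how Activator is implemented, and the event fields are the same assumed ones from the earlier payload sketch.

# Conceptual illustration of the configured rule (not Activator's actual implementation).
WATCHED_TYPES = {
    "Microsoft.Fabric.OneLake.FileCreated",
    "Microsoft.Fabric.OneLake.FileDeleted",
}
# Hypothetical subject prefix for the watched folder.
WATCHED_PREFIX = "MyWorkspace/DemoLakehouse.Lakehouse/Files/Source/"

def should_run_pipeline(event: dict) -> bool:
    """Return True when an event matches the alert's source filter."""
    return (
        event.get("eventType") in WATCHED_TYPES
        and event.get("subject", "").startswith(WATCHED_PREFIX)
    )

# e.g. should_run_pipeline(sample_event) from the earlier sketch would return True.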

[Screenshot: Set Up a Trigger]

Step 4: Test the Workflow

To test your workflow:

  • Upload a CSV file with date and total columns to the Source folder in your DemoLakehouse (a minimal sample can be generated and uploaded with the sketch after this list).
  • A FileCreated event is emitted, which triggers DemoPipeline through My Activator.
  • After processing, you’ll see the SalesTable with the newly ingested and transformed data, ready for use.
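
If you don’t have a file handy, here is a hedged sketch that builds a two-column sample CSV in memory and uploads it through OneLake’s ADLS Gen2-compatible endpoint, using the same placeholder workspace name (MyWorkspace) and client setup as the Step 1 sketch.

# Sketch: generate a tiny sample CSV and drop it into Files/Source to fire the trigger.
# "MyWorkspace" is a placeholder; authentication assumes azure-identity is configured.
import csv
import io

from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# Build a small CSV in memory with the two columns the mapping expects.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["date", "total"])
writer.writerow(["2024-06-01", "1250.00"])
writer.writerow(["2024-06-02", "980.50"])

service = DataLakeServiceClient(
    account_url="https://onelake.dfs.fabric.microsoft.com",
    credential=DefaultAzureCredential(),
)
source_dir = service.get_file_system_client("MyWorkspace").get_directory_client(
    "DemoLakehouse.Lakehouse/Files/Source"
)

# Uploading the file emits a OneLake FileCreated event, which fires the alert.
file_client = source_dir.get_file_client("sample_sales.csv")
file_client.upload_data(buffer.getvalue().encode("utf-8"), overwrite=True)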

No manual refresh. No waiting for the next scheduled run. Your pipeline runs in real time.

[Screenshot: Test the Workflow]

 

The Result: Seamless Automation

With just a few steps, we’ve built a responsive, event-driven workflow. Every time data lands in your Lakehouse, it’s automatically ingested, transformed, and ready for downstream analytics. While this demo focused on OneLake Events, you can achieve the same scenario using Azure Blob Storage events.

 

More Event-Driven Use Cases

Beyond the use case we explored, here are additional scenarios where you can leverage OneLake and Azure Blob Storage events in Microsoft Fabric:

  • Trigger a notebook through Activator for advanced data science preprocessing.
  • Forward events to a webhook through Eventstreams for custom compliance and data quality scans.
  • Get alerted when critical datasets are modified through Activator’s Teams and email notifications.
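
As an example of the webhook scenario, a custom endpoint can be as small as the sketch below. It is illustrative only: the payload format your endpoint receives depends on how the eventstream destination is configured, so treat the field names as assumptions.

# Minimal sketch of a custom webhook receiver for forwarded events (illustrative only).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class EventHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        try:
            payload = json.loads(body)
        except json.JSONDecodeError:
            self.send_response(400)
            self.end_headers()
            return
        # Run whatever compliance or data-quality check you need here.
        events = payload if isinstance(payload, list) else [payload]
        for event in events:
            # "eventType" / "type" are assumed field names, not a documented schema.
            print("received event:", event.get("eventType") or event.get("type"))
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), EventHandler).serve_forever()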

Next steps

Ready to streamline your Fabric applications with an event-driven architecture? Start exploring Fabric events and Azure events today to unlock real-time automation in your data workflows. To learn more, see the Azure and Fabric events documentation.

 

Stay tuned for new event group types, consumers, and enhancements for Azure and Fabric events that will further simplify real-time data processing, automation, and analytics. We are committed to improving the event-driven capabilities in Fabric, so we encourage you to share your suggestions and feedback at Fabric Ideas under the Real-Time Hub category and join the conversation in the Fabric Community.