Hey all,
I've successfully established a pipeline to import data from Azure Blob into our Fabric Lakehouse. Now, I'm exploring ways to automate the addition of new files from different sources into our Lakehouse. Specifically, is there a method to schedule this pipeline for automatic updates as new data arrives? Additionally, it's crucial that changes in the source data reflect accurately within our Fabric environment.
If this is possible in Fabric, how can I achieve this?
Thanks!
Hi @supri03
Thanks for using Microsoft Fabric Community.
Currently, Fabric supports scheduled triggers for pipelines, so Fabric can run your pipeline automatically on a schedule. More trigger types, including those supported by Azure Data Factory, are planned for Microsoft Fabric and should be added shortly.
For more details please refer: Link
I hope this information helps. Please do let us know if you have any further questions.
Thanks.
Just to update as of April 2025: there are now many ways to trigger a pipeline run.

However, if you have something like a continuous data feed into Fabric Lakehouse storage, one great pattern is a scanner pipeline that runs every X minutes, gets the list of landed files, and processes them in child notebooks/pipelines running under a known concurrency limit, where each child process deletes or relocates its assigned file after it succeeds.

This guarantees that all files are processed in a timely manner. Even if an individual process instance fails, that file gets picked up in the next scan-and-assignment wave. I have had one such hopper-processor running every 5 minutes for close to 2 years now, with a sub-0.01% failure rate.

The adjustable concurrency limit effectively throttles and distributes processing for spiky feed sources, avoiding running out of compute. For example, if the processing requires a notebook run (or a child pipeline run) and your feed lands 500 (or 10,000, respectively) files, unconstrained parallel execution will immediately swamp an average capacity's available compute and pipeline nodes.
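The scan-and-assign pattern above can be sketched in plain Python. This is a minimal illustration, not Fabric code: the paths, the `process_file` body, and the directory names are all placeholder assumptions, and in a real Lakehouse you would use Fabric's file APIs and child notebook/pipeline invocations instead of local file operations. The key ideas it shows are the bounded concurrency and the relocate-on-success step that makes failed files retryable on the next wave.

```python
# Sketch of the scheduled "hopper" scanner: snapshot the inbox, process
# files with a bounded worker pool, and relocate each file only after it
# succeeds so failures are retried automatically on the next scan.
# All paths and the processing body are hypothetical placeholders.
import os
import shutil
from concurrent.futures import ThreadPoolExecutor

INBOX = "/tmp/hopper/inbox"          # assumed landing zone for the feed
PROCESSED = "/tmp/hopper/processed"  # files are moved here on success
CONCURRENCY_LIMIT = 4                # throttle so spiky feeds don't swamp compute

def process_file(name: str) -> str:
    """Placeholder for the real work a child notebook/pipeline would do."""
    src = os.path.join(INBOX, name)
    with open(src) as f:
        f.read()  # ... transform / load into the Lakehouse here ...
    # Relocate only after success; a file left in INBOX by a failed run
    # is simply picked up again by the next scan-and-assignment wave.
    shutil.move(src, os.path.join(PROCESSED, name))
    return name

def scan_and_process() -> list[str]:
    """One scheduled wave: list landed files, process with bounded parallelism."""
    files = sorted(os.listdir(INBOX))
    with ThreadPoolExecutor(max_workers=CONCURRENCY_LIMIT) as pool:
        return list(pool.map(process_file, files))
```

In Fabric itself, the scheduler would invoke the scanner pipeline every X minutes and the pool would be replaced by parallel child notebook or pipeline runs with the concurrency limit set on the loop activity.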
Hey,
unfortunately, as of today pipelines only support a scheduled trigger, and there are no APIs associated with pipelines to trigger them externally.
So for now, event-based triggers are not possible for Fabric pipelines.
Oh Alright, do you have a resource or documentation that talks about this?
Hi @supri03
Glad that your query got resolved.
Please continue using Fabric Community for any help regarding your queries.
MSFT doc:
https://learn.microsoft.com/en-us/fabric/data-factory/pipeline-runs