Hi Developers,
Recently I was working on an automation integrating Power Automate and Fabric. The requirement is to move data from a SharePoint document library to a Fabric Warehouse/Lakehouse.
I created a Power Automate flow that is triggered when a user adds a new file to the SharePoint document library; the flow then calls the API to run the data pipeline.
Where I'm stuck: when user 'A' adds a file and the flow triggers the API, and at the same time user 'B' adds another file and triggers another API call, the second call fails with an error saying the data pipeline is already in progress.
The business requirement is that when a user adds a file, the data should be moved to Fabric without any parallelism issues.
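For reference, the flow's HTTP action calls the Fabric on-demand job endpoint for the pipeline. Below is a minimal sketch of that call shape plus a retry when a run is already in progress; the workspace/pipeline IDs, the status codes treated as "busy", and the retry policy are all illustrative assumptions, not my exact flow:

```python
import time

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def run_pipeline_url(workspace_id: str, pipeline_id: str) -> str:
    # On-demand job endpoint for a data pipeline item (jobType=Pipeline).
    return (f"{FABRIC_API}/workspaces/{workspace_id}"
            f"/items/{pipeline_id}/jobs/instances?jobType=Pipeline")

def trigger_with_retry(post, max_attempts: int = 5, delay_s: float = 2.0) -> bool:
    """Call `post()` (stands in for the flow's HTTP POST; returns a status
    code) until the run is accepted. A 'pipeline already in progress'
    response is retried after a short wait instead of failing the flow."""
    for _ in range(max_attempts):
        status = post()
        if status in (200, 202):        # run accepted
            return True
        if status not in (400, 409):    # unexpected error: give up
            return False
        time.sleep(delay_s)             # pipeline busy: wait, then retry
    return False
```

With this shape, user B's trigger would wait briefly and retry instead of surfacing an error, at the cost of the flow running longer.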
Hi @JatinSaini ,
Thanks for using Microsoft Fabric Community,
The issue you're facing occurs because Microsoft Fabric Dataflows do not support parallel refresh operations. When two users trigger the same dataflow in quick succession, the second run fails because the dataflow is already in progress. This behavior is designed to prevent the data inconsistencies and conflicts that could arise from multiple simultaneous updates to the same data.
However, there are some workarounds with which you can achieve parallel (or safely queued) processing:
Queue-Based Orchestration:
Introduce a queue mechanism where each pipeline execution is queued, and the pipeline waits for the dataflow refresh to complete before starting the next execution.
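The queueing idea above can be sketched roughly as follows; the helper names `start_run`/`get_status` and the status strings are assumptions standing in for the pipeline-run API:

```python
import time
from collections import deque

def drain_queue(pending: deque, start_run, get_status, poll_s: float = 5.0):
    """Process queued files strictly one at a time: a pipeline run for the
    next file starts only after the previous run reaches a terminal state.
    `start_run(file)` returns a run id; `get_status(run_id)` returns e.g.
    'InProgress', 'Completed', or 'Failed'."""
    results = []
    while pending:
        file = pending.popleft()
        run_id = start_run(file)
        while get_status(run_id) == "InProgress":
            time.sleep(poll_s)          # wait for the current run to finish
        results.append((file, get_status(run_id)))
    return results
```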
Use a custom Notebook:
Notebooks give you more flexibility and control over the data processing logic.
You can use parameters within the notebook to process individual files or data chunks independently.
This lets you run multiple notebooks concurrently, each processing a different part of the data, significantly speeding up the overall process.
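A sketch of that per-file pattern is below; the transform itself and the use of threads to stand in for concurrent notebook runs are illustrative only:

```python
from concurrent.futures import ThreadPoolExecutor

def process_file(path: str) -> str:
    # Illustrative per-file step: in a real notebook this would read the
    # file from the Lakehouse and load it into a Warehouse table; here we
    # just derive the target table name from the file name.
    return path.rsplit("/", 1)[-1].replace(".csv", "")

def process_all(paths):
    # Each file is handled independently, so runs can proceed in parallel
    # without contending for a single shared dataflow refresh.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(process_file, paths))
```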
You can also leverage Fabric's pipeline concurrency capabilities, which allow multiple pipeline instances to run concurrently, each processing a different part of the data.
Please refer to the link below for a similar issue:
Triggers-of-Fabric-Pipelines-Concurrency
To get more insight into Dataflow refresh limitations, please refer to the link below:
dataflow-gen2-refresh#refresh-limitations
If you have any specific questions about how to implement these alternatives or need further clarification, feel free to ask!
If this post helps, please accept it as the solution so others can find it easily; a kudos would also be appreciated.
Thank you.
Hi @JatinSaini ,
We wanted to follow up on your query to see if you were able to resolve the issue. If so, marking the response as the solution and leaving a kudos would be greatly appreciated, as it helps others in the community facing similar challenges.
If you're still experiencing difficulties or need further clarification, please let us know.
Best Regards,
Vinay.
Hi @JatinSaini ,
We just wanted to check in again regarding your issue. If you’ve found a solution, marking the reply as the solution and leaving a kudos would be greatly appreciated—it helps the community and others with similar questions.
If you’re still facing challenges or have further questions, please let us know—we’re here to help and would love to assist you in resolving this.
Looking forward to hearing back from you!
Best regards,
Vinay.
Hi @JatinSaini ,
We haven't heard from you since your last response and just wanted to check whether the solutions provided worked for you. If so, please accept one as the solution to help others benefit. If not, please reach out.
Thank you.
You can prevent the pipeline from running in parallel (additional runs will be queued) by setting its concurrency to 1.
That way, the dataflow is fired only after the previous run has finished.
Pipelines and activities - Azure Data Factory & Azure Synapse | Microsoft Learn
If sequential loading is not acceptable, you should change the loading process (for example, to notebooks or a Copy activity).
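In the pipeline definition described in that article, this is a single top-level property. A rough fragment (the pipeline name and empty activities list are placeholders):

```json
{
  "name": "LoadSharePointFile",
  "properties": {
    "concurrency": 1,
    "activities": []
  }
}
```

With concurrency set to 1, extra triggered runs queue up and execute one after another instead of failing.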