Hi Community,
I'm regularly running into issues with my Dataflow Gen2 items. Scheduled dataflows frequently fail, and the refresh time then stretches from the usual 5 minutes to 2 hours. This results in a massive overuse of our F2 capacity. Normally F2 is sufficient for our needs, but we constantly have to upgrade and downgrade it just to keep using Fabric.
I've tried placing the dataflows in a pipeline and adding a timeout + retry mechanism. Unfortunately, once a dataflow starts, it keeps running, and the pipeline cannot cancel it.
Moreover, I can't seem to find the reason why the dataflow fails in the first place.
Do you have any ideas on how to tackle this issue effectively?
You can actually cancel a refresh of a dataflow
https://learn.microsoft.com/en-us/fabric/data-factory/dataflow-gen2-refresh
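In case it helps, here is a minimal sketch of cancelling a running refresh programmatically, assuming the Fabric Core Job Scheduler REST API (Cancel Item Job Instance) and token acquisition via azure-identity. The workspace, dataflow, and job-instance IDs are placeholders you'd need to fill in for your tenant:

```python
# Minimal sketch: request cancellation of a running Dataflow Gen2 refresh
# via the Fabric Core Job Scheduler REST API (Cancel Item Job Instance).
# All IDs below are placeholders.
import requests
from azure.identity import DefaultAzureCredential

WORKSPACE_ID = "<workspace-guid>"        # placeholder
DATAFLOW_ID = "<dataflow-item-guid>"     # placeholder
JOB_INSTANCE_ID = "<job-instance-guid>"  # placeholder (the running refresh)

# Assumes azure-identity can obtain a token for the Fabric API scope.
token = DefaultAzureCredential().get_token(
    "https://api.fabric.microsoft.com/.default"
).token
headers = {"Authorization": f"Bearer {token}"}

# A 202 response means the cancellation request was accepted; the job
# may take a short while to actually stop.
resp = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
    f"/items/{DATAFLOW_ID}/jobs/instances/{JOB_INSTANCE_ID}/cancel",
    headers=headers,
)
resp.raise_for_status()
print("Cancellation requested, status:", resp.status_code)
```

You could wrap something like this in a scheduled notebook as a watchdog that cancels refreshes running longer than your normal 5-minute window.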
Yes, I know the refresh can be canceled manually. Our dataflows run every night, though, and even when one runs long during office hours we wouldn't notice, since these are automated jobs that nobody is watching.
Definitely reach out to the support team to figure out why the Dataflow Gen2 is having issues refreshing:
https://support.fabric.microsoft.com/support
This has also happened to me. I've had to stop the automatic refresh schedule completely and debug the flow before resuming refreshes. It's business-breaking, since once you're over capacity you can't access any Fabric items. It's kind of insane.
What is the error message after the dataflow fails?
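If it's hard to find in the Monitoring hub, something like the sketch below might surface it. It assumes the Fabric Core Job Scheduler REST API (List Item Job Instances) and that failed instances carry a failure reason; the IDs are placeholders:

```python
# Minimal sketch: list recent refresh (job) instances of a Dataflow Gen2 item
# and print the failure reason of any failed run, using the Fabric Core
# Job Scheduler REST API (List Item Job Instances). IDs are placeholders.
import requests
from azure.identity import DefaultAzureCredential

WORKSPACE_ID = "<workspace-guid>"     # placeholder
DATAFLOW_ID = "<dataflow-item-guid>"  # placeholder

token = DefaultAzureCredential().get_token(
    "https://api.fabric.microsoft.com/.default"
).token
headers = {"Authorization": f"Bearer {token}"}

url = (f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
       f"/items/{DATAFLOW_ID}/jobs/instances")
resp = requests.get(url, headers=headers)
resp.raise_for_status()

# Print start time, error code, and message for each failed refresh.
for job in resp.json().get("value", []):
    if job.get("status") == "Failed":
        reason = job.get("failureReason") or {}
        print(job.get("startTimeUtc"),
              reason.get("errorCode"),
              reason.get("message"))
```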