Hi Community,
I'm regularly running into issues with my Dataflow Gen2 objects. Scheduled dataflows frequently fail, and the refresh time stretches from the usual 5 minutes to 2 hours. This results in a massive overuse of our F2 capacity. Normally F2 is sufficient for our needs, but we constantly have to scale it up and back down just to keep using Fabric.
I've tried placing the dataflows in a pipeline and adding a timeout + retry mechanism. Unfortunately, once a dataflow starts, it keeps running, and the pipeline cannot cancel it.
Moreover, I can't seem to find the reason why the dataflow fails in the first place.
Do you have any ideas on how to tackle this issue effectively?
You can actually cancel a refresh of a dataflow
https://learn.microsoft.com/en-us/fabric/data-factory/dataflow-gen2-refresh
Yes, I know the dataflow can be cancelled manually. But our dataflows run every night, and even if one ran long during office hours we wouldn't notice, since these are unattended, automated jobs.
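Since nobody is watching the runs, the cancellation could itself be automated. Below is a minimal sketch (not tested against your tenant) that assumes the Fabric Job Scheduler REST API's "List Item Job Instances" and "Cancel Item Job Instance" endpoints; the workspace/dataflow IDs and the 30-minute threshold are placeholders you would swap in:

```python
# Sketch only: cancel any Dataflow Gen2 refresh that has been running too long,
# using the Fabric Job Scheduler REST API (List / Cancel Item Job Instance).
# The IDs and the threshold below are placeholders for illustration.
from datetime import datetime, timezone

import requests
from azure.identity import DefaultAzureCredential

WORKSPACE_ID = "<workspace-guid>"
DATAFLOW_ID = "<dataflow-gen2-item-guid>"
MAX_MINUTES = 30  # anything running longer than this gets cancelled

base = f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}/items/{DATAFLOW_ID}"

# Token for the Fabric REST API (user identity or service principal).
token = DefaultAzureCredential().get_token("https://api.fabric.microsoft.com/.default").token
headers = {"Authorization": f"Bearer {token}"}

# Pull the recent job instances (refreshes) for this dataflow item.
resp = requests.get(f"{base}/jobs/instances", headers=headers)
resp.raise_for_status()

now = datetime.now(timezone.utc)
for job in resp.json().get("value", []):
    if job.get("status") != "InProgress":
        continue
    # Timestamps come back as UTC ISO strings; parsing is kept deliberately simple.
    started = datetime.strptime(job["startTimeUtc"][:19], "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)
    running_minutes = (now - started).total_seconds() / 60
    if running_minutes > MAX_MINUTES:
        cancel = requests.post(f"{base}/jobs/instances/{job['id']}/cancel", headers=headers)
        print(f"Cancelled refresh {job['id']} after {running_minutes:.0f} min (HTTP {cancel.status_code})")
```

Something like this could run from a small scheduled notebook or an Azure Function every 10-15 minutes, so a stuck refresh gets killed long before it burns two hours of F2 capacity.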
Definitely reach out to the support team to figure out why the Dataflow Gen2 is having issues refreshing:
https://support.fabric.microsoft.com/support
This has also happened to me. I've had to stop the automatic refresh schedule completely and debug the flow before resuming refreshes. It's business-breaking, since once you're over capacity you can't access any Fabric items. It's kind of insane.
What is the error message after the dataflow fails?
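If the error isn't obvious from the refresh history in the workspace, the job instance list for the item usually carries the failure details. A rough sketch, again assuming the Fabric Job Scheduler REST API and placeholder IDs:

```python
# Sketch only: print the failure details of recent Dataflow Gen2 refreshes
# via the Fabric Job Scheduler REST API ("List Item Job Instances").
import requests
from azure.identity import DefaultAzureCredential

WORKSPACE_ID = "<workspace-guid>"
DATAFLOW_ID = "<dataflow-gen2-item-guid>"

token = DefaultAzureCredential().get_token("https://api.fabric.microsoft.com/.default").token
url = (f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
       f"/items/{DATAFLOW_ID}/jobs/instances")

resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()

for job in resp.json().get("value", []):
    if job.get("status") == "Failed":
        # failureReason is expected to hold an error code and message for failed runs.
        print(job.get("startTimeUtc"), job.get("failureReason"))
```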