The ultimate Microsoft Fabric, Power BI, Azure AI, and SQL learning event: Join us in Stockholm, September 24-27, 2024.
I'd like to know from other users whether you have experienced anything similar to what the screenshot shows.
- The first attempt (success) occurred after I made some changes to the dataflow and lakehouse table.
- The second attempt was triggered by running the DF (no mods) in a data pipeline.
- The third attempt was manually triggered. DF is unchanged.
- The fourth attempt was manually triggered. DF is unchanged.
The situation is downright ridiculous, with no way to terminate refreshes and no way of knowing how long it will keep going in an endless spiral.
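(For anyone searching with the same "cannot terminate a refresh" problem: the Power BI REST API documents a "Cancel Dataflow Transaction" endpoint for Gen1 dataflows. Whether it also works against Fabric Gen2 dataflows is an assumption, not something I have confirmed. A minimal sketch, with placeholder IDs and token:)

```python
# Hedged sketch: cancelling a stuck dataflow refresh via the Power BI REST
# API's "Cancel Dataflow Transaction" endpoint. Documented for Gen1 dataflows;
# applicability to Fabric Gen2 dataflows is an assumption. group_id,
# transaction_id, and token are placeholders you must supply.
import urllib.request


def cancel_url(group_id: str, transaction_id: str) -> str:
    # Build the cancel endpoint for a workspace (group) and refresh
    # transaction. Transaction IDs come from the Get Dataflow Transactions API.
    return ("https://api.powerbi.com/v1.0/myorg/groups/"
            f"{group_id}/dataflows/transactions/{transaction_id}/cancel")


def cancel_refresh(group_id: str, transaction_id: str, token: str) -> int:
    # POST with an empty body and an AAD bearer token; a 200 status means
    # the cancel request was accepted.
    req = urllib.request.Request(
        cancel_url(group_id, transaction_id),
        method="POST",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

This only helps if the refresh shows up as a transaction in the Gen1 API surface; for Gen2 it may be a no-op, in which case a support ticket remains the only recourse.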
Hi - we invested a lot of time in porting very stable Gen1 dataflows over to Gen2 dataflows, as the concept is great. However, they are just not very stable at the moment, and the performance is poor at best.
We have architected a solution by using our old Gen1 dataflows to load into datamarts and then using the Copy activity in data pipelines to move the data into our Warehouse.
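For context, the pipeline step is just a Copy activity with the datamart's SQL endpoint as source and the Warehouse as sink. Roughly like the fragment below - the `type` names, query, and table names are illustrative placeholders, not the exact connector names Fabric uses:

```json
{
  "name": "CopyDatamartToWarehouse",
  "type": "Copy",
  "typeProperties": {
    "source": {
      "type": "SqlSource",
      "sqlReaderQuery": "SELECT * FROM dbo.SalesStaging"
    },
    "sink": {
      "type": "DataWarehouseSink",
      "tableOption": "autoCreate"
    }
  }
}
```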
If you are interested in knowing more, let me know.
@BryanCarmichael I appreciate your kind offer. Right now, I have created a plan B using Azure, which I know will work. So, for now I would say that I can move forward in some way, but I'll let you know if I need to take a look at your Gen1 solution. Thank you!
Through a support ticket, I found out a column in a source table contained an unsupported data type. The surprising part is that this table had nothing to do with the dataflow. It got me thinking:
1. Does the dataflow refresh look at all source tables in the lakehouse, even those not used?
2. How much data can a dataflow handle? In terms of MB/GB? In terms of the final number of rows and columns?
@ebjim I am having the exact same issue - me being the only one using the capacity and having only one dataflow running. Would love to hear any fixes you come up with 🙂
@HimanshuS-msft - could this be a compute issue? Sometimes if I have a flow running it gets locked for 8 hours as well, and then I am also unable to work with any other resources in Fabric.
@DataPne I have no fixes to offer up. Since we as users cannot even terminate any refresh, complaining to MSFT is the only recourse I can think of. Because we get errors that just seem 'way out there', I suspect there are memory leaks or caching problems behind the scenes that have yet to be addressed. When I was using Azure, nothing of this sort happened.
@HimanshuS-msft Thank you for your feedback. As a trial user, I am aware of the limits of assigned resources. That is why I only refresh one DF at a time. What I find troubling is that there seems to be a point of complexity within a DF beyond which refreshing totally breaks down (like a cliff dropoff). Oftentimes, I am the only one using the trial account, so it's not a high-load situation. I would still encourage the Fabric group at Microsoft to resolve such issues.
Hello @ebjim
Thanks for using the Fabric community.
I agree with you that this does not look right. The same DF takes 8 hrs and then fails, while in the best-case scenario it completes in 4 mins. I think we need to check if there are other workloads running that are consuming the capacity units - please do read this
If it's not related to CU issues, I would suggest you work with MS support, as they can dig deeper into this.
Thanks
Himanshu