I created a Dataflow Gen2 to get data from Databricks. I can see the preview data very quickly (around 5 seconds). But when I run the dataflow, it takes 8 hours and then cancels with a timeout. I’m trying to get 8 tables with the same schema. Six of them work fine with no problems, but with two of them I’m experiencing the issue I just described. The table sizes are around 50 MB.
What can I do to solve this issue?
Thank you, @ssrithar and @mabdollahi, for your responses.
Hi Martins1234,
We appreciate your inquiry through the Microsoft Fabric Community Forum.
We would like to check whether you have had a chance to try the solutions provided by @ssrithar and @mabdollahi to resolve the issue. We hope the information provided helps to clear the query. Should you have any further questions, please feel free to contact the Microsoft Fabric community.
Thank you.
Hi @Martins1234 ,
In addition to what @ssrithar mentioned, it's also worth checking query folding and staging behavior in Dataflow Gen2. The preview only samples data, but during a full run any non-folding step (data type change, rename, reorder, custom column) can force Fabric to process all rows in the mashup engine, which can lead to long runtimes and timeouts.
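As a rough illustration, here is a minimal M sketch of the pattern to look for. The host, warehouse path, and catalog/table names are hypothetical placeholders, and the exact navigation steps depend on your connector version; the point is where folding breaks.

```powerquery-m
// Hypothetical Databricks query — host, path, and names are placeholders.
let
    Source = Databricks.Catalogs(
        "adb-0000000000000000.0.azuredatabricks.net",
        "/sql/1.0/warehouses/0000000000000000",
        []
    ),
    // Navigation and row filtering typically fold back to Databricks as
    // SQL, so filtering happens at the source.
    Orders = Source{[Name = "main"]}[Data]{[Name = "sales"]}[Data]{[Name = "orders"]}[Data],
    Filtered = Table.SelectRows(Orders, each [order_date] >= #date(2024, 1, 1)),
    // Steps like a type change or custom column may break folding on some
    // connectors, pulling every remaining row into the mashup engine.
    Typed = Table.TransformColumnTypes(Filtered, {{"amount", type number}})
in
    Typed
```

In the Power Query editor, the step folding indicators (or "View data source query" on a step) show exactly where folding stops; for the two failing tables, check whether it stops earlier than it does for the six that succeed.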
A few practical additions:
Verify folding stays intact for the two failing tables all the way to the source step.
Disable staging for those queries if it’s enabled.
Load the tables independently (one dataflow per table) to rule out cross-query contention.
Check Fabric capacity pressure during the run — even small tables can stall if the capacity is throttled.
Together with schema alignment and Databricks OPTIMIZE, this usually resolves “fast preview, slow refresh” issues.
Regards,
Mehrdad Abdollahi
A mismatch between the dataflow output and the destination table is a common cause of such timeouts. Ensure the column order in your dataflow exactly matches the column order in the destination table.
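One way to make the output order explicit, rather than relying on whatever order earlier steps produce, is a final reorder step. This is a minimal sketch — the column names and the preceding step name are hypothetical and should match your destination table:

```powerquery-m
// Minimal sketch — column names and "PreviousStep" are placeholders.
let
    Source = PreviousStep,
    // Force the output columns into the destination table's exact order.
    // MissingField.Error makes a renamed or dropped column fail fast in
    // the editor instead of surfacing as a mismatch at load time.
    Reordered = Table.ReorderColumns(
        Source,
        {"order_id", "order_date", "customer_id", "amount"},
        MissingField.Error
    )
in
    Reordered
```

Applying the same final step to all eight queries also makes it easy to diff the two failing tables against the six that work.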