I created a Dataflow Gen2 to get data from Databricks. I can see the preview data very quickly (around 5 seconds). But when I run the dataflow, it takes 8 hours and then cancels with a timeout. I’m trying to get 8 tables with the same schema. Six of them work fine with no problems, but with two of them I’m experiencing the issue I just described. The table sizes are around 50 MB.
What can I do to solve this issue?
Hi Martins1234,
Thank you for the follow-up.
Based on my understanding, the behaviour may be caused by a Dataflow Gen2 execution limitation in Microsoft Fabric when loading certain Databricks Delta tables. Although the preview runs quickly and Databricks processes the query in milliseconds, a full Dataflow Gen2 refresh can fall back to the mashup engine and fully materialise all rows within the Fabric capacity. This can result in high memory usage, capacity pressure, and eventual timeouts, even for relatively small tables of 40 to 50 MB. Applying a filter reduces the amount of data the mashup engine has to process, which explains why the filtered refresh succeeds. Therefore, this might not be an issue with Databricks or the data quality.
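To make that concrete, here is a minimal Power Query M sketch of the pattern that keeps the filter on the Databricks side: apply it directly after the source navigation so it can fold, rather than after any transformation steps. The connector call, host, warehouse path, catalog/schema/table names and the LoadDate column are placeholders; reuse the Source and navigation steps your own dataflow generated.

let
    // Source and navigation steps as generated by the Databricks connector (placeholder values).
    Source  = Databricks.Catalogs("adb-0000000000000000.0.azuredatabricks.net", "/sql/1.0/warehouses/0000000000000000", null),
    Catalog = Source{[Name = "my_catalog"]}[Data],
    Schema  = Catalog{[Name = "my_schema"]}[Data],
    Fact    = Schema{[Name = "my_table"]}[Data],
    // Filtering as the first applied step lets the predicate fold back to Databricks.
    // The same filter placed after a rename, type change or custom column can force the
    // mashup engine to pull every row before filtering.
    Recent  = Table.SelectRows(Fact, each [LoadDate] >= #date(2025, 1, 1))
in
    Recent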
Given the consistent timeout behaviour, please raise a support ticket using the link below so that the backend execution logs can be reviewed for confirmation: Microsoft Fabric Support and Status | Microsoft Fabric
We hope this information helps to resolve the issue. Should you have any further queries, please feel free to contact the Microsoft Fabric community.
Thank you.
I tried following all the recommendations. The dataflow doesn’t work, even when using only one table, and I don’t know what else to try. I also checked in Databricks, and the request is processed in just a few milliseconds.
It only works when I apply a filter to bring fewer rows into the dataflow, but that doesn't make sense, since the table is only about 40 MB.
Hi @Martins1234
I was facing the same issue a couple of days ago, even though I wasn’t using many transformations—only appending tables. Splitting the queries into smaller chunks helped resolve the problem.
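In case it helps to make "smaller chunks" concrete, one way to express the split in Power Query M is to filter the source into key ranges (each of which folds to Databricks as its own small query) and append the results. The Id column and range boundaries below are invented for illustration, and whether this helps depends on where the bottleneck is; the chunks can equally be loaded as separate queries or separate dataflows.

let
    Source   = Databricks.Catalogs("adb-0000000000000000.0.azuredatabricks.net", "/sql/1.0/warehouses/0000000000000000", null),
    BigTable = Source{[Name = "my_catalog"]}[Data]{[Name = "my_schema"]}[Data]{[Name = "my_table"]}[Data],
    // Each slice is a small, foldable query against Databricks.
    Chunk1   = Table.SelectRows(BigTable, each [Id] >= 0 and [Id] < 1000000),
    Chunk2   = Table.SelectRows(BigTable, each [Id] >= 1000000 and [Id] < 2000000),
    Chunk3   = Table.SelectRows(BigTable, each [Id] >= 2000000),
    // Append the slices before loading to the destination.
    Combined = Table.Combine({Chunk1, Chunk2, Chunk3})
in
    Combined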
Proud to be a Super User!
Hi Martins1234,
We are following up to see if what we shared solved your issue. If you need more support, please reach out to the Microsoft Fabric community.
Thank you.
Thank you, @ssrithar and @mabdollahi, for your responses.
Hi Martins1234,
We appreciate your inquiry through the Microsoft Fabric Community Forum.
We would like to ask whether you have had a chance to try the solutions provided by @ssrithar and @mabdollahi to resolve the issue. We hope the information provided helps to clear the query. Should you have any further queries, please feel free to contact the Microsoft Fabric community.
Thank you.
Hi @Martins1234 ,
In addition to what @ssrithar mentioned, it's also worth checking query folding and staging behavior in Dataflow Gen2. The preview only samples the data, but during a full run any non-folding step (data type change, rename, reorder, custom column) can force Fabric to process all rows in the mashup engine, which can lead to long runtimes and timeouts; a folding-friendly query shape is sketched after the list below.
A few practical additions:
Verify folding stays intact for the two failing tables all the way to the source step.
Disable staging for those queries if it’s enabled.
Load the tables independently (one dataflow per table) to rule out cross-query contention.
Check Fabric capacity pressure during the run — even small tables can stall if the capacity is throttled.
Together with schema alignment and Databricks OPTIMIZE, this usually resolves “fast preview, slow refresh” issues.
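For reference, a hedged sketch of that step ordering in Power Query M: the foldable steps (column selection, row filter) come first so they can become part of the query sent to Databricks, and the usual folding-breakers (renames, type changes) come last, so only the reduced result is handled locally. All names are placeholders; staging is toggled per query via the "Enable staging" option in the Dataflow Gen2 editor rather than in M.

let
    Source   = Databricks.Catalogs("adb-0000000000000000.0.azuredatabricks.net", "/sql/1.0/warehouses/0000000000000000", null),
    Raw      = Source{[Name = "my_catalog"]}[Data]{[Name = "my_schema"]}[Data]{[Name = "my_table"]}[Data],
    // Folding-friendly steps first: these can translate into the query sent to Databricks.
    Selected = Table.SelectColumns(Raw, {"Id", "LoadDate", "Amount"}),
    Filtered = Table.SelectRows(Selected, each [LoadDate] >= #date(2025, 1, 1)),
    // Potential folding-breakers last, applied only to the already-reduced rows.
    Renamed  = Table.RenameColumns(Filtered, {{"Amount", "AmountEUR"}}),
    Typed    = Table.TransformColumnTypes(Renamed, {{"AmountEUR", type number}})
in
    Typed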
Regards,
Mehrdad Abdollahi
A mismatch between the dataflow output and the destination table is a common cause of such timeouts. Ensure the column order in your Dataflow output exactly matches the column order in the destination table.
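If it is easier to enforce that alignment in the query itself, here is a small self-contained M sketch: select exactly the destination's columns, in the destination's order, as the final step. The column names are placeholders for your destination schema, and the inline #table stands in for the output of your existing steps.

let
    // Stand-in for the output of your existing transformation steps.
    DataflowOutput = #table(
        {"Amount", "Id", "LoadDate"},
        {{100.0, 1, #date(2025, 1, 1)}, {250.0, 2, #date(2025, 1, 2)}}
    ),
    // Destination schema, listed in the destination table's column order.
    DestinationColumns = {"Id", "LoadDate", "Amount"},
    // Selecting by this list also reorders the columns; MissingField.Error
    // surfaces schema drift immediately instead of at load time.
    Aligned = Table.SelectColumns(DataflowOutput, DestinationColumns, MissingField.Error)
in
    Aligned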