kiltNone
New Member

Dataflow Publish Failed (gen 2)

I am creating a Dataflow Gen2 that reads data from a data source and writes it to a Lakehouse in the same Fabric workspace. It works fine if I don't set incremental refresh, but once I set incremental refresh for the same table it errors on publishing. The data destination is manually set as the Lakehouse (update method: replace, schema: fixed).

Dataflow publish failed
Unexpected error occurred while creating your data destination. Please contact customers support.

 

I tried the same process with a Dataflow (CI/CD); it works when no incremental refresh is set, but when incremental refresh is set it fails validation with the same error.


1 ACCEPTED SOLUTION
kiltNone
New Member

Hi, I managed to make it work by removing the comments from the query and streamlining it.

I set incremental refresh first; once that succeeded, I added the old data by running the dataflow once with append.

 

Thank you!


5 REPLIES
v-echaithra
Community Support

Hi @kiltNone ,

Glad to hear you were able to get it working.

Streamlining the query and removing inline comments can help the query definition validate cleanly and preserve full query folding, which is required for Incremental Refresh to be accepted during publishing.
Configuring Incremental Refresh first allows the dataflow to create and take ownership of the destination table with the expected metadata and structure.
Your approach is valid and aligns with the recommended guidance: Incremental Refresh must establish and own the table initially, which you did before loading the historical data. The one-time append functions as an initial backfill, while subsequent refreshes follow the replace-only behavior managed by the dataflow.



Reference: Incremental refresh in Dataflow Gen2 - Microsoft Fabric | Microsoft Learn
If further assistance is required, we are available to support you and are committed to helping you reach a resolution.


Thank you.


v-echaithra
Community Support

Hi @kiltNone ,

Thank you for the response.

Although Incremental Refresh is listed as supported for Lakehouse destinations, it currently works under replace-only behavior, where the Dataflow must have full ownership and control of how the table is created and maintained.

If Fabric cannot guarantee that level of control during validation (for example, when the table lifecycle or write pattern falls outside this tightly managed model), the publish step can fail with the generic data-destination error you're seeing. This typically happens in scenarios beyond the simple ingestion pattern described in the documentation.

For Lakehouse-based engineering workloads, we recommend implementing incremental processing downstream using Spark (for example, MERGE operations in a Notebook or Pipeline) rather than relying on Dataflow Gen2 Incremental Refresh.
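To make the recommendation concrete: a Delta Lake MERGE updates rows whose key already exists in the target table and inserts rows whose key is new. Below is a minimal pure-Python sketch of those upsert semantics, not Fabric or Spark code; the table shape and the `id` key column are hypothetical, and the Spark SQL shown in the comment is the rough equivalent a notebook would run.

```python
# Plain-Python analogue of Delta Lake MERGE (upsert) semantics.
# In a Fabric notebook this would typically be Spark SQL along the lines of:
#   MERGE INTO target t USING updates u ON t.id = u.id
#   WHEN MATCHED THEN UPDATE SET *
#   WHEN NOT MATCHED THEN INSERT *
# Table and column names here are hypothetical illustrations.

def merge_upsert(target, updates, key="id"):
    """Update rows whose key matches; insert rows whose key is new."""
    by_key = {row[key]: dict(row) for row in target}
    for row in updates:
        by_key[row[key]] = dict(row)  # matched -> update, unmatched -> insert
    return sorted(by_key.values(), key=lambda r: r[key])

target = [{"id": 1, "amount": 10}, {"id": 2, "amount": 20}]
updates = [{"id": 2, "amount": 25}, {"id": 3, "amount": 30}]
print(merge_upsert(target, updates))
# [{'id': 1, 'amount': 10}, {'id': 2, 'amount': 25}, {'id': 3, 'amount': 30}]
```

This is the key behavioral difference from Dataflow Gen2's replace-only writes: existing rows outside the incoming batch survive a MERGE, whereas a replace rewrites the whole table.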

Best Regards
Chaithra E.

v-echaithra
Community Support

Hi @kiltNone ,

Thank you for reaching out to Microsoft Community.

The failure is not caused by a configuration error alone, but by how Dataflow Gen2 currently implements Incremental Refresh with a Lakehouse destination:

Incremental extraction is supported, but converting an existing table to incremental is not.
Destination writes are full replacements, not merges, and validation gaps surface as a generic publishing error.

If the Lakehouse table was originally created by a non-incremental load, Fabric cannot convert it into an incremental table when IR is later enabled. The destination must be explicitly configured (Lakehouse, schema mode, and mapping must be defined before publishing).
Even when Incremental Refresh is enabled, Dataflow Gen2 rewrites the destination table during refresh.
It does not support transactional append/upsert semantics like Spark or Warehouse pipelines.

Recommended workaround:

Recreate the destination table
Start with a clean table created by the incremental-enabled dataflow. Ensure Incremental Refresh is configured, then publish the dataflow again so Fabric creates the table with the correct metadata. Do not enable IR on top of an already-loaded table, and ensure the incremental filter is the first transformation in the query.
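On the "incremental filter first" point: in Dataflow Gen2 the refresh window is expressed in Power Query M over range parameters, and each refresh only loads rows falling inside that window. As a language-neutral sketch of the filtering rule (the column name and parameter names are hypothetical illustrations, not the actual M code):

```python
from datetime import datetime

def incremental_window(rows, range_start, range_end, column="ModifiedAt"):
    """Keep only rows inside the half-open refresh window [range_start, range_end).

    This mirrors a RangeStart <= x < RangeEnd filter applied as the first
    transformation: only one bound is inclusive, so a row sitting exactly on
    a partition boundary is never loaded twice by adjacent refreshes.
    """
    return [r for r in rows if range_start <= r[column] < range_end]

rows = [
    {"ModifiedAt": datetime(2026, 1, 1), "v": 1},
    {"ModifiedAt": datetime(2026, 2, 1), "v": 2},
    {"ModifiedAt": datetime(2026, 3, 1), "v": 3},
]
print(incremental_window(rows, datetime(2026, 2, 1), datetime(2026, 3, 1)))
# keeps only the v=2 row: 2026-02-01 is inside the window, 2026-03-01 is not
```

Placing this filter first matters because it must fold back to the source query; transformations applied before it can block folding and cause validation to reject the incremental configuration.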

Hope this helps.
Thank you.

Thanks for the advice. I tried creating a new Lakehouse and running with Incremental Refresh configured from the beginning. I still encounter the same error.
