GammaRamma
Frequent Visitor

Cannot overwrite tables with copy activity when change data feed is enabled

The problem is with lakehouse tables on which I have enabled this:

# Enable the Delta change data feed on the target table
spark.sql(f"ALTER TABLE {rawLkhParam}.{tableSchemaParam}.{tableNameParam} SET TBLPROPERTIES ('delta.enableChangeDataFeed' = 'true')")

After this is enabled, if I try to overwrite the table with a copy activity in a data pipeline, I get the following error:
 
Failure happened on 'destination' side. 'Type=System.InvalidOperationException,Message=Cannot downgrade the minimum writer version,Source=Microsoft.DataTransfer.ClientLibrary,'

I can't find any way to downgrade the minimum writer version. I am using schema-enabled lakehouses. I don't want to delete the table before running the pipeline, because then I lose the delta history, whereas with a copy activity overwrite I can keep it.
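For context, this is how I check the table's protocol after enabling the feed (a minimal sketch using the same parameters as above; as I understand it, enabling change data feed raises the table's minimum writer version, which the copy activity then refuses to write against):

# Sketch: inspect the Delta protocol versions on the CDF-enabled table
detail = spark.sql(f"DESCRIBE DETAIL {rawLkhParam}.{tableSchemaParam}.{tableNameParam}")
detail.select("minReaderVersion", "minWriterVersion").show()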
 
Any help is appreciated. Thanks!

2 REPLIES
Anonymous
Not applicable

Hi @GammaRamma,

One way to do this is to use the .mode("overwrite") and .option("overwriteSchema", "true") options when writing data to the table. This overwrites the table (and updates its schema if needed) without deleting it, so the delta history is preserved.

For example:

# Overwrite the existing Delta table in place; its transaction history is kept
dataframe.write \
    .format("delta") \
    .mode("overwrite") \
    .option("overwriteSchema", "true") \
    .partitionBy(<your-partition-columns>) \
    .saveAsTable("<your-table>")
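To confirm that the earlier versions survive the overwrite, you can inspect the table's Delta log afterwards (a small sketch; "<your-table>" is the same placeholder as above):

# Sketch: list the table's Delta history; versions written before the
# overwrite should still appear alongside the new WRITE operation
spark.sql("DESCRIBE HISTORY <your-table>") \
    .select("version", "timestamp", "operation") \
    .show(truncate=False)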

 

Best Regards,
Adamk Kong

If this post helps, then please consider accepting it as the solution to help other members find it more quickly.

Hello @Anonymous, thank you for your response.

I have not tested this solution, but even if it does work, it doesn't really solve my problem the way I want. Using your method, I would have to temporarily create another table from my on-prem source and then use a notebook to load the temp table as a dataframe and overwrite my original table (a rough sketch of this is below).
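For anyone else reading, that two-step workaround would look roughly like this (an untested sketch; the staging and target table names are hypothetical):

# Untested sketch of the workaround: the copy activity loads a staging
# table first, then a notebook overwrites the CDF-enabled target from it
df = spark.read.table("dbo.staging_table")  # hypothetical staging table
df.write \
    .format("delta") \
    .mode("overwrite") \
    .saveAsTable("dbo.target_table")  # hypothetical CDF-enabled target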

In a real bind this could be useful, but it's not an efficient design. I think this is a problem with the copy activity that should be addressed by Microsoft.

Thanks anyway!
