Alaahady
Frequent Visitor

Upsert data to Lakehouse Table using copy data

I need to transfer a table from an on-premises SQL Server to the Lakehouse using an incremental update (upsert), but it failed with: "Operation on target failed. Upsert is not a supported table action for Lakehouse Table."
What are my other options to create an incremental refresh with no/low code?

12 REPLIES
v-dineshya
Community Support

Hi @Alaahady ,

Thank you for reaching out to the Microsoft Community Forum.

 

You are trying to perform an incremental upsert from an on-premises SQL Server to a Lakehouse table using the Copy activity, but encountered the error "Upsert is not a supported table action for Lakehouse Table."

 

Please try the options below to resolve the issue.

 

1. Upsert into Lakehouse tables is not supported directly via the Copy activity or Dataflow Gen2. You can load the data into a staging table (Fabric SQL), use Dataflow Gen2 to manage the incremental logic, and then merge into the Lakehouse using a notebook or pipeline.

 

2. Use notebooks for custom upsert logic: load the data into the Lakehouse using the Copy activity, then use a Spark notebook to compare and merge the data (upsert logic; see the sketch after this list).

 

Note: This allows schema evolution and conditional updates.

 

3. Compare schemas in a Spark notebook, write the changes to a staging configuration (CSV or SQL table), and use a Fabric pipeline to read the changes and execute ALTER TABLE or MERGE statements in the Warehouse.

 

4. Use Power BI Premium with hybrid tables: configure incremental refresh with real-time data and check that DirectQuery is enabled for the recent partitions.
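As a rough illustration of the notebook merge step in options 2 and 3, here is a minimal sketch, assuming the Copy activity has already landed the latest source rows in a staging Lakehouse table (the table names staging_customers and dim_customers and the key column CustomerID are hypothetical):

# Minimal sketch of the notebook upsert step in a Fabric Spark notebook (all names are hypothetical)
# Assumes the Copy activity has already loaded the latest source rows
# into a staging Lakehouse table called "staging_customers"
spark.sql("""
    MERGE INTO dim_customers AS t
    USING staging_customers AS s
    ON t.CustomerID = s.CustomerID
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")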

 

Please refer to this community thread.

Solved: Incremental refresh og selected refresh using data... - Microsoft Fabric Community

 

I hope this information helps. Please do let us know if you have any further queries.

 

Regards,

Dinesh

Hi @Alaahady ,

We haven't heard from you since the last response and were just checking back to see if you have a resolution yet. If you have any further queries, do let us know.

 

Regards,

Dinesh

Hi Dinesh,

Thank you for your recommendation. Your suggestion is great, but I was looking for a low-code/no-code solution.

Alaa

Hi @Alaahady ,

You need to transfer a table from an on-premises SQL Server to the Lakehouse using an incremental update (upsert). I have provided all the possible workarounds, but you are expecting low code/no code. You can try the "Copy job" in Data Factory; it makes it easy to move data from your source to your destination without creating a pipeline.

 

Please refer below links.

Simplifying Data Ingestion with Copy Job: Upsert to Azure SQL Database & Overwrite to Fabric Lakehou...

How to create a Copy job in Data Factory - Microsoft Fabric | Microsoft Learn

 

I hope this information helps. Please do let us know if you have any further queries.

 

Regards,

Dinesh

Hi @Alaahady ,

We haven't heard from you since the last response and were just checking back to see if you have a resolution yet. If you have any further queries, do let us know.

 

Regards,

Dinesh


spaceman127
Helper I

I have carried out some more tests.

 

Upsert in the Copy activity is very sensitive to the table involved, and errors are sometimes not reproducible.

 

In my first test it worked wonderfully. Afterwards, with the same settings, it no longer worked.

As mentioned before, this feature is still in preview.

 

I would also go with notebooks, as BhaveshPatel already wrote and as I mentioned in my first answer.

 

Best regards

BhaveshPatel
Community Champion

On-premises SQL Server has no incremental refresh option. Move to a Spark DataFrame (Data Lake) and Delta Lake (Lakehouse) in the cloud; then you often don't even need incremental refresh (just overwrite). Incremental refresh does exist in Spark, but it is really only worthwhile for large tables (millions of rows).

 

# Notebooks

# Writing a Spark DataFrame to a Lakehouse Delta table, replacing its contents
sdf.write.format("delta").mode("overwrite").saveAsTable("DimTable")
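
For the upsert case the original question asks about, the same DataFrame could instead be merged into an existing Delta table with the Delta Lake Python API. A minimal sketch, assuming the target table DimTable already exists and Id is its business key (both names are hypothetical):

# Sketch of an upsert (merge) into an existing Lakehouse Delta table
# "DimTable" and the key column "Id" are hypothetical placeholders
from delta.tables import DeltaTable

target = DeltaTable.forName(spark, "DimTable")
(
    target.alias("t")
    .merge(sdf.alias("s"), "t.Id = s.Id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)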

Thanks & Regards,
Bhavesh

Love the Self Service BI.
Please use the 'Mark as answer' link to mark a post that answers your question. If you find a reply helpful, please remember to give Kudos.
chetanhiwale
Helper III

Hi @Alaahady , 

@spaceman127 has given a Copy activity approach which is quite useful, and you might be able to use it in your solution. On the other hand, the Copy Job Fabric item can also be used for the upsert operation, although a Copy Job item can't be included in a Data pipeline as an activity, which is a limitation for now.


spaceman127
Helper I

Ok.

Another option would be to use the copy activity. However, this is also in Preview.

I tested this option some time ago and it worked perfectly.

Perhaps you can also check this in your case.

 

However, if you have already done this, then your configuration of the copy activity would be helpful.

 

Here is a screenshot.

 

[Screenshot: Copy activity configuration]

 

 

spaceman127
Helper I

Hello @Alaahady,

 

Here is an example of how you could do this with Dataflow Gen2. However, it is currently only partially available, in preview.

That might help.


https://learn.microsoft.com/en-us/fabric/data-factory/dataflow-gen2-incremental-refresh

 

Another option would be to load the data into the lakehouse and then perform an update using a notebook.

However, this naturally requires a little extra effort.

 

Best regards 

Thank you Spaceman, I may call you Superman for the prompt response.

I tried this option but I got the following message:

 

Incremental refresh

⚠️ It is not supported to setup incremental refresh on a query when:

it uses a default destination
or it uses a data destination that is not supported
Please update the data destination to a supported data destination (Fabric Warehouse, Fabric SQL, and Azure SQL) before setting up incremental refresh.

 

 
