
Copy Job needs more functionality

Experimenting with Copy Job. Yeah, OK, but not great. Some suggestions:

1) Make the OVERWRITE option available for Warehouse destinations. Currently it is available for Lakehouses only, and this article (What is Copy job in Data Factory - Microsoft Fabric | Microsoft Learn) does not mention that limitation. Honestly, how hard is it to program a DROP TABLE IF EXISTS statement into the back-end code? Warehouses support that statement.
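For reference, here is a rough sketch of the overwrite pattern the Warehouse engine already supports in T-SQL. The schema and table names are placeholders, and the CTAS reload is just one illustrative way a tool could repopulate the table:

```sql
-- Drop the destination table if it exists (supported in Fabric Warehouse T-SQL).
-- [dbo].[SalesOrders] and [staging].[SalesOrders] are hypothetical names.
DROP TABLE IF EXISTS [dbo].[SalesOrders];

-- One possible reload step: recreate the table from the freshly copied data
-- using CREATE TABLE AS SELECT.
CREATE TABLE [dbo].[SalesOrders] AS
SELECT *
FROM [staging].[SalesOrders];
```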

 

2) Give the user the ability to DISABLE a table in the list. If I have a list of 50 tables in my job but during development only want to refresh 2 of them, I have to run the whole job. That's a waste of time and resources.

Status: New
Comments
ToddChitt
Super User
(this one is listed under a separate Idea:) 3) Allow the Copy Job INCREMENTAL mode to use a TIMESTAMP or ROWVERSION SQL data type as the watermark column. This data type is ABSOLUTELY the best choice for watermarks. It is: a) immutable, b) never null, c) system managed, d) immune to updates from user processes, and e) maintained by the SQL engine (no triggers or app code required to keep it current). The Azure Data Factory Metadata Driven Copy Wizard is a precursor to the Fabric Copy Job, and even THAT did not support TIMESTAMP or ROWVERSION types as a watermark, but I was able to hack those pipelines and the Control JSON to make it work. Think about it: this is what the data type was BORN to do! It was conceived and delivered by Microsoft in the SQL engine 20 or so years ago, yet the Fabric people seem to shun it. So disappointing!
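As a sketch of what rowversion-based incremental copy looks like in T-SQL (table and column names here are made up for illustration; a real tool would persist the watermark in its own state store):

```sql
-- Illustrative source table with a system-managed rowversion column.
CREATE TABLE dbo.Orders (
    OrderID  int IDENTITY PRIMARY KEY,
    Amount   decimal(10, 2) NOT NULL,
    RV       rowversion      -- bumped by the engine on every insert/update
);

-- Incremental pull: fetch only rows changed since the last stored watermark.
-- @LastWatermark would be loaded from the copy tool's state between runs.
DECLARE @LastWatermark binary(8) = 0x0;
SELECT OrderID, Amount, RV
FROM dbo.Orders
WHERE RV > @LastWatermark;

-- New watermark to persist for the next run.
SELECT MAX(RV) AS NewWatermark FROM dbo.Orders;
```

Because the engine assigns a strictly increasing rowversion on every modification, no triggers or application code are needed to keep the watermark column accurate.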
ToddChitt
Super User

Let's add another one: 4) When copying FROM a SQL Server database TO a Lakehouse (one that already has the dbo schema available), AND the source tables are already IN the dbo schema, please DO NOT default the destination table names to [dbo].[dbo_<table name>]. Yes, I know I can manually edit the destination table names, but it has to be done for EVERY SINGLE table.

Miwa1
Microsoft Employee

Thanks for your feedback! (3) is already supported; you should see it working by the end of Dec 2025. We are evaluating (1) and (2). For (4), during my testing I picked Azure SQL as the source and a Lakehouse (with schema support: https://blog.fabric.microsoft.com/en-US/blog/lakehouse-schemas-generally-available/) as the destination. Both have a dbo schema, and the UI correctly generated the destination table and schema in 2 separate input boxes. Did I miss anything?

ToddChitt
Super User
I think you missed the fact that a Copy Job can also send data to a Lakehouse that does NOT have schema support. Recall that lakehouses with schema support are relatively new, and there are already a LOT of lakehouses out there that do NOT support schemas. Please test again against a legacy lakehouse (NON schema support) and tell me what you get. Is your result acceptable?
Miwa1
Microsoft Employee

I reproduced the behavior that you encountered. I agree that for destination data stores that don't support schema, we should not prefix schema in the auto-generated table name. We will enhance UI and let you know when it's ready, thanks!