
dhorrall
Helper I

How to handle schema drift?

In legacy Data Factory there were options to explicitly allow schema drift. I do not see that in Fabric. Am I missing it?

For example:

  1. I was doing some ad hoc testing, loading historical blob text files into a 'table' in Fabric to kick the tires.
  2. I had created some intermediary Parquet files from the text files to get practice with that.
  3. I then attempted to load those Parquet files into a 'table' and got the error 'Source column is not defined in delta meta data'.
  4. Obviously this is because the files have columns that evolved over time.

I don't see a straightforward way to handle this.
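One workaround, sketched below, is to do the load from a Fabric notebook and let Delta Lake's mergeSchema option absorb the evolved columns. This is a minimal sketch, not the pipeline-based answer the thread is asking for; the path "Files/staging/*.parquet" and table name "landing_table" are placeholders.

```python
# Minimal sketch, assuming a Fabric notebook attached to the Lakehouse.
# "Files/staging/*.parquet" and "landing_table" are placeholder names.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# mergeSchema on read unions the column sets of Parquet files whose
# schemas evolved over time, instead of failing on the first mismatch.
df = spark.read.option("mergeSchema", "true").parquet("Files/staging/*.parquet")

# mergeSchema on write lets Delta add the new columns to the table,
# rather than raising "Source column is not defined in delta meta data".
(df.write
   .format("delta")
   .option("mergeSchema", "true")
   .mode("append")
   .saveAsTable("landing_table"))
```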

8 REPLIES
eldpbi
Frequent Visitor

Consider this scenario: copying multiple CSV files from OneLake to a Data Warehouse within a Fabric capacity. The CSVs have different schemas, and the tables in the Data Warehouse have to be auto-created (which requires schema drift to be on). Since the schemas differ, one cannot provide them in the "Mapping" section of the Copy activity, yet the current Fabric pipelines still ask for that.

For more context: GetMetadata gets all the child items in the OneLake folder, and ForEach runs the Copy data activity on each child item.

(Screenshot: the GetMetadata → ForEach → Copy data pipeline.)
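For reference, here is a rough notebook equivalent of that GetMetadata + ForEach + Copy pattern. It is only a sketch under assumptions: it targets Lakehouse Delta tables rather than the Warehouse, and the folder path and table naming are made up for illustration.

```python
# Rough notebook equivalent of GetMetadata + ForEach + Copy, assuming the
# CSVs land under Files/incoming in the default Lakehouse. Writes Lakehouse
# Delta tables, not Warehouse tables.
import os
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

incoming = "/lakehouse/default/Files/incoming"  # placeholder mount path
for name in os.listdir(incoming):
    if not name.endswith(".csv"):
        continue
    # Infer each file's schema independently, so per-file schema
    # differences never have to be declared in a Mapping section.
    df = (spark.read
            .option("header", "true")
            .option("inferSchema", "true")
            .csv(f"Files/incoming/{name}"))
    # Auto-create one table per file, named after the file (illustrative;
    # real file names may need sanitizing to be valid table names).
    table = os.path.splitext(name)[0]
    df.write.format("delta").mode("overwrite").saveAsTable(table)
```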


@ajarora @GraceGu @haha 



Jreed_7474
New Member

Any update on whether schema drift will be added in the future? Especially since it looks like that functionality exists in ADF.

GraceGu
Microsoft Employee

I assume the ask is for the explicit schema mapping that exists in the ADF copy activity today. Editing the mapping for a Lakehouse destination will be coming in 1-2 months. @dhorrall, which destination are you looking for?

dhorrall
Helper I

Probably all of the above. The current Data Factory has a checkbox to handle this; I see nothing like it in Fabric. That was the basis of my question.

ajarora
Microsoft Employee

What you are referring to is possible through Azure Data Factory Mapping Data Flows, but those are not available in Fabric. Perhaps you want to try Fabric Dataflows and see whether they work for you as-is?

In terms of what the copy activity allows: if your destination table already exists and the data you are writing is missing a column, that column is defaulted to null (the default value) when writing to the destination. If there is a new column, or if a column cannot be typecast to the destination type, the row is treated as a bad row, and you can either skip writing it (and log it to temporary storage to be processed later) or fail the operation (the default).
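A sketch of that alignment behavior expressed in PySpark, for anyone who wants to approximate it in a notebook. "dest_table" and the staging path are placeholders, and note this is looser than the copy activity's actual bad-row handling: extra columns are dropped rather than rows being skipped.

```python
# Sketch of the alignment behavior described above, against a hypothetical
# existing Delta table "dest_table". The staging path is a placeholder.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, lit

spark = SparkSession.builder.getOrCreate()

dest_schema = spark.table("dest_table").schema
incoming = spark.read.parquet("Files/staging/batch.parquet")  # placeholder

# Columns the destination has but the source lacks: default them to null.
for field in dest_schema:
    if field.name not in incoming.columns:
        incoming = incoming.withColumn(field.name, lit(None).cast(field.dataType))

# Keep only destination columns, cast to the destination types. Note the
# differences from the copy activity: extra source columns are silently
# dropped, and a failed cast yields null rather than a skipped bad row.
aligned = incoming.select([col(f.name).cast(f.dataType) for f in dest_schema])

aligned.write.format("delta").mode("append").saveAsTable("dest_table")
```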

Anonymous
Not applicable

Is ADF Mapping Data Flows coming to Fabric?
We have the same kind of requirement with JSON files as the source, evolving with new attributes, so we need schema drift to be available.

Any answer on this? We have the same requirements and need pipelines that can handle schema drift as they do under ADF.
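For the JSON case specifically, a notebook-based sketch may serve as a stopgap while pipeline-level schema drift is unavailable. Paths and the table name below are placeholders.

```python
# Sketch for evolving JSON, assuming a Fabric notebook is an acceptable
# stand-in while pipelines lack schema drift. Path and table are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# spark.read.json infers one schema across all files, so attributes that
# only appear in newer files simply become nullable columns.
df = spark.read.json("Files/raw/events/*.json")

# Delta's mergeSchema then adds any new columns to the target table.
(df.write
   .format("delta")
   .option("mergeSchema", "true")
   .mode("append")
   .saveAsTable("events"))
```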

ajarora
Microsoft Employee

How do you expect the schema variation to take effect?

There are several possible situations: a column added, a column dropped, or a column type changed.
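If it helps frame the discussion, here is a small helper that classifies those three cases given two PySpark schemas. The function name and signature are purely illustrative.

```python
# Tiny helper classifying the three drift cases above, given the old and
# new schemas as PySpark StructTypes. Purely illustrative.
from pyspark.sql.types import StructType

def diff_schemas(old: StructType, new: StructType):
    old_cols = {f.name: f.dataType for f in old}
    new_cols = {f.name: f.dataType for f in new}
    added = sorted(set(new_cols) - set(old_cols))
    dropped = sorted(set(old_cols) - set(new_cols))
    retyped = sorted(name for name in set(old_cols) & set(new_cols)
                     if old_cols[name] != new_cols[name])
    return added, dropped, retyped
```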
