
ErrorCode=DeltaNotSupportedLogicalType

ErrorCode=DeltaNotSupportedLogicalType,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=The logical type TimeSpan is not supported in Delta format. Reason: Cannot find supported delta type for column name CREATED_TIME_TM, logical type TimeSpan,Source=Microsoft.DataTransfer.ClientLibrary,'

Hi,

We are getting the above error when trying to load data from SQL Server into a Lakehouse using the Fabric Copy activity.


We tried the method below but are still getting the same error.

 

Adding the time format in the Type conversion settings under the Mapping tab:

(Screenshot: Type conversion settings with TimeSpan format "hh\:mm\:ss")

 

1 ACCEPTED SOLUTION
v-sgandrathi
Community Support

Hi @SivaReddy24680,

 

In your case, the main challenge is that Delta Lake does not support the SQL time or .NET TimeSpan logical type, and Fabric checks column data types before any formatting or “treat TimeSpan as string” settings are applied. So even with the option to handle TimeSpan fields as strings enabled, the pipeline fails because the actual data type is incompatible with Delta.

Given you have about 150 tables, manually setting up column-level mappings or casting each TimeSpan column individually isn’t practical. Your approach of using a Lookup activity to dynamically identify and filter out columns ending with _SIMP_TM is effective, as the best solution is to exclude unsupported TimeSpan columns from the source query or cast them to a supported type before the data reaches the Lakehouse sink. By dynamically generating the SELECT list and removing all TimeSpan-typed fields based on the _SIMP_TM naming pattern, the Copy activity only receives supported SQL types and can write to Delta successfully without manual mapping. This method streamlines the process for all tables.

If you need the TimeSpan fields in the Lakehouse, you’ll need to cast them to a supported type like VARCHAR in the source query; otherwise, excluding them as you’re doing is the most efficient option.
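A minimal sketch of that dynamic approach, written as plain Python with hypothetical table and column names. In the actual pipeline this logic would live in a Lookup activity plus a pipeline expression; the sketch only illustrates the filtering rule.

```python
# Hypothetical sketch of the dynamic SELECT-list generation described above.
# In a real pipeline, a Lookup activity returns the column metadata and a
# pipeline expression joins the surviving column names.

def build_select(table: str, columns: list[tuple[str, str]]) -> str:
    """Build a SELECT that skips TimeSpan (SQL time) columns.

    columns: (column_name, data_type) pairs, e.g. from INFORMATION_SCHEMA.COLUMNS.
    """
    keep = [name for name, data_type in columns
            if data_type.lower() != "time" and not name.endswith("_SIMP_TM")]
    return f"SELECT {', '.join(keep)} FROM {table}"

# Hypothetical column metadata for one table:
cols = [("ORDER_ID", "int"),
        ("CREATED_SIMP_TM", "time"),
        ("CUSTOMER_NAME", "varchar")]
print(build_select("dbo.Orders", cols))
# SELECT ORDER_ID, CUSTOMER_NAME FROM dbo.Orders
```

The same suffix check (`_SIMP_TM`) is what the Lookup-based filtering relies on, so the naming convention must hold across all 150 tables.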

 

Thank you.


9 REPLIES
v-sgandrathi
Community Support

Hi @SivaReddy24680,

 

We wanted to follow up since we haven't heard back from you regarding our last response. We hope your issue has been resolved.

If you need any further assistance, feel free to reach out.

 

Thank you for being a valued member of the Microsoft Fabric Community Forum!

v-sgandrathi
Community Support

Hi @SivaReddy24680,

 

Thank you for the latest update, and feel free to reach out if you face any other issue related to this topic.

Thank you.


Hi @v-sgandrathi 

 

Thank you for the explanation and suggestions. We will proceed with the current approach of excluding the _SIMP_TM columns. We will try generating a casting query for these TimeSpan fields in case we need to include their values in the Lakehouse in the future.

v-sgandrathi
Community Support

Hi @SivaReddy24680,

 

Thank you @AsgerLB and @tayloramy for your responses to the query.

Just wanted to follow up and confirm that everything has been going well. Please let me know if there’s anything needed from our end, and feel free to reach out on the Microsoft Fabric Community forum.

 

Thank you.

AsgerLB
Advocate I

Hi @SivaReddy24680,

The error happens because Delta Lake (the storage format behind Fabric Lakehouse) does not support a "Time" or "TimeSpan" data type. It only supports Date (YYYY-MM-DD) or Timestamp (YYYY-MM-DD HH:MM:SS).

Your mapping approach fails because Fabric checks data type compatibility before it applies those formatting rules. That is, the "Type conversion" settings shown in the image (TimeSpan format: "hh\:mm\:ss") only tell Fabric how to format the value if it were valid; they do not change the type.

Depending on your situation you can do the following:

Cast the data to a compatible type inside your database (the source), or in transit by adding a custom query in the Source tab of the Copy activity.
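For example, the custom source query could wrap each time column in a CONVERT so the Copy activity receives a string. A sketch of generating such a query (the table and column names are hypothetical):

```python
# Hypothetical sketch: wrap time-typed columns in CONVERT(varchar(8), ...) so
# the Copy activity receives strings instead of unsupported TimeSpan values.

def cast_query(table: str, columns: list[tuple[str, str]]) -> str:
    parts = [f"CONVERT(varchar(8), {name}) AS {name}" if dtype.lower() == "time"
             else name
             for name, dtype in columns]
    return f"SELECT {', '.join(parts)} FROM {table}"

print(cast_query("dbo.Orders", [("ORDER_ID", "int"), ("CREATED_TIME_TM", "time")]))
# SELECT ORDER_ID, CONVERT(varchar(8), CREATED_TIME_TM) AS CREATED_TIME_TM FROM dbo.Orders
```

varchar(8) fits the "hh:mm:ss" shape; use a longer length if the time values carry fractional seconds.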

(Screenshot: custom query in the Copy activity source settings)

 

Alternatively, you can fix it via the mapping, as you also tried in your screenshot. Reimport the schema from the source, find the columns like CREATED_TIME_TM, and look at the destination type (it will most likely show an error). If you can, try changing the data type to Timestamp or Date.

(Screenshot: column mapping with destination types)

 

If you can't, try deleting the mapping and recreating it manually for the troublesome columns with Timestamp or Date.


Br
Asger

Hi @AsgerLB 

 

Since we have around 150 tables, dynamically passing column-level mappings for each table would be difficult. Using a CAST/CONVERT in the source query does help avoid the TimeSpan issue, but in our case we are not selecting the TimeSpan fields at all.
We added a Lookup activity before the Copy activity that excludes the TimeSpan fields and passes the remaining fields for selection.
Since all TimeSpan fields end with _SIMP_TM, we are filtering them out of the selection.
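For reference, the metadata query such a Lookup activity could run might look like this (a sketch; the schema and table names are hypothetical and would be parameterized per table). Note that `_` is a LIKE wildcard in T-SQL, so the suffix needs escaping:

```python
# Hypothetical T-SQL a Lookup activity could run to list the columns to keep.
# Underscore is a LIKE wildcard in T-SQL, hence the ESCAPE clause.
lookup_query = """
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'dbo'
  AND TABLE_NAME = 'Orders'
  AND COLUMN_NAME NOT LIKE '%\\_SIMP\\_TM' ESCAPE '\\'
ORDER BY ORDINAL_POSITION
""".strip()
print(lookup_query)
```

The Lookup output can then be joined into a SELECT list with a pipeline expression before being passed to the Copy activity.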

tayloramy
Community Champion

Hi @SivaReddy24680

 

On SQL Server, what data type is CREATED_TIME_TM?

 

Fabric doesn't support TimeSpan data types. Can you share a screenshot of the Mapping tab so we can see the column mappings? This might be as simple as changing the column type to datetime in the Mapping tab.

 

If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, mark this post as the solution.

Hi @tayloramy 

 

I’m running into an issue with the field CREATED_TIME_TM, which has the TimeSpan data type. Although I have enabled the option in the activity settings to treat TimeSpan fields as strings, the pipeline still fails with a conversion error during execution.

 

We have another option, mapping the fields and data types from the Mapping tab, but we have around 150 tables and some of them have TimeSpan fields.
