AnmolGan81
Helper I

TIME datatype issue while migrating from Synapse to Fabric Warehouse

I am encountering an issue while migrating data from a Synapse dedicated SQL pool to a Fabric Warehouse. One of my tables contains a TIME(2) datatype at both source and destination. When we try to migrate it using a Fabric Data Pipeline, we get an error related to "TIME is not compatible with external data type 'Parquet physical type: INT64', please try with BIGINT". When I try to convert it at the source to something like VARCHAR(20), I then get a data conversion error.

 

Has anyone faced an issue like this while performing a migration? What is the best option here for migrating time-related data objects?

8 REPLIES
v-mdharahman
Community Support

Hi @AnmolGan81,

Thanks for reaching out to the Microsoft Fabric community forum and for sharing the details. Yes, this is a known friction point when working with TIME datatypes in Fabric pipelines, especially when moving data through Parquet-based internals. To resolve this, here's what usually works:

* Convert the TIME(2) column to VARCHAR(8) or VARCHAR(12) at the source before pushing it through the pipeline. Use a consistent format like 'HH:MM:SS' or 'HH:MM:SS.FF' depending on your precision needs. For example, in your SELECT query from Synapse: CONVERT(VARCHAR(12), your_time_column, 114) AS formatted_time. Style 114 gives the time as hh:mi:ss:mmm, which avoids the conversion issue (a fuller source-query sketch follows the steps below).

* At the destination (Fabric Warehouse), either keep it as VARCHAR if the time is used for display/logging, or convert it back to TIME(2) using a computed column or post-processing step.

* If your downstream model needs the actual time type, consider landing the data as VARCHAR first, then running a SQL statement in Fabric to transform it back.

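-- Adds a computed column that casts the landed VARCHAR value back to TIME(2):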
ALTER TABLE your_table
ADD new_time_column AS CAST(formatted_time AS TIME(2));
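
For illustration, here is a minimal sketch of the source-side query from the first bullet; the table and column names are placeholders for your own:

-- Source query for the Copy Activity: the TIME(2) column travels as VARCHAR(12).
SELECT
    col1,
    col2,
    CONVERT(VARCHAR(12), your_time_column, 114) AS formatted_time
FROM your_original_table;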

This approach keeps the pipeline flowing without breaking due to type mismatch, and allows flexibility to map it back to the intended type at the destination.

 

If I have misunderstood your needs or you still have problems, please feel free to let us know.

Best Regards,
Hammad.
Community Support Team

Hi, thank you for your response. I have tried the below:

Convert the TIME(2) column to VARCHAR(8) or VARCHAR(12) at the source before pushing it through the pipeline. Use a consistent format like 'HH:MM:SS' or 'HH:MM:SS.FF' depending on your precision needs. For example, in your SELECT query from Synapse: CONVERT(VARCHAR(12), your_time_column, 114) AS formatted_time. Style 114 gives the time as hh:mi:ss:mmm, which avoids the conversion issue.

 

So I tried the above in the copy activity itself at the source side, but it is not able to convert; I get the same error I mentioned before. Can you elaborate more on your answer?

Hi @AnmolGan81,

Even though you're applying the CONVERT in the source query of the Copy Activity, the pipeline still tries to infer the schema from the underlying table, not just from the SELECT output, especially when working with Synapse and Parquet-based movement. This is why the TIME column is still being interpreted as TIME, causing the same error. Here's the approach that works:

* Use a view to force the conversion at the source layer. Go to the Synapse SQL dedicated pool and create a view like this:

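-- The view exposes the TIME(2) column as VARCHAR(12), so the pipeline's inferred schema never contains TIME.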
CREATE VIEW vw_your_table_migration AS
SELECT
    col1,
    col2,
    CONVERT(VARCHAR(12), your_time_column, 114) AS formatted_time
FROM your_original_table;

* In the Fabric Data Pipeline, set your source to this view, not the original table.

* In the Copy Activity, go to the mapping tab and map formatted_time to the VARCHAR(12) column in your Fabric Warehouse.

* If you eventually need to convert it back to TIME(2) at the destination: 

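-- Add a real TIME(2) column in the Fabric Warehouse table, then backfill it: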
ALTER TABLE your_table ADD converted_time_column TIME(2);
UPDATE your_table
SET converted_time_column = TRY_CAST(formatted_time AS TIME(2));


This works because using a view ensures the data type is enforced at the SQL engine level before the pipeline picks it up. This prevents the pipeline from "seeing" the original TIME type, which avoids the Parquet compatibility issue. Casting inside the Copy Activity query alone doesn't override the underlying metadata; the view helps you bypass that.

 

Best Regards,

Hammad.

 

 

It also works if I change it in the source query in the copy activity instead of creating a view. In both of these cases I need to change the datatype at the source, which is something we still need to check is feasible from our end. I was wondering if there are any other ways to do it without changing the datatype during the migration. What if we need to build a sync process that runs on a daily basis? We cannot convert the datatype at the source and change it back each time after the migration. Are there any better solutions? A datatype change is our last option, which we are still exploring.

Hi @AnmolGan81,

Thanks for the clarification and for following up; that makes complete sense, especially if you're aiming for an automated daily sync process. Modifying the source datatype each time (even virtually, through a view or a query cast) isn't ideal for operational stability.

To answer your question, there's currently no way to natively migrate TIME(n) from a Synapse dedicated pool to a Fabric Warehouse using Data Pipelines without transforming it, because Fabric pipelines internally use Parquet as the intermediate format. Parquet doesn't have a direct representation for SQL TIME(n); it maps it to INT64 with logical annotations, and that causes schema mismatches when the destination expects a TIME type in SQL.

So even if the column is technically TIME(2) at source and destination, the Parquet layer in the middle breaks the compatibility.

 

You can check a few possible alternatives which may help in your case:

* Intermediate landing table with VARCHAR (one-time setup). Instead of casting at the source every time, have your pipeline land the data into a staging table in Fabric where the time column is just VARCHAR(12). Then run a post-copy SQL statement (as a second activity) to insert into your final table, casting the value back to TIME(2).

This avoids touching your source, the logic sits entirely in Fabric, and it only needs to be configured once; a sketch follows below.
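
As a minimal sketch of that staging pattern (table names, column names, and types here are hypothetical placeholders):

-- Staging table in Fabric Warehouse: the time value lands as plain VARCHAR.
CREATE TABLE stg_your_table
(
    col1 INT,
    col2 VARCHAR(100),
    formatted_time VARCHAR(12)
);

-- Post-copy step (run as a second pipeline activity after each load):
-- cast the string back to TIME(2) while inserting into the final table.
INSERT INTO final_your_table (col1, col2, your_time_column)
SELECT col1, col2, TRY_CAST(formatted_time AS TIME(2))
FROM stg_your_table;

-- Clear the staging table so the next daily run starts clean.
TRUNCATE TABLE stg_your_table;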

* If your project allows, use Dataflow Gen2 instead of Pipelines for this table. It loads data via native Power Query transformations, so you can handle time column formatting more gracefully and avoid the low-level Parquet serialization issues.

If I have misunderstood your needs or you still have problems, please feel free to let us know.

Best Regards,
Hammad.


Hi @AnmolGan81,

As we haven't heard back from you, we're just following up on our previous message. I'd like to confirm whether you've successfully resolved this issue or if you need further help.

If yes, you are welcome to share your workaround and mark it as a solution so that other users can benefit as well. If you found a reply particularly helpful, it would be great for the community if you could mark it as the solution. If you're still looking for guidance, feel free to give us an update; we're here for you.

 

Best Regards,

Hammad.

Hi @AnmolGan81,
I just wanted to follow up on your thread. If the issue is resolved, it would be great if you could mark the solution so other community members facing similar issues can benefit too. If not, don't hesitate to reach out; we're happy to keep working with you on this.

 

Best Regards,

Hammad.

Hi @AnmolGan81,
We noticed there hasn’t been any recent activity on this thread. If your issue is resolved, marking the correct reply as a solution would be a big help to other community members. If you still need support, just reply here and we’ll pick it up from where we left off.

 

Best Regards,

Hammad.
