I am looking for the proper way to do a one-time migration of our Synapse Analytics system to a Fabric warehouse. I migrated all our schemas as described in the Microsoft blogs, but when I tried to migrate the Synapse tables directly to the warehouse (since it's a one-time migration, we don't want anything fancy), we ran into data truncation issues on various tables. For example, one table had the same schema at source and destination but still gave a "data might be truncated" error. After further analysis we found it was caused by the collation difference between Synapse and the warehouse, and we had to manually change the data lengths for all varchar, char and nvarchar columns.
Is there a way or a tool that can produce the correct data lengths when we deploy the schema to the warehouse, so that we can then run the migration? We have over 200 tables, and we cannot manually change all the nvarchar, char and varchar lengths. Is there any tool, like the Azure Data Migration Assistant or Azure Data Explorer, that can provide the updated data lengths for us?
Please suggest any blogs or posts by Microsoft that could help us here.
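For reference, here is a minimal illustration of what we believe is happening (the sample value, the temp table and the Latin1_General_100_BIN2_UTF8 collation are only for demonstration on a SQL Server 2019+ instance, not our actual data): under the warehouse's UTF-8 collation, varchar lengths count bytes rather than characters, so a value that fits nvarchar(12) in Synapse can overflow varchar(12) in the warehouse.

```sql
-- Illustrative repro only (any SQL Server 2019+ instance).
-- Under a UTF-8 collation (the Fabric warehouse default is Latin1_General_100_BIN2_UTF8),
-- varchar(12) holds 12 BYTES, while nvarchar(12) in Synapse holds 12 CHARACTERS.
CREATE TABLE #utf8_demo
(
    val varchar(12) COLLATE Latin1_General_100_BIN2_UTF8
);

-- 12 characters, but 15 bytes in UTF-8 (è, û and é take 2 bytes each),
-- so this insert fails with "String or binary data would be truncated".
INSERT INTO #utf8_demo VALUES (N'crème brûlée');
```

That matches the pattern we saw: identical declared lengths at source and destination, but the UTF-8 byte length no longer fits the column.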
Hi @AnmolGan81 ,
Have you tried exporting a DACPAC and deploying it to the warehouse? Does it cause the same issue?
Regards,
Srisakthi
The two are different services and support different features; for example, nvarchar is not supported in Fabric Warehouse.
So DACPAC is not the way @Srisakthi
@NandanHegde Yeah it will not convert the data types.
There is a stored procedure-based migration method given in the documentation. Does that approach also cause the issue?
Regards,
Srisakthi
Hi, thanks for sharing the documentation. We actually tried this; the problem with this method is that it straight away increases the length of every nvarchar column to varchar(8000) when it migrates the schema. Even a column defined as nvarchar(5) becomes varchar(8000), which is not something we are looking for.
Hi @AnmolGan81 ,
Would you be able to raise this as an Idea on the Microsoft Fabric Ideas forum? This helps the product team track your requirement and consider adding a tool or process to handle column length migrations more gracefully.
If so, sharing the link here would be helpful for other community members who may have similar feedback.
If we don’t hear back, we’ll go ahead and close this thread. For any further discussions or questions, please start a new thread in the Microsoft Fabric Community Forum; we’ll be happy to assist.
Thank you for being part of the Microsoft Fabric Community.
Thanks for your response; I will surely share this as an idea, since it is a must-have feature that should work properly when doing an assessment before the data migration.
Hi @AnmolGan81 ,
Thank you for the update! Once you have posted the idea in the Ideas forum, please share the link here. This will allow other members experiencing the same issue to upvote it, helping the product team prioritize the feature.
Thank you for your understanding!
I have posted this as an Idea; please vote for it. Thanks for all the help.
Hi @AnmolGan81 ,
Thank you for sharing the link to your posted idea!
This ensures it is visible to other members, who can upvote it to support its prioritization. We appreciate your contribution to improving our product!
Please accept the helpful response or your answer as the solution, so others can easily find it.
Thank you.
Hi @AnmolGan81 ,
Thank you for reaching out to us on the Microsoft Fabric Community Forum. Also, thanks @NandanHegde for the valuable insights on this thread. As NandanHegde mentioned, there is currently no direct tool from Microsoft (like the Data Migration Assistant or ADX) that performs this for you in a fully automated way during your migration.
Additionally, I would suggest posting this as an idea on Microsoft Fabric Ideas. Providing details about your scenario and the problems you are facing might help make a strong case for adding this as a future feature.
Hope this helps. If so, give us kudos and consider accepting it as a solution.
Regards,
Pallavi.
Unfortunately, this is the only way to migrate, short of developing everything from scratch:
https://learn.microsoft.com/en-us/fabric/data-warehouse/migration-assistant
There is no other alternative; in such scenarios, one has to manually deduce the lengths and make the changes accordingly.
You can provide feedback to the MSFT Fabric team on this for further enhancement.
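One way to avoid hand-editing 200+ tables is to script the new lengths from the source catalog. A rough sketch only (assumptions: it runs against the source Synapse dedicated SQL pool, uses a 4-bytes-per-character UTF-8 worst case for nvarchar/nchar, and caps everything at 8000; tune both rules to your actual data profile):

```sql
-- Sketch: suggest Fabric-compatible string column definitions from the source catalog.
-- The 4x multiplier (UTF-8 worst case) and the 8000 cap are assumptions, not fixed rules.
SELECT
    c.TABLE_SCHEMA,
    c.TABLE_NAME,
    c.COLUMN_NAME,
    c.DATA_TYPE                AS source_type,
    c.CHARACTER_MAXIMUM_LENGTH AS source_length,
    CASE
        WHEN c.DATA_TYPE IN ('nvarchar', 'nchar', 'varchar', 'char')
             AND c.CHARACTER_MAXIMUM_LENGTH = -1
            THEN 'varchar(8000)'        -- (max) types: cap at 8000 for illustration
        WHEN c.DATA_TYPE IN ('nvarchar', 'nchar')
            THEN 'varchar(' + CAST(
                     CASE WHEN c.CHARACTER_MAXIMUM_LENGTH * 4 > 8000 THEN 8000
                          ELSE c.CHARACTER_MAXIMUM_LENGTH * 4 END AS varchar(10)) + ')'
        -- existing varchar/char columns are kept as-is here; widen them too if they
        -- hold non-ASCII data, since UTF-8 expands those byte counts as well
        ELSE c.DATA_TYPE + '(' + CAST(c.CHARACTER_MAXIMUM_LENGTH AS varchar(10)) + ')'
    END AS suggested_target_type
FROM INFORMATION_SCHEMA.COLUMNS AS c
WHERE c.DATA_TYPE IN ('varchar', 'char', 'nvarchar', 'nchar')
ORDER BY c.TABLE_SCHEMA, c.TABLE_NAME, c.ORDINAL_POSITION;
```

Even as a plain report this removes the per-column guesswork, and it can be extended to emit full CREATE TABLE statements if needed.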
Well, in that case it is a shame that we need to deduce the data lengths for more than 400 objects manually; it is practically developing everything again from scratch if we have to set the lengths by hand for all string-related data types.