Hi everyone,
I'm facing a challenging issue while loading Delta tables into a Microsoft Fabric Warehouse, and I’m hoping to get expert insights on whether this is expected behavior, a modeling limitation, or a bug.
🟦 Scenario
I have several Delta/Parquet tables in a Lakehouse (Bronze layer) that were ingested from an external system.
These tables contain schema variations such as:
Columns evolving from STRING → INTEGER
Nullability changes
Schema drift handled automatically by Delta
Columns with mixed types (e.g., "123", "0045", "A23")
I’m now trying to load these Delta tables into a Fabric Warehouse (SQL endpoint) using:
Dataflow Gen2
Pipelines (Copy activity)
Direct “Load to Warehouse” from Lakehouse
🟥 Problem
Every ingestion attempt fails with type conflicts, especially when the source Delta column contains inconsistent types.
Examples of errors:
Cannot convert value "A23" to INT
Column type STRING is not compatible with target type NVARCHAR(50)
Detected column type mismatch across Delta files: INT vs STRING
Even worse:
Fabric Warehouse auto-detection sometimes sets the destination column to INT even when only 90% of the values are integers and the remaining 10% are alphanumeric.
This prevents ingestion unless we manually sanitize all the Bronze data first — which defeats the purpose of schema-on-read.
🟩 What I’ve tried
Setting target columns to NVARCHAR(MAX) → still fails if some Delta parquet files infer a conflicting type
Using a custom Dataflow schema to override types → ingestion still fails at the warehouse level
Creating a staging Warehouse with relaxed types → same behavior
Disabling schema validation (not available)
Manually coercing types in Lakehouse using PySpark before ingestion → works, but is expensive and not scalable (a sketch of this step is shown below)
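For illustration, this is roughly the kind of safe-cast step I mean in the last point above; it is only a minimal sketch assuming a Fabric notebook attached to the Lakehouse, and the table and column names (bronze_orders, order_code, silver_orders) are placeholders:

```python
# Minimal sketch, assuming a Fabric notebook attached to the Lakehouse
# (`spark` is the notebook's built-in SparkSession).
# Table paths and column names are placeholders for the real Bronze/Silver tables.
from pyspark.sql import functions as F

df = spark.read.format("delta").load("Tables/bronze_orders")

# try_cast returns NULL instead of raising an error for values like "A23",
# so mixed-type columns no longer break the load; the raw string is kept alongside.
df_clean = (
    df.withColumn("order_code_raw", F.col("order_code").cast("string"))
      .withColumn("order_code_int", F.expr("try_cast(order_code AS INT)"))
)

(df_clean.write
    .format("delta")
    .mode("overwrite")
    .option("overwriteSchema", "true")
    .save("Tables/silver_orders"))
```

This works per table, but it means maintaining cast logic for every Bronze table, which is exactly the scalability concern.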
❓ Questions
Does Fabric Warehouse require strictly consistent column types across all Delta files, even when the destination column is text?
Is there a recommended pattern for handling schema drift or mixed types when loading Lakehouse → Warehouse?
Is this limitation planned to be improved, considering most Bronze layers contain semi-structured or inconsistent data?
Is the only reliable workaround coercing and cleaning data upstream before Warehouse ingestion?
Has anyone successfully automated a safe cast pipeline for this scenario without manual intervention?
Any insights would be greatly appreciated, as this directly affects our Bronze → Silver → DWH pipeline design.
Thanks in advance!
Hi @SavioFerraz,
Instead of NVARCHAR, try just VARCHAR.
NVARCHAR is not supported in Fabric Warehouse; see Data Types in Fabric Data Warehouse - Microsoft Fabric | Microsoft Learn.
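For illustration only, a rough sketch of creating the destination table with VARCHAR columns from Python via pyodbc; the connection string, table, and column names below are placeholders, not a tested setup:

```python
# Sketch only: the connection string, table, and column names are placeholders.
import pyodbc

conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<warehouse-sql-endpoint>;"       # placeholder endpoint
    "Database=<warehouse-name>;"             # placeholder warehouse
    "Authentication=ActiveDirectoryInteractive"
)

ddl = """
CREATE TABLE dbo.orders_staging (
    order_code     VARCHAR(50),   -- VARCHAR instead of NVARCHAR
    order_code_int INT,
    ingested_at    DATETIME2(6)
);
"""

with pyodbc.connect(conn_str) as conn:
    conn.execute(ddl)
    conn.commit()
```

The same DDL can also be run directly in the Warehouse SQL query editor.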
If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, mark this post as the solution.
Hi @SavioFerraz,
Thank you for posting your query in the Microsoft Fabric Community Forum, and thanks to @tayloramy for sharing valuable insights.
Could you please confirm if your query has been resolved by the provided solutions? This would be helpful for other members who may encounter similar issues.
Thank you for being part of the Microsoft Fabric Community.