Given the following scenario:
1. Create a table in Databricks with column mapping enabled (a per-table TBLPROPERTIES variant is sketched after this list).
%python
# Session defaults so that new Delta tables are created with column mapping
# enabled ('name' mode) and the reader/writer protocol versions it requires.
spark.conf.set("spark.databricks.delta.properties.defaults.minWriterVersion", 5)
spark.conf.set("spark.databricks.delta.properties.defaults.minReaderVersion", 2)
spark.conf.set("spark.databricks.delta.properties.defaults.columnMapping.mode", "name")
2. Add shortcuts to the Databricks tables in the Synapse Lakehouse. The data displays correctly.
3. Go to the SQL Endpoint. The tables fail to load, and the error's suggested corrective action is to recreate the table without the column mapping property (a rewrite workaround is sketched after this list).
4. This has a knock-on effect: data fails to feed into the datasets and Power BI.
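For step 1, the same result can also be set per table rather than via session defaults. A minimal sketch, assuming hypothetical catalog/schema/table names:
%python
# Hypothetical names; sets the same properties as the session defaults in
# step 1, but explicitly on the table itself.
spark.sql("""
    CREATE TABLE IF NOT EXISTS my_catalog.my_schema.my_table (
        id BIGINT,
        customer_name STRING
    )
    USING DELTA
    TBLPROPERTIES (
        'delta.minReaderVersion' = '2',
        'delta.minWriterVersion' = '5',
        'delta.columnMapping.mode' = 'name'
    )
""")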
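For step 3, the only stopgap I've found is the corrective action itself: rewrite the data into a new Delta table that never had column mapping enabled and point the shortcut at that. A rough sketch, with placeholder table names:
%python
# Hypothetical workaround: rewrite the column-mapped table into a new Delta
# table with column mapping left at the default ('none') so the SQL Endpoint
# can read it. Table names are placeholders.
spark.conf.set("spark.databricks.delta.properties.defaults.columnMapping.mode", "none")

(spark.table("my_catalog.my_schema.my_table")
    .write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("my_catalog.my_schema.my_table_no_mapping"))
Obviously this gives up the benefits of column mapping (renamed/dropped columns), so it's a workaround rather than a fix.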
Is the SQL Endpoint for the Lakehouse going to support the 'name' column mapping mode?
Thanks
+1 vote for the idea / fix
I get the same error - using Databricks with Unity Catalog enabled. I think Fabric needs to update its Delta version?
Exactly. The Spark engine (Lakehouse) can read it fine; it's the SQL Endpoint that is not working.
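In case it helps anyone confirm the cause on their own tables, this is the check I run from a notebook (table name is a placeholder, not the real one):
%python
# Placeholder table name: DESCRIBE DETAIL shows the Delta protocol versions
# and table properties, including delta.columnMapping.mode, which is what the
# SQL Endpoint appears to choke on.
detail = spark.sql("DESCRIBE DETAIL my_lakehouse.my_shortcut_table").collect()[0]
print(detail["minReaderVersion"], detail["minWriterVersion"])
print(detail["properties"])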
It's also a bit of a pain that Unity Catalog uses GUIDs for its storage layout. Not a massive problem, but it would be super nice if Synapse Lakehouse shortcuts could read the Unity Catalog metadata so we could just click and add tables.