Given the following scenario:
1. Create a table in Databricks with column mapping enabled.
```python
%python
spark.conf.set("spark.databricks.delta.properties.defaults.minWriterVersion", 5)
spark.conf.set("spark.databricks.delta.properties.defaults.minReaderVersion", 2)
spark.conf.set("spark.databricks.delta.properties.defaults.columnMapping.mode", "name")
```
2. Add shortcut tables from Databricks to Synapse Lakehouse. The data shows correctly.
3. Go to the SQL Endpoint. The tables fail to load, with the error: "Corrective Action: Recreate the table without the column mapping property."
4. This has a knock-on effect: data fails to feed into the datasets and Power BI.
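For anyone hitting the same error, a quick way to confirm whether a table actually has column mapping enabled is to look at its first Delta transaction log commit, where the `protocol` and `metaData` actions record the reader/writer versions and the `delta.columnMapping.mode` property. A minimal sketch, using a trimmed, hypothetical commit string in place of a real `_delta_log/00000000000000000000.json` file:

```python
import json

# A Delta commit file is a series of JSON lines. The first commit carries the
# "protocol" and "metaData" actions; table properties such as
# delta.columnMapping.mode live under metaData.configuration.
# SAMPLE_COMMIT is a trimmed, hypothetical example for illustration.
SAMPLE_COMMIT = "\n".join([
    json.dumps({"protocol": {"minReaderVersion": 2, "minWriterVersion": 5}}),
    json.dumps({"metaData": {
        "id": "00000000-0000-0000-0000-000000000000",
        "format": {"provider": "parquet"},
        "configuration": {"delta.columnMapping.mode": "name"},
    }}),
])

def column_mapping_mode(commit_text: str) -> str:
    """Return the table's column mapping mode, or 'none' when unset."""
    for line in commit_text.splitlines():
        action = json.loads(line)
        meta = action.get("metaData")
        if meta:
            return meta.get("configuration", {}).get(
                "delta.columnMapping.mode", "none")
    return "none"

print(column_mapping_mode(SAMPLE_COMMIT))  # -> name
```

If this reports `name` (or `id`), that table will trigger the SQL Endpoint error above until it is recreated without the property.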
Is the SQL Endpoint for Lakehouse going to support name column mapping mode?
Thanks
+1 vote for the idea / fix
I get the same error, using Databricks with Unity Catalog enabled. I think Fabric needs to update its Delta version?
Exactly. The Spark engine (Lakehouse) can read it fine; it's the SQL Endpoint that is not working.
It's also a bit of a pain that Unity uses GUIDs for its storage layout. Not a massive problem, but it would be super nice if Synapse Lakehouse shortcuts could read the Unity Catalog metadata so we could just click and add tables.