Given the following scenario:
1. Create a table in Databricks with column mapping enabled.
%python
spark.conf.set("spark.databricks.delta.properties.defaults.minWriterVersion", 5)
spark.conf.set("spark.databricks.delta.properties.defaults.minReaderVersion", 2)
spark.conf.set("spark.databricks.delta.properties.defaults.columnMapping.mode", "name")
2. Add shortcut tables from Databricks to Synapse Lakehouse. The data shows correctly.
3. Go to the SQL Endpoint. The tables fail to load with an error. Corrective action: recreate the table without the column mapping property.
4. This has a knock-on effect with data failing to feed into the datasets and Power BI.
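For anyone else hitting this, the corrective action in step 3 can be sketched as follows. This is a minimal sketch for a Databricks notebook, assuming a hypothetical table name `my_table`; column mapping cannot simply be unset on an existing Delta table, so the data is rewritten into a new table created without it:

```python
# Clear the session defaults from step 1 so the new table is created
# WITHOUT column mapping (and without the raised protocol versions).
spark.conf.unset("spark.databricks.delta.properties.defaults.columnMapping.mode")
spark.conf.unset("spark.databricks.delta.properties.defaults.minReaderVersion")
spark.conf.unset("spark.databricks.delta.properties.defaults.minWriterVersion")

# Rewrite the data into a new Delta table that the SQL Endpoint can read.
# `my_table` / `my_table_no_mapping` are placeholder names.
(spark.read.table("my_table")
      .write.format("delta")
      .saveAsTable("my_table_no_mapping"))
```

Note this loses the rename/drop-column flexibility that column mapping provides, which is exactly the trade-off being questioned here.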
Is the SQL Endpoint for Lakehouse going to support name column mapping mode?
Thanks
+1 vote for the idea / fix
I get the same error - using Databricks with Unity Catalog enabled. I think Fabric needs to update its Delta version?
Exactly. The Spark engine (Lakehouse) can read it fine; it's the SQL Endpoint that isn't working.
It's also a bit of a pain that Unity Catalog uses GUIDs for its storage layout. Not a massive problem, but it would be super nice if Synapse Lakehouse shortcuts could read the Unity Catalog metadata so we could just click and add tables.