Given the following scenario:
1. Create a table in Databricks with column mapping enabled (an explicit CREATE TABLE equivalent is sketched after this list):
%python
# Session defaults so that new Delta tables get the protocol versions and
# column mapping mode required for 'name' mapping.
spark.conf.set("spark.databricks.delta.properties.defaults.minWriterVersion", 5)
spark.conf.set("spark.databricks.delta.properties.defaults.minReaderVersion", 2)
spark.conf.set("spark.databricks.delta.properties.defaults.columnMapping.mode", "name")
2. Add shortcut tables from Databricks to the Synapse Lakehouse. The data displays correctly.
3. Go to the SQL Endpoint. The tables fail to load with an error whose corrective action reads: recreate the table without the column mapping property.
4. This has a knock-on effect: data fails to feed into the downstream datasets and Power BI.
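For anyone reproducing this, here is a minimal sketch of step 1 done explicitly per table, plus the suggested workaround of rewriting the table without the property. The table names and schema (demo_mapped, demo_plain) are hypothetical, and the workaround assumes the session defaults above are cleared first:
%python
# Hypothetical repro: same column mapping setup as the session defaults in
# step 1, but declared per table via TBLPROPERTIES.
spark.sql("""
    CREATE TABLE demo_mapped (id INT, customer_name STRING)
    USING DELTA
    TBLPROPERTIES (
        'delta.minReaderVersion' = '2',
        'delta.minWriterVersion' = '5',
        'delta.columnMapping.mode' = 'name'
    )
""")

# Suggested workaround: clear the session defaults, then rewrite the data into
# a new table so it is created without the column mapping property.
spark.conf.unset("spark.databricks.delta.properties.defaults.columnMapping.mode")
spark.conf.unset("spark.databricks.delta.properties.defaults.minWriterVersion")
spark.conf.unset("spark.databricks.delta.properties.defaults.minReaderVersion")
spark.sql("CREATE TABLE demo_plain USING DELTA AS SELECT * FROM demo_mapped")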
Is the SQL Endpoint for the Lakehouse going to support the 'name' column mapping mode?
Thanks
+1 vote for the idea / fix
I get the same error when using Databricks with Unity Catalog enabled. I think Fabric needs to update its supported Delta version?
Exactly. The Spark engine (Lakehouse) can read it fine; it's the SQL Endpoint that is not working.
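To confirm from the Spark side which protocol versions and column mapping setting the table actually carries, something like this can be run in a notebook (the table name demo_mapped is just a placeholder):
%python
# Inspect the Delta table detail: protocol versions plus the column mapping
# property, which is what the SQL Endpoint appears to reject.
detail = spark.sql("DESCRIBE DETAIL demo_mapped").collect()[0]
print(detail.minReaderVersion, detail.minWriterVersion)
print(detail.properties.get("delta.columnMapping.mode"))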
It's also a bit of a pain that Unity Catalog uses GUIDs for its storage layout. Not a massive problem, but it would be super nice if Synapse Lakehouse shortcuts could read the Unity Catalog metadata so we could just click and add tables.
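In the meantime, a rough sketch for matching table names to their GUID-named storage folders so shortcuts can be created by hand (the catalog and schema main.sales are assumptions):
%python
# Print each table's underlying storage location (the GUID-named folder)
# so it can be pointed at manually when creating a shortcut.
for t in spark.sql("SHOW TABLES IN main.sales").collect():
    loc = spark.sql(f"DESCRIBE DETAIL main.sales.{t.tableName}").collect()[0].location
    print(t.tableName, "->", loc)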