Hi,
I've dropped a column from a lakehouse table using this Spark SQL code:
%%sql
ALTER TABLE myTable SET TBLPROPERTIES (
  'delta.columnMapping.mode' = 'name',
  'delta.minReaderVersion' = '2',
  'delta.minWriterVersion' = '5'
)

%%sql
ALTER TABLE myTable DROP COLUMN myColumn
After refreshing the lakehouse, the dropped column correctly no longer appears in the Explorer, but the SQL analytics endpoint keeps showing it, even after further lakehouse and table refreshes. In SSMS I also still see the dropped column.
This is inconsistent behaviour.
Any suggestions, please? Thanks
Hi @pmscorca ,
First, please confirm that your SQL analytics endpoint and the lakehouse you are connected to in SSMS are the correct ones. One possible cause is that you are connected to the wrong lakehouse.
In SSMS, verify that the connection string points to the Lakehouse that contains the table from which you dropped the column.
If the connection is correct, please follow the steps below:
Sometimes the metadata needs to be refreshed. You can run the following command to refresh the table metadata:
REFRESH TABLE myTable
You can then clear the cache with the following command:
CLEAR CACHE
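As a minimal sketch, the same two commands can also be run from PySpark in a notebook attached to the lakehouse (assuming the table is still named myTable):

# Reload the table's cached metadata, then drop everything from the Spark cache
spark.sql("REFRESH TABLE myTable")
spark.catalog.clearCache()  # equivalent of the CLEAR CACHE SQL statement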
Try these methods.
Best Regards,
Yang
Community Support Team
If any post helps, please consider accepting it as the solution to help other members find it more quickly.
If I have misunderstood your needs or you still have problems, please feel free to let us know. Thanks a lot!
When you enable column mapping, the sync between Spark and the SQL Analytics Endpoint seems to stop working.
The way I was able to drop columns and still keep the SQL Analytics Endpoint working was with this code:
spark.read.table("tableName")\
.drop("columName")\
.write\
.mode("overwrite")\
.option("overwriteSchema", "true")\
.saveAsTable("tableName")
Here I did not enable column mapping.
At least this worked for me when I tried it a couple of months ago.
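As a quick sanity check (a sketch using the same placeholder names tableName and columnName from the snippet above), you can confirm on the Spark side that the column is really gone before looking at the SQL analytics endpoint:

# Read the rewritten table and confirm the dropped column is no longer in the schema
df = spark.read.table("tableName")
df.printSchema()
print("columnName" in df.columns)  # expected: False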
Hi,
I had already tried refreshing the lakehouse and the table, both in the Explorer after switching to the SQL analytics endpoint view and in SSMS, without success.
I then ran the REFRESH TABLE and CLEAR CACHE commands; it is important to note that these are Spark SQL commands.
The CLEAR CACHE command solved the issue, and the SQL analytics endpoint now shows a consistent situation, but I still think there is a small bug in how the DROP COLUMN statement is handled.
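For anyone hitting the same issue, a small sketch of the part that resolved it here; note that these statements run on the Spark side in a notebook, not in the SQL analytics endpoint or SSMS:

# Spark SQL commands, run from a notebook attached to the lakehouse
spark.sql("REFRESH TABLE myTable")  # refresh the table metadata (table name from the original post)
spark.sql("CLEAR CACHE")            # the command that resolved the stale column in the SQL analytics endpoint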