Hi,
We have created a Lakehouse with schema support enabled. Then we developed a notebook to save a PySpark DataFrame as a Delta table.
It should be possible; from the error, this looks like a partition mismatch issue rather than a problem with schema support itself.
First check how the table is currently partitioned, e.g. via the `partitionColumns` field of `DESCRIBE DETAIL`. If the output is `[]`, the table was created without partitions. In that case:
You cannot append with a new partitioning scheme unless you drop and recreate the table.
You need to either:
- Remove `.partitionBy("MonthKey")` when appending, or
- Drop and recreate the table with the desired partitioning.
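The decision above is plain bookkeeping once you know the table's current partition columns, which the notebook can read with something like `spark.sql("DESCRIBE DETAIL dbo.sales").collect()[0]["partitionColumns"]` (`dbo.sales` is a placeholder name). A small sketch of the logic:

```python
def partition_append_action(existing_parts, requested_parts):
    """Decide how to handle an append, given the table's current
    partitionColumns (from DESCRIBE DETAIL) and the columns passed
    to .partitionBy() on the append."""
    if list(existing_parts) == list(requested_parts):
        return "append as-is"
    if not existing_parts:
        # Table was created unpartitioned: a partitioned append will fail.
        return "remove .partitionBy() on append, or drop and recreate the table"
    return "match the existing partitioning, or drop and recreate the table"

# The failing scenario from this thread: unpartitioned table,
# appended with .partitionBy("MonthKey").
print(partition_append_action([], ["MonthKey"]))
```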
If you are still early in development and can afford to overwrite the table, the final recommendation to try is to rebuild it with the desired partitioning.
Please 'Kudos' and 'Accept as Solution' if this answered your query.