Hi,
We have created a Lakehouse with schema support enabled, and then developed a notebook to save a PySpark DataFrame as a Delta table.
It should be possible; from the error, this looks like a partition-mismatch issue rather than a problem with schema support itself. Start by checking the partition columns of the existing table.
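A minimal sketch of that check, assuming a Fabric notebook where the `spark` session is predefined and `dbo.MyTable` stands in for your schema-qualified table name:

```python
# Inspect the existing Delta table's metadata.
# "dbo.MyTable" is a hypothetical placeholder for your table.
detail = spark.sql("DESCRIBE DETAIL dbo.MyTable")

# partitionColumns is a list of the columns the table is partitioned by;
# an empty list means the table was created without partitions.
print(detail.select("partitionColumns").first()[0])
```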
If the output is [], the table was created without partitions. In that case, you cannot append with a new partitioning scheme unless you drop and recreate the table.
You need to either:
remove .partitionBy("MonthKey") when appending, or
drop and recreate the table with the desired partitioning.
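Concretely, the two options might look like this (the DataFrame `df` and the table name `dbo.MyTable` are placeholders):

```python
# Option 1: append without repartitioning, so the write matches the
# existing (unpartitioned) table layout.
df.write.format("delta").mode("append").saveAsTable("dbo.MyTable")

# Option 2: drop the table and recreate it with the desired partitioning.
spark.sql("DROP TABLE IF EXISTS dbo.MyTable")
(df.write.format("delta")
   .partitionBy("MonthKey")
   .saveAsTable("dbo.MyTable"))
```

Note that option 2 discards the existing rows, so reload any data you need to keep before dropping the table.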
If you are still early in development and can afford to rebuild the table, the final recommendation to try is overwriting it with the desired partitioning.
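A sketch of that overwrite, assuming replacing the table's schema and layout is acceptable (names are placeholders):

```python
# Overwrite the table in place with the desired partitioning.
# overwriteSchema lets Delta replace the table's schema and
# partition layout along with its data.
(df.write.format("delta")
   .mode("overwrite")
   .option("overwriteSchema", "true")
   .partitionBy("MonthKey")
   .saveAsTable("dbo.MyTable"))
```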
Please 'Kudos' and 'Accept as Solution' if this answered your query.