Hi everyone,
We are working with the Fabric preview, and today we are trying to import data from a DataFrame in a notebook (PySpark) into a table in a Lakehouse.
Here is the code we have used:
# Name of the target Lakehouse table
table_name = "dds_estimated"

# Write the DataFrame as a Delta table under the Lakehouse Tables folder
spark_df.write.mode("overwrite").format("delta").option("mergeSchema", "true").save(f"Tables/{table_name}")
print(f"Done loading {table_name}")
Then we ran into a problem with the table format and were advised to run the following command to change the table properties:
spark.sql(f'''
ALTER TABLE dds_estimated SET TBLPROPERTIES (
'delta.columnMapping.mode' = 'name',
'delta.minReaderVersion' = '2',
'delta.minWriterVersion' = '5'
)
''')
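In case it helps anyone answering: one workaround we are considering, instead of altering the table properties, is to rename any offending columns and write to a fresh table that never needs column mapping. This is only a rough sketch under the assumption above; the new table name and the renaming rule are placeholders, not something we have confirmed works in Fabric:

import re

# Sketch (assumption): replace characters Delta cannot store without column
# mapping with underscores, then write to a new table so the old table's
# protocol settings are not inherited.
clean_table_name = "dds_estimated_clean"  # hypothetical new table name
clean_df = spark_df
for old_name in spark_df.columns:
    new_name = re.sub(r"[ ,;{}()\n\t=]", "_", old_name)
    if new_name != old_name:
        clean_df = clean_df.withColumnRenamed(old_name, new_name)

clean_df.write.mode("overwrite").format("delta").save(f"Tables/{clean_table_name}")
print(f"Done loading {clean_table_name}")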
The notebook runs to completion, but nothing is imported into our table. It is still blank and shows the following notice:
Table uses column mapping which is not supported.
What should we do to import the DataFrame into a table in the Lakehouse?