Hi everyone,
We are working with the Fabric preview, and today we tried to import data from a DataFrame in a PySpark notebook into a table in our Lakehouse.
Here is the code we have used:
table_name = "dds_estimated"
spark_df.write.mode("overwrite").format("delta").option("mergeSchema", "true").save(f"Tables/{table_name}")
print(f"Done loading {table_name}")
We then hit a problem with the table format and were advised to run the following command to change the table properties:
spark.sql('''
ALTER TABLE dds_estimated SET TBLPROPERTIES (
    'delta.columnMapping.mode' = 'name',
    'delta.minReaderVersion' = '2',
    'delta.minWriterVersion' = '5'
)
''')
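In case it helps anyone reproduce this, here is a quick sketch for checking that the properties actually took effect (using the same table name as above, and assuming the notebook is attached to the Lakehouse):

# Show the Delta protocol versions and table properties (sketch).
spark.sql("DESCRIBE DETAIL dds_estimated") \
    .select("properties", "minReaderVersion", "minWriterVersion") \
    .show(truncate=False)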
Overall, the notebook runs to completion, but nothing is imported into our table; it is still blank and shows this notice:
Table uses column mapping which is not supported.
What should we do to import the DataFrame into a table in the Lakehouse?
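In case it matters, here is the rollback we are considering (a minimal sketch; we are assuming the blank table can simply be dropped and fully rewritten from spark_df, and that DROP TABLE works for a table created under Tables/):

table_name = "dds_estimated"

# Drop the table whose properties now require column mapping.
spark.sql(f"DROP TABLE IF EXISTS {table_name}")

# Rewrite the DataFrame as a plain Delta table, without setting
# 'delta.columnMapping.mode', so the default reader/writer versions apply.
spark_df.write.mode("overwrite").format("delta").save(f"Tables/{table_name}")
print(f"Done loading {table_name}")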